How to go from Redpanda standalone to a Kubernetes cluster

I currently have a Redpanda Connect configuration bundle as follows (a minimal sketch follows the list):

  • A main config file that does not change at runtime.
  • Resource files that contain deployment-specific information and may be modified at runtime.
  • A template that updates the config based on values in the resource files.
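
For concreteness, the main config looks roughly like this (the labels and file names here are placeholders, not my real values):

```yaml
# config/main.yaml -- static at runtime; it only points at named resources
input:
  resource: kinesis_in   # defined in a resource file

output:
  resource: kafka_out    # defined in a resource file
```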

This is run through rpk connect run with the -w flag, so it watches the files for changes and reloads the running pipeline when they change.
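
Concretely, the invocation looks something like this (the paths and glob patterns are illustrative):

```sh
rpk connect run -w \
  -r "resources/*.yaml" \
  -t "templates/*.yaml" \
  config/main.yaml
```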

The input is a set of Kinesis streams consumed with the aws_kinesis input connector. The output is a set of Kafka (Redpanda) topics written with the kafka output connector; the topic and the message key (used for partitioning) are derived from data in each message.
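
The deployment-specific resource file looks roughly like this (the stream names, checkpoint table, broker address, and JSON field names are all placeholders; the topic and key come from the message via interpolation):

```yaml
# resources/deployment.yaml -- deployment-specific, may change at runtime
input_resources:
  - label: kinesis_in
    aws_kinesis:
      streams: [ example-stream-a, example-stream-b ]  # placeholder streams
      dynamodb:
        table: example-checkpoints   # placeholder checkpoint table
        create: true

output_resources:
  - label: kafka_out
    kafka:
      addresses: [ "redpanda-0.example:9092" ]  # placeholder broker
      topic: ${! json("topic") }           # topic derived from message data
      key: ${! json("partition_key") }     # partition key from message data
```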

The issue is that I'm currently running on a single host as part of a proof of concept. This won't scale to the volume I need to handle, so I need to move to a cluster where messages from Kinesis are consumed and processed across multiple hosts and then written to the Kafka topics.

I understand how to move a single config file into Kubernetes, and also how to deploy different config files to different hosts. But how do I handle the case above, where the config may change at runtime and I rely on the -w flag to update the running connector when that happens? And how do I make sure that all hosts read from the same streams so that the load is split between them?
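
For reference, by moving a single config file into Kubernetes I mean something like mounting it from a ConfigMap (all names and the image below are placeholders); my question is whether this pattern still works when the resource files change at runtime and I'm counting on -w plus the kubelet's eventual sync of ConfigMap volumes to trigger reloads:

```yaml
# Deployment excerpt -- placeholder names throughout
apiVersion: apps/v1
kind: Deployment
metadata:
  name: connect
spec:
  replicas: 1
  selector:
    matchLabels: { app: connect }
  template:
    metadata:
      labels: { app: connect }
    spec:
      containers:
        - name: connect
          image: docker.redpanda.com/redpandadata/connect  # assumed image
          args: ["run", "-w", "-r", "/resources/*.yaml", "/config/main.yaml"]
          volumeMounts:
            - { name: config, mountPath: /config }
            - { name: resources, mountPath: /resources }
      volumes:
        - name: config
          configMap: { name: connect-main-config }
        - name: resources
          configMap: { name: connect-resources }
```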
