I currently have a RedPanda Connect configuration bundle as follows:
- A main config file that does not change at runtime.
- Resource files containing deployment-specific information that may be modified at runtime.
- A template that updates the config based on values in the resource files.
This is run through `rpk connect run` with the `-w` flag so it monitors the files for changes.
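For context, the runtime-editable part is shaped roughly like the sketch below (the label and mapping are placeholders rather than my real values), and I launch everything with something along the lines of `rpk connect run -w -r './resources/*.yaml' ./connect.yaml`, assuming `-r` is the flag that loads the resource files as in upstream Benthos:

```yaml
# resources/deployment.yaml -- deployment-specific, may be edited at runtime
processor_resources:
  - label: deployment_overrides
    mapping: |
      # placeholder mapping: stamp each message with deployment details
      root = this
      root.environment = "staging"

# The main config then pulls this in by label:
# pipeline:
#   processors:
#     - resource: deployment_overrides
```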
The input source is a bunch of Kinesis streams and I'm using the `aws_kinesis` input connector for this. The output is a bunch of Kafka (Redpanda) topics and I'm using the `kafka` output connector for this. The topic and message key (used for partitioning) are based on data in the message.
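The input/output halves of the main config are shaped roughly like this (stream names, broker address, and the JSON fields used for the topic and key are placeholders, not my real values):

```yaml
# connect.yaml -- main config, static at runtime
input:
  aws_kinesis:
    streams: [ example-stream-a, example-stream-b ]  # placeholder stream names
    region: us-east-1

pipeline:
  processors:
    - resource: deployment_overrides  # the processor defined in the resource file

output:
  kafka:
    addresses: [ "redpanda-0:9092" ]  # placeholder broker address
    # topic and partition key are taken from fields of the message itself
    topic: 'events-${! json("event_type") }'
    key: '${! json("account_id") }'
```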
The issue is that I'm currently running on a single host as part of my proof of concept. This isn't going to scale to the volume I need to manage, so I need to move it to a cluster where messages from Kinesis are consumed and processed across multiple hosts and then written to the Kafka topics.
I understand how to move a single config file into Kubernetes, and also how to move multiple config files for different hosts. But how do I handle the case above, where the config may change at runtime and I rely on the `-w` flag to update the running connector when that happens? Additionally, how do I make sure that all hosts read from the same streams so the load is split between them?
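For reference, the part I do understand (a single static config) is roughly the standard ConfigMap-plus-Deployment pattern; the manifest below is only a placeholder sketch (image name, args, and paths are my guesses, not a tested setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: connect
spec:
  replicas: 1            # the part that needs to scale out
  selector:
    matchLabels:
      app: connect
  template:
    metadata:
      labels:
        app: connect
    spec:
      containers:
        - name: connect
          # image and args are a guess from the docs, not verified
          image: docker.redpanda.com/redpandadata/connect:latest
          args: ["run", "-w", "/etc/connect/connect.yaml"]
          volumeMounts:
            - name: config
              mountPath: /etc/connect
      volumes:
        - name: config
          configMap:
            name: connect-config   # holds connect.yaml
```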