Hi Experts, I have my Flink application running on Kubernetes, initially with 1 JobManager and 2 TaskManagers.
We also have a custom operator that watches a CRD. When the CRD's replica count changes, the operator patches the Flink JobManager deployment's parallelism and max parallelism to match the replicas from the CRD (parallelism can be configured via environment variables for our application). This causes the JobManager to restart and hence start a new Flink job. The Kafka consumer group does not change, however, so the job continues from the offsets where it left off.
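For illustration, the operator's patch might inject parallelism through environment variables on the JobManager deployment roughly like the fragment below. The variable names (`JOB_PARALLELISM`, `JOB_MAX_PARALLELISM`) are hypothetical placeholders; the actual names depend on how the application reads its configuration.

```yaml
# Hypothetical fragment of the patched JobManager deployment.
# Variable names are illustrative; the application reads them at startup.
spec:
  template:
    spec:
      containers:
        - name: jobmanager
          env:
            - name: JOB_PARALLELISM        # derived from the CRD's replica count
              value: "4"
            - name: JOB_MAX_PARALLELISM
              value: "128"
```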
In addition, the operator updates the TaskManager deployment's replica count, which adjusts the number of pods. On scale-up, the existing TaskManager pods are not killed; new TaskManager pods are created alongside them.
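For context, when the source parallelism changes, Kafka partitions are redistributed across the source subtasks. The sketch below is a simplified round-robin assignment, not Flink's exact assignment algorithm; it only illustrates that when the partition count is not a multiple of the parallelism, some subtasks end up with more partitions than others.

```python
# Simplified illustration (NOT Flink's exact algorithm): round-robin
# assignment of Kafka partitions to source subtasks by partition index.
def assign_partitions(num_partitions: int, parallelism: int) -> dict[int, list[int]]:
    """Map each subtask index to the list of partitions it consumes."""
    assignment: dict[int, list[int]] = {s: [] for s in range(parallelism)}
    for partition in range(num_partitions):
        assignment[partition % parallelism].append(partition)
    return assignment

# With 6 partitions and parallelism 2, each subtask reads 3 partitions.
print(assign_partitions(6, 2))  # {0: [0, 2, 4], 1: [1, 3, 5]}
# After scaling to parallelism 4, subtasks 0 and 1 read 2 partitions
# each while subtasks 2 and 3 read only 1, so per-subtask load is uneven.
print(assign_partitions(6, 4))  # {0: [0, 4], 1: [1, 5], 2: [2], 3: [3]}
```

Uneven partition counts per subtask, combined with subtasks landing on differently-performing TaskManagers, can make per-partition consumption rates diverge.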
After scaling, we observed a skew in the partition offsets consumed: some partitions have huge lags while others have small lags (observed from Burrow). This is also corroborated by metrics in the Flink UI, which show that throughput differs across slots.
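The skew above can be quantified per partition as the difference between the log end offset and the committed offset, which is essentially what Burrow reports. A minimal sketch with made-up offset numbers:

```python
# Hedged sketch: per-partition consumer lag, as a tool like Burrow
# would surface it. All offset values below are hypothetical examples.
def partition_lags(end_offsets: dict[int, int], committed: dict[int, int]) -> dict[int, int]:
    """Lag per partition = log end offset - committed consumer offset."""
    return {p: end_offsets[p] - committed.get(p, 0) for p in end_offsets}

end = {0: 10_000, 1: 10_000, 2: 10_000, 3: 10_000}
acked = {0: 9_900, 1: 9_950, 2: 1_200, 3: 1_500}  # made-up committed offsets
print(partition_lags(end, acked))  # partitions 2 and 3 lag far behind 0 and 1
```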
Any clue why this is the case?

[From the volunteer-compiled Flink mailing list archive]
I have a few more questions regarding your issue.
I suspect the performance difference might be an outcome of warm-up effects. E.g., the existing TMs might already have some files localized, or some memory buffers already promoted to the JVM tenured generation, while the new TMs have not.