I am currently tuning the TaskManager memory configuration for Flink 1.6. Reading the source code, I see that Flink first reserves the cut-off memory: cut-off memory = Math.max(600, containerized.heap-cutoff-ratio * TaskManager memory), where containerized.heap-cutoff-ratio defaults to 0.25 and 600 MB is the default of containerized.heap-cutoff-min. For example, with 4 GB of TaskManager memory, the cut-off is 1 GB.
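For concreteness, here is a minimal, self-contained sketch of that formula (illustrative code, not the actual Flink source; the class and method names are made up, but the 0.25 and 600 MB defaults correspond to containerized.heap-cutoff-ratio and containerized.heap-cutoff-min):

    // Sketch of the cut-off calculation described above (hypothetical names).
    public class CutoffDemo {
        static long cutoffMB(long taskManagerMemoryMB, double cutoffRatio, long minCutoffMB) {
            // cut-off = max(containerized.heap-cutoff-min, ratio * TaskManager memory)
            return Math.max(minCutoffMB, (long) (taskManagerMemoryMB * cutoffRatio));
        }

        public static void main(String[] args) {
            long tmMB = 4096;                                      // 4 GB TaskManager container
            System.out.println(cutoffMB(tmMB, 0.25, 600));         // 1024 MB, i.e. 1 GB
            System.out.println(cutoffMB(tmMB, 0.15, 600));         // 614 MB with the proposed ratio
        }
    }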
However, after enabling the TaskManager's gc.log, I found that metaspace only uses about 60 MB. The cut-off memory therefore feels too large to me. Can it safely be reduced, for example by setting containerized.heap-cutoff-ratio to 0.15 (see the snippet below)? Would that configuration cause any problems?
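Concretely, the change I am asking about would look like this in flink-conf.yaml (note that the 600 MB floor from containerized.heap-cutoff-min still applies unless it is lowered as well):

    # proposed change: lower the cut-off ratio from the 0.25 default
    containerized.heap-cutoff-ratio: 0.15
    # the minimum cut-off keeps its default unless overridden:
    # containerized.heap-cutoff-min: 600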
I am looking forward to your reply.
The container cut-off accounts not only for metaspace, but also for other native memory footprint such as thread stacks, the code cache, and the compressed class space. If you run streaming jobs with the RocksDB state backend, it also covers RocksDB's memory usage.
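If you want to measure those non-heap pieces rather than estimate them from the GC log, JVM Native Memory Tracking gives a per-category breakdown; a quick check, assuming you can restart the TaskManager JVM with NMT enabled (it adds a small overhead):

    # add to the TaskManager JVM options, e.g. via flink-conf.yaml:
    #   env.java.opts.taskmanager: -XX:NativeMemoryTracking=summary
    # then, on the TaskManager host:
    jcmd <taskmanager-pid> VM.native_memory summary
    # the summary lists Java Heap, Class (metaspace + compressed class
    # space), Thread (stacks), Code (code cache), GC, and more.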
The consequence of a smaller cut-off depends on your environment and workload. On standalone clusters the cut-off has no effect at all. In containerized environments, depending on the YARN/Mesos configuration, your container may or may not be killed for exceeding its memory limit.
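For YARN in particular, whether the container actually gets killed is governed by the NodeManager's memory checks; these are standard YARN properties (shown here with their usual defaults), not Flink options:

    <!-- yarn-site.xml: NodeManager enforcement of container memory -->
    <property>
      <name>yarn.nodemanager.pmem-check-enabled</name>
      <value>true</value>  <!-- kill containers over their physical memory limit -->
    </property>
    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>true</value>  <!-- kill containers over their virtual memory limit -->
    </property>

With these checks enabled, a cut-off that is too small for the JVM's native allocations can push the container past its limit and get it killed.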
Thank you~