
Question about Flink 1.6 memory configuration

I am currently tuning the memory configuration of a Flink 1.6 task. Reading the source code, I see that Flink first reserves a cut-off from the TaskManager memory: cut-off memory = Math.max(600, containerized.heap-cutoff-ratio * TaskManager memory), where containerized.heap-cutoff-ratio defaults to 0.25. For example, with 4 GB of TaskManager memory, the cut-off is 1 GB.
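The formula above can be sketched as a small standalone program (method and class names here are illustrative, not Flink's actual API; in Flink 1.6 the 600 MB floor corresponds to the `containerized.heap-cutoff-min` option):

```java
// Minimal sketch of the Flink 1.6 container cut-off calculation.
// Assumption: mirrors the max(min-cutoff, ratio * total) logic the
// question quotes from the source; names are hypothetical.
public class CutoffDemo {

    /** Cut-off never drops below minCutoffMB, otherwise it is ratio * total. */
    static long calculateCutoffMB(long totalMB, double ratio, long minCutoffMB) {
        return Math.max(minCutoffMB, (long) (totalMB * ratio));
    }

    public static void main(String[] args) {
        long tmMB = 4096; // 4 GB TaskManager memory
        // Default ratio 0.25 -> 1024 MB (the 1 GB from the question)
        System.out.println(calculateCutoffMB(tmMB, 0.25, 600));
        // Proposed ratio 0.15 -> 614 MB, still above the 600 MB floor
        System.out.println(calculateCutoffMB(tmMB, 0.15, 600));
    }
}
```

Note that for small containers the 600 MB floor dominates: with a 2 GB TaskManager, both 0.25 and 0.15 yield a cut-off of 600 MB, so lowering the ratio changes nothing there.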

However, looking at the TaskManager's gc.log, I see that metaspace uses only about 60 MB. The cut-off therefore seems too large to me. Can it be reduced, e.g. by setting containerized.heap-cutoff-ratio to 0.15? Would that configuration cause any problems?
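For reference, the change being asked about would look like this in flink-conf.yaml (option names as documented for Flink 1.6; values are the ones discussed in the question):

```yaml
# flink-conf.yaml — lower the containerized cut-off ratio from the 0.25 default
containerized.heap-cutoff-ratio: 0.15
# The floor still applies; the cut-off never drops below this value (MB)
containerized.heap-cutoff-min: 600
```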

I am looking forward to your reply. *From the volunteer-compiled Flink mailing list archive

塔塔塔塔塔塔 2021-12-02 17:52:24 1074 0
1 answer
  • The container cut-off accounts not only for metaspace, but also for native memory footprint such as thread stacks, the code cache, and compressed class space. If you run streaming jobs with the RocksDB state backend, it also covers RocksDB's memory usage.

    The consequence of a smaller cut-off depends on your environment and workload. For standalone clusters, the cut-off has no effect at all. In containerized environments, depending on your Yarn/Mesos configuration, your container may or may not get killed for exceeding the container memory limit.

    Thank you~ *From the volunteer-compiled Flink mailing list archive

    2021-12-02 18:13:39