Question 1: Problems using yarn.containers.vcores?
I use `-yD yarn.containers.vcores=4` to set the number of CPU cores for my Flink job's containers. I can see the parameter take effect in the job configuration, but the number of vcores actually requested still matches the number of slots 1:1. When using yarn.containers.vcores, do I also need to enable YARN's CPU scheduling? *From the volunteer-compiled Flink mailing-list archive
Reference answer:
Which Flink version are you on?
Is the deployment mode per-job or session?
Where exactly did you see that the configuration parameter "took effect"?
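For context: with YARN's default CapacityScheduler, only memory is considered when allocating containers, so vcore requests are not enforced unless the DominantResourceCalculator is enabled. A sketch of the relevant fragment of Hadoop's capacity-scheduler.xml (an illustrative excerpt, not a complete file):

```xml
<!-- capacity-scheduler.xml: make the scheduler CPU-aware so that
     container vcore requests (e.g. from yarn.containers.vcores)
     are actually enforced instead of defaulting to 1 vcore. -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```

Note that even with correct requests, some YARN UI views report vcores from the scheduler's perspective, which can differ from what was requested.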
For more answers to this question, see the original post: https://developer.aliyun.com/ask/359549?spm=a2c6h.13262185.0.0.133c39c0Clkic3
Question 2: How to join a dimension table with async I/O in Flink SQL?
How do I use async I/O to join a dimension table in Flink SQL? Is this covered in the official documentation? *From the volunteer-compiled Flink mailing-list archive
Reference answer:
Define a source table.
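To expand on that: in Flink SQL a dimension-table join is written as a lookup join using `FOR SYSTEM_TIME AS OF`, and whether lookups run asynchronously depends on the connector (for example, the HBase connector exposes a `lookup.async` option; the JDBC connector's lookups are synchronous). A sketch under those assumptions, with made-up table and column names:

```sql
-- Dimension table backed by a lookup-capable connector
-- (table, column, and connection values are hypothetical).
CREATE TABLE dim_user (
  user_id BIGINT,
  user_name STRING
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://localhost:3306/db',
  'table-name' = 'dim_user'
);

-- Lookup join: each row of the fact stream probes the
-- dimension table as of the row's processing time.
SELECT o.order_id, d.user_name
FROM orders AS o
JOIN dim_user FOR SYSTEM_TIME AS OF o.proc_time AS d
  ON o.user_id = d.user_id;
```

This assumes the `orders` table declares a processing-time attribute (`proc_time AS PROCTIME()`).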
For more answers to this question, see the original post: https://developer.aliyun.com/ask/359626?spm=a2c6h.13262185.0.0.7c83253fTqut4U
Question 3: Re: [DISCUSS] Deprecation and removal of the legacy SQL planner
Last call for feedback on this topic.
It seems everyone agrees to finally complete FLIP-32. Since FLIP-32 has been accepted for a very long time, I think we don't need another voting thread for executing the last implementation step. Please let me know if you think differently.
I will start deprecating the affected classes and interfaces at the beginning of next week. *From the volunteer-compiled Flink mailing-list archive
For more answers to this question, see the original post: https://developer.aliyun.com/ask/359628?spm=a2c6h.13262185.0.0.7c83253fTqut4U
Question 4: How to dynamically configure the log file path of the flink run client?
Hi all.
When submitting a Flink job via flink run, the log files produced by the Flink client go to $FLINK_HOME/log by default.
I need the logs from each submission to go to a different directory. How can the log file path be specified dynamically for each flink run?
Notes:
- Setting the env.log.dir option works when placed in flink-conf.yaml, but specifying it dynamically via -yD or -D doesn't seem to work.
- Flink version: 1.10 *From the volunteer-compiled Flink mailing-list archive
Reference answer:
Exporting FLINK_LOG_DIR before running the flink run command should do it.
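A sketch of that approach (the main class and jar path are made up; the per-job directory naming is just one possible convention):

```shell
# Point the Flink client's log directory at a per-job location
# before submitting; the launch scripts read FLINK_LOG_DIR at startup.
export FLINK_LOG_DIR=/var/log/flink/job-$(date +%Y%m%d-%H%M%S)
mkdir -p "$FLINK_LOG_DIR"

# Submit as usual; client logs now land under $FLINK_LOG_DIR.
flink run -c com.example.MyJob ./my-job.jar
```

Since the variable is read per invocation, each submission can get its own directory without touching flink-conf.yaml.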
For more answers to this question, see the original post: https://developer.aliyun.com/ask/359629?spm=a2c6h.13262185.0.0.7c83253fTqut4U
Question 5: Some issues using Flink savepoints?
java.lang.Exception: Could not materialize checkpoint 2404 for operator KeyedProcess (21/48).
    at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.handleExecutionException(StreamTask.java:1100)
    at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:1042)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: Key group 0 is not in KeyGroupRange{startKeyGroup=54, endKeyGroup=55}.
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:450)
    at org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:47)
    at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:1011)
    ... 3 more
Caused by: java.lang.IllegalArgumentException: Key group 0 is not in KeyGroupRange{startKeyGroup=54, endKeyGroup=55}.
    at org.apache.flink.runtime.state.KeyGroupRangeOffsets.computeKeyGroupIndex(KeyGroupRangeOffsets.java:142)
    at org.apache.flink.runtime.state.KeyGroupRangeOffsets.setKeyGroupOffset(KeyGroupRangeOffsets.java:104)
    at org.apache.flink.contrib.streaming.state.snapshot.RocksFullSnapshotStrategy$SnapshotAsynchronousPartCallable.writeKVStateData(RocksFullSnapshotStrategy.java:314)
    at org.apache.flink.contrib.streaming.state.snapshot.RocksFullSnapshotStrategy$SnapshotAsynchronousPartCallable.writeSnapshotToOutputStream(RocksFullSnapshotStrategy.java:256)
    at org.apache.flink.contrib.streaming.state.snapshot.RocksFullSnapshotStrategy$SnapshotAsynchronousPartCallable.callInternal(RocksFullSnapshotStrategy.java:221)
    at org.apache.flink.contrib.streaming.state.snapshot.RocksFullSnapshotStrategy$SnapshotAsynchronousPartCallable.callInternal(RocksFullSnapshotStrategy.java:174)
    at org.apache.flink.runtime.state.AsyncSnapshotCallable.call(AsyncSnapshotCallable.java:75)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:447)
    ... 5 more
*From the volunteer-compiled Flink mailing-list archive
Reference answer:
For keyed state, the same key must always land in the same key group. If a particular key is a hotspot, you can add a map before the keyBy that appends a suffix (a salt) to the key, then keyBy on the salted key, and finally aggregate the per-salt partial results after processing.
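The salting pattern described in the answer can be sketched as plain functions (Python here purely for illustration; in a real Flink job these steps correspond to a map before keyBy, a keyed aggregation on the salted key, and a second aggregation that merges the per-salt partials):

```python
import random
from collections import defaultdict

NUM_SALTS = 4  # how many sub-keys each hot key is spread across


def add_salt(record):
    """Map step before keyBy: append a random suffix to the key,
    spreading a hot key over NUM_SALTS distinct keys (and hence
    over multiple key groups / parallel subtasks)."""
    key, value = record
    return (f"{key}#{random.randrange(NUM_SALTS)}", value)


def partial_aggregate(records):
    """First aggregation, keyed by the salted key."""
    sums = defaultdict(int)
    for key, value in records:
        sums[key] += value
    return dict(sums)


def final_aggregate(partials):
    """Second aggregation: strip the salt and merge the partials
    back to one result per original key."""
    sums = defaultdict(int)
    for salted_key, value in partials.items():
        original_key = salted_key.rsplit("#", 1)[0]
        sums[original_key] += value
    return dict(sums)
```

For example, ten `("hot", 1)` records are spread across up to four salted keys by `add_salt`, yet `final_aggregate(partial_aggregate(...))` still yields a total of 10 for `"hot"`.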
For more answers to this question, see the original post: https://developer.aliyun.com/ask/359633?spm=a2c6h.13262185.0.0.7c83253fTqut4U