Question 1: Out-of-memory error in Flink 1.11.2
I am using RocksDB, parallelism 5, 1 TM with 5 slots, and 10 GB of TM memory. The job fails on startup with the error below. It had started successfully before, but after running for a while it also failed with an out-of-memory error, and now, when restarting from the previous offsets, it will not start at all.
2020-11-16 17:44:52
java.lang.OutOfMemoryError: Direct buffer memory. The direct out-of-memory error has occurred. This can mean two things: either job(s) require(s) a larger size of JVM direct memory or there is a direct memory leak. The direct memory can be allocated by user code or some of its dependencies. In this case 'taskmanager.memory.task.off-heap.size' configuration option should be increased. Flink framework and its dependencies also consume the direct memory, mostly for network communication. The most of network memory is managed by Flink and should not result in out-of-memory error. In certain special cases, in particular for jobs with high parallelism, the framework may require more direct memory which is not managed by Flink. In this case 'taskmanager.memory.framework.off-heap.size' configuration option should be increased. If the error persists then there is probably a direct memory leak in user code or some of its dependencies which has to be investigated and fixed. The task executor has to be shutdown...
    at java.nio.Bits.reserveMemory(Bits.java:658)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
    at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174)
    at sun.nio.ch.IOUtil.read(IOUtil.java:195)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at org.apache.flink.kafka011.shaded.org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:109)
    at org.apache.flink.kafka011.shaded.org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:101)
    at org.apache.flink.kafka011.shaded.org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:75)
    at org.apache.flink.kafka011.shaded.org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:203)
    at org.apache.flink.kafka011.shaded.org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:167)
    at org.apache.flink.kafka011.shaded.org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:381)
    at org.apache.flink.kafka011.shaded.org.apache.kafka.common.network.Selector.poll(Selector.java:326)
    at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:433)
    at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
    at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
    at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1096)
    at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.getRecordsFromKafka(KafkaConsumerThread.java:535)
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:264)
*From the volunteer-compiled Flink mailing list archive
Reference answer:
What deployment mode is this? Standalone? After the job failed the first time, when you reran it, were all TMs restarted, or were some of the previous TMs reused?
*From the volunteer-compiled Flink mailing list archive
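The two configuration options named in the error message can be raised in flink-conf.yaml. A minimal sketch, assuming the 10 GB TM from the question; the off-heap sizes are illustrative guesses, not values from the original thread:

    taskmanager.memory.process.size: 10g
    # Direct memory reserved for user code and its dependencies (e.g. the Kafka client):
    taskmanager.memory.task.off-heap.size: 512m
    # Direct memory used by the Flink framework itself (may matter for high parallelism):
    taskmanager.memory.framework.off-heap.size: 256m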
For more answers to this question, see:
https://developer.aliyun.com/ask/364559?spm=a2c6h.13066369.question.8.6ad26382re9j5Q
Question 2: Pod system memory usage in native K8s mode
In native K8s mode, for the JM and TM Pods that are created, I see in the code that the Flink process memory is assigned as the Pod's usable resources.
My understanding is that a Pod contains more than just the JVM process; there may be other memory usage, for example memory used by the Linux system.
So my question is: how much memory does the Pod's system itself take up?
*From the volunteer-compiled Flink mailing list archive
Reference answer:
I am not quite sure what you mean by how much memory the Pod's "system" takes up. A Pod is not a virtual machine; it is lightweight virtualization via a Docker container
that shares the kernel with the host, so it does not introduce extra memory overhead by itself.
As for the Pod's memory settings, you are right: the Pod limit is not equal to the JVM heap, because there is also off-heap memory and JVM overhead.
That is why the JVM parameters are not exactly equal to the Pod limit. The Pod limit is set from the maximum memory the TaskManager is allowed to use; for the details of
each memory component, see the community documentation [1].
[1].
https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup_tm.html
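Concretely, the relationship described above can be sketched in flink-conf.yaml (the size here is an illustrative assumption):

    # Total Flink process memory; in native K8s mode the Pod memory
    # request/limit is derived from this value.
    taskmanager.memory.process.size: 4g
    # Flink splits this total into JVM heap, managed memory, network buffers,
    # framework/task off-heap, metaspace and JVM overhead, which is why the
    # -Xmx passed to the JVM is smaller than the Pod limit.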
For more answers to this question, see:
https://developer.aliyun.com/ask/364557?spm=a2c6h.13066369.question.7.6ad26382lAm5hX
Question 3: In Blink 1.11, is there no way to carry the rowtime through a create view?
Via the Table API,

    // declare an additional logical field as an event time attribute
    Table table = tEnv.fromDataStream(stream, $("user_name"), $("data"), $("user_action_time").rowtime());

the event-time attribute can be passed downstream. If create view is used, how can this time attribute be carried along? If it is not carried forward, it might...
Is there any way to do this?
*From the volunteer-compiled Flink mailing list archive
Reference answer:
Do you mean the createTemporaryView method? Fields can be specified on that method as well; see [1] for an example. The implementation of createTemporaryView also calls the fromDataStream method indirectly [2].
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/common.html#create-a-view-from-a-datastream-or-dataset
[2] https://github.com/apache/flink/blob/c24185d1c2853d5c56eed6c40e5960d2398474ca/flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/api/bridge/java/internal/StreamTableEnvironmentImpl.java#L253
*From the volunteer-compiled Flink mailing list archive
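A minimal sketch of the field-specifying variant of createTemporaryView from [1]; the view name is hypothetical, and tEnv/stream are assumed to be the StreamTableEnvironment and DataStream from the question:

    import org.apache.flink.table.api.Table;
    import static org.apache.flink.table.api.Expressions.$;

    // Register the stream as a view, declaring user_action_time as the rowtime
    // attribute, so queries against the view keep the event-time attribute.
    tEnv.createTemporaryView(
        "user_actions",
        stream,
        $("user_name"),
        $("data"),
        $("user_action_time").rowtime());

    Table result = tEnv.sqlQuery("SELECT user_name, user_action_time FROM user_actions");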
For more answers to this question, see:
https://developer.aliyun.com/ask/364556?spm=a2c6h.13066369.question.10.6ad26382BOYx9J
Question 4: Flink SQL cannot use !=
I want to use the not-equal operator in a WHERE clause, but it throws an error. Isn't != the not-equal operator in Flink SQL?

    [ERROR] Could not execute SQL statement. Reason:
    org.apache.calcite.runtime.CalciteException: Bang equal '!=' is not allowed under the current SQL conformance level

*From the volunteer-compiled Flink mailing list archive
Reference answer:
Yes, <> is the form recommended by the SQL standard.
*From the volunteer-compiled Flink mailing list archive
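For example, a sketch with a hypothetical table and column, rewriting the predicate with the standard operator:

    // '<>' is accepted where '!=' is rejected by the default conformance level.
    Table filtered = tEnv.sqlQuery("SELECT * FROM orders WHERE status <> 'CANCELLED'");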
For more answers to this question, see:
https://developer.aliyun.com/ask/364555?spm=a2c6h.13066369.question.11.6ad26382ssN73H
Question 5: How to obtain a BatchTableEnvironment object under the blink planner in Flink 1.11.2
Hi!
I am using Flink 1.11.2. The official documentation says the blink batch execution environment is obtained as follows:

    // ******************
    // BLINK BATCH QUERY
    // ******************
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    EnvironmentSettings bbSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build();
    TableEnvironment bbTableEnv = TableEnvironment.create(bbSettings);

Besides the method above, is there any other way to obtain the blink batch execution environment? What I actually need is a BatchTableEnvironment; how do I get one?
*From the volunteer-compiled Flink mailing list archive
Reference answer:
Do you mean the BatchTableEnvironment object?
*From the volunteer-compiled Flink mailing list archive
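Note that, to my knowledge, the blink planner's batch mode in Flink 1.11 only exposes the unified TableEnvironment shown above; BatchTableEnvironment is the legacy-planner bridge to the DataSet API and is created from an ExecutionEnvironment. A minimal sketch:

    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.table.api.bridge.java.BatchTableEnvironment;

    // Legacy (non-blink) planner only: bridges the Table API and the DataSet API.
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    BatchTableEnvironment bTableEnv = BatchTableEnvironment.create(env);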
For more answers to this question, see:
https://developer.aliyun.com/ask/364553?spm=a2c6h.13066369.question.12.6ad26382FZqgkr