Flink Error Troubleshooting: How to Fix the Error When Writing Aggregated Data to a Kafka Table with SQL

Summary: Apache Flink is an open-source stream-processing framework developed by the Apache Software Foundation; its core is a distributed streaming dataflow engine written in Java and Scala. This collection provides resources on Apache Flink technology, usage tips, and best practices.

Question 1: Kerberos ticket expiry when Flink accesses Kafka and Hive

Is there any way to deal with Kerberos credential expiry when Flink accesses Kafka and Hive? *From the volunteer-curated Flink mailing-list archive



Reference answer:

If you are hitting Kerberos expiry, you are most likely authenticating with a ticket cache rather than a keytab file. A ticket cache expires after its lifetime, whereas a keytab allows Flink to re-login automatically. *From the volunteer-curated Flink mailing-list archive
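A hedged sketch of the keytab-based setup in flink-conf.yaml (the keytab path and principal below are placeholders, not values from the original thread):

```yaml
# Authenticate with a keytab so Flink can re-login on its own,
# instead of relying on a ticket cache that expires.
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /path/to/user.keytab   # placeholder path
security.kerberos.login.principal: user@EXAMPLE.COM    # placeholder principal
security.kerberos.login.contexts: Client,KafkaClient
```

With this in place, the Kafka and Hive/ZooKeeper clients pick up the keytab-based login contexts rather than the expiring ticket cache.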



For more answers to this question, see:

https://developer.aliyun.com/ask/371487?spm=a2c6h.13066369.question.28.6ad26382xTCGZ5



Question 2: Flink 1.11 streaming writes to a 5-minute Hive table

Symptom:

CREATE TABLE test.xxx_5min (
    ......
) PARTITIONED BY (dt string, hm string) STORED AS orc TBLPROPERTIES (
    'sink.partition-commit.trigger'='process-time',
    'sink.partition-commit.delay'='5 min',
    'sink.partition-commit.policy.kind'='metastore,success-file',
    'sink.rolling-policy.file-size'='128MB',
    'sink.rolling-policy.check-interval'='30s',
    'sink.rolling-policy.rollover-interval'='5min'
);

The partition values are dt = FROM_UNIXTIME((UNIX_TIMESTAMP()/300 * 300), 'yyyyMMdd') and hm = FROM_UNIXTIME((UNIX_TIMESTAMP()/300 * 300), 'HHmm'), so one partition is produced every 5 minutes. Checkpoint interval: 30s.

Questions:
1. Why does Flink 1.11 streaming produce a new file every minute, with file sizes nowhere near 128MB? If 'sink.rolling-policy.rollover-interval'='5min' (roll every 5 minutes) and the 128MB rolling size were in effect, files this small should not be produced. Why is the file size not controlled as expected?
2. How can the small-file problem be solved? Any good approaches?
3. Why is the _SUCCESS marker file reported late? For the 12:30 partition of a 5-minute table, the partition should in theory be committed around 12:35.

*From the volunteer-curated Flink mailing-list archive



Reference answer:

Hi,

1. A checkpoint forces a file roll, so with a 30s checkpoint interval files roll far more often than the rolling policy alone would dictate.
2. The simplest approach currently is to increase the checkpoint interval; another idea is to trigger a Hive compaction at partition-commit time.
3. The _SUCCESS file generation depends on the checkpoint interval, so there is some delay.

*From the volunteer-curated Flink mailing-list archive
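As a sketch of the first suggestion, the checkpoint interval can be raised so checkpoints force file rolls less often. One assumed way to set it is a flink-conf.yaml fragment like the following (the 5-minute value is an illustrative example, not a recommendation from the answer):

```yaml
# Larger checkpoint interval -> fewer forced file rolls -> fewer small files.
# Trade-off: partition commits and _SUCCESS files also appear less frequently.
execution.checkpointing.interval: 5 min
```

The same interval can alternatively be set per job in code via StreamExecutionEnvironment#enableCheckpointing.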



For more answers to this question, see:

https://developer.aliyun.com/ask/371488?spm=a2c6h.13066369.question.29.6ad26382ER6VgF



Question 3: Error writing aggregated data to a Kafka table with SQL

As shown below, I want to write an aggregation result directly to Kafka with SQL, on version 1.11. Is there any way to solve this, or is converting to a DataStream the only option?

Thanks.

Exception in thread "main" org.apache.flink.table.api.TableException: Table sink 'default_catalog.default_database.mvp_rtdwb_user_business' doesn't support consuming update changes which is produced by node GroupAggregate(groupBy=[dt, user_id], select=[dt, user_id, SUM($f2) AS text_feed_count, SUM($f3) AS picture_feed_count, SUM($f4) AS be_comment_forward_user_count, SUM($f5) AS share_link_count, SUM($f6) AS share_music_count, SUM($f7) AS share_video_count, SUM($f8) AS follow_count, SUM($f9) AS direct_post_count, SUM($f10) AS comment_post_count, SUM($f11) AS comment_count, SUM($f12) AS fans_count, MAX(event_time) AS event_time]) *From the volunteer-curated Flink mailing-list archive



Reference answer:

You can write to Kafka with the Canal or Debezium format; those formats support update and delete messages. *From the volunteer-curated Flink mailing-list archive
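For illustration, a hedged sketch of the suggested approach: declare the Kafka sink table with a CDC format so update/delete changes can be encoded. The column list is abbreviated and the topic and broker address are placeholders; note that serialization support for these formats landed in Flink 1.12, which also added the 'upsert-kafka' connector as another common solution for aggregated results.

```sql
-- Hypothetical sink table; topic and broker address are placeholders.
CREATE TABLE mvp_rtdwb_user_business (
  dt STRING,
  user_id BIGINT,
  text_feed_count BIGINT,
  event_time TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'user_business',                       -- placeholder topic
  'properties.bootstrap.servers' = 'broker:9092',  -- placeholder address
  'format' = 'debezium-json'   -- or 'canal-json'; encodes update/delete changes
);
```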



For more answers to this question, see:

https://developer.aliyun.com/ask/371489?spm=a2c6h.13066369.question.28.6ad26382qHVrtv



问题四:使用datagen connector生成无界数据,一秒时间窗口的聚合操作,一直没有数据输出打印

The following program runs, but nothing is ever printed to the console.

1. Program:

package kafka;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class DataGenTest {

    public static void main(String[] args) {
        StreamExecutionEnvironment bsEnv = StreamExecutionEnvironment.getExecutionEnvironment();
        EnvironmentSettings bsSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
        StreamTableEnvironment bsTableEnv = StreamTableEnvironment.create(bsEnv, bsSettings);

        String sourceTableDDL = "CREATE TABLE datagen ( "
                + " f_random INT, "
                + " f_random_str STRING, "
                + " ts AS localtimestamp, "
                + " WATERMARK FOR ts AS ts "
                + ") WITH ( "
                + " 'connector' = 'datagen', "
                + " 'rows-per-second'='20', "
                + " 'fields.f_random.min'='1', "
                + " 'fields.f_random.max'='10', "
                + " 'fields.f_random_str.length'='10' "
                + ")";

        bsTableEnv.executeSql(sourceTableDDL);

        bsTableEnv.executeSql("SELECT f_random, count(1) "
                + "FROM datagen "
                + "GROUP BY TUMBLE(ts, INTERVAL '1' second), f_random").print();
    }
}

2. Console output:

log4j:WARN No appenders could be found for logger (org.apache.flink.table.module.ModuleManager).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
+-------------+----------------------+
|    f_random |               EXPR$1 |
+-------------+----------------------+

*From the volunteer-curated Flink mailing-list archive



Reference answer:

The TableResult.print() method currently only supports exactly-once semantics, so in streaming mode it only works when checkpointing is enabled. Configure checkpointing and try again. At-least-once support should arrive in 1.12, after which checkpointing will no longer be required for print(). *From the volunteer-curated Flink mailing-list archive
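Following the answer above, a minimal sketch of the fix is to enable checkpointing on the execution environment before running the query (using the variable name from the question's program; the 1000 ms interval is an arbitrary example):

```java
// Enable checkpointing so TableResult.print() (exactly-once only in Flink 1.11)
// can emit window results in streaming mode.
bsEnv.enableCheckpointing(1000); // checkpoint interval in milliseconds; example value
```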



For more answers to this question, see:

https://developer.aliyun.com/ask/371490?spm=a2c6h.13066369.question.29.6ad26382YCtpeE



Question 5: Flink 1.11.1 startup problem

First of all, submitting Flink 1.9 to the YARN cluster works fine; with the same group configuration, submitting Flink 1.11.1 to YARN reports the error below. (The log excerpt is abridged: the long Hadoop/YARN classpath listing and repeated configuration dumps are elided, and the archived message is cut off before the actual error appears.)

2020-07-27 17:08:14,665 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Starting YarnJobClusterEntrypoint (Version: 1.11.1, Scala: 2.11)
2020-07-27 17:08:14,665 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - OS current user: hadoop
2020-07-27 17:08:15,417 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Current Hadoop/Kerberos user: wangty
2020-07-27 17:08:15,418 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.191-b12
2020-07-27 17:08:15,418 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Maximum heap size: 429 MiBytes
2020-07-27 17:08:15,419 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Hadoop version: 2.7.7
2020-07-27 17:08:15,419 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Classpath: lib/flink-csv-1.11.1.jar:lib/flink-json-1.11.1.jar:lib/flink-shaded-zookeeper-3.4.14.jar:lib/flink-table-blink_2.11-1.11.1.jar:lib/flink-table_2.11-1.11.1.jar:... (long Hadoop/HDFS/YARN classpath elided) ...
2020-07-27 17:08:15,427 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.memory.process.size, 1728m
... (further properties loaded: execution.target=yarn-per-job, jobmanager.memory.process.size=1 gb, parallelism.default=3, taskmanager.numberOfTaskSlots=1, pipeline.jars=file:/data/rt/jar_version/sql/test.jar, yarn.application.name=RTC_TEST, yarn.application.queue=root.dp.dp_online, ...)
2020-07-27 17:08:15,455 WARN org.apache.flink.configuration.Configuration [] - Config uses deprecated configuration key 'web.port' instead of proper key 'rest.bind-port'
2020-07-27 17:08:15,471 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Starting YarnJobClusterEntrypoint.
2020-07-27 17:08:16,715 INFO org.apache.flink.runtime.security.modules.HadoopModule [] - Hadoop user set to wangty (auth:SIMPLE)
2020-07-27 17:08:21,243 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint [] - Web frontend listening at http://xx5.60:46723.
2020-07-27 17:08:21,414 INFO org.apache.flink.runtime.jobmaster.JobMaster [] - Initializing job RTC_TEST (9f074e66a0f70274c7a7af42e71525fb).
2020-07-27 17:08:21,542 INFO org.apache.flink.runtime.jobmaster.JobMaster [] - Using application-defined state backend: RocksDBStateBackend{checkpointStreamBackend=File State Backend (checkpoints: 'hdfs://HDFS00000/data/checkpoint-data/wangty/RTC_TEST', ...)}
2020-07-27 17:08:22,135 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Job RTC_TEST (9f074e66a0f70274c7a7af42e71525fb) switched from state CREATED to RUNNING.
2020-07-27 17:08:22,190 INFO org.apache.flink.yarn.YarnResourceManager [] - Requesting new TaskExecutor container with resources WorkerResourceSpec {cpuCores=1.0, taskHeapSize=384.000mb, taskOffHeapSize=0 bytes, networkMemSize=128.000mb, managedMemSize=512.000mb}. Number pending workers of this resource is 1.
... (log truncated in the archived message before the reported error) *From the volunteer-curated Flink mailing-list archive



Reference answer:

It is recommended to confirm that the Yarn configuration "yarn.scheduler.minimum-allocation-mb" is consistent between the Yarn RM and the machine where the Flink JM runs.
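For reference, this is the property to compare in yarn-site.xml on both machines (the 1024 value below is only an example, not a recommendation):

```xml
<!-- yarn-site.xml: the value read by the Flink JM must match
     the value the Yarn RM actually uses. -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
```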

Yarn normalizes container requests. For example, if you request a 1728m container (taskmanager.memory.process.size) and minimum-allocation-mb is 1024m, the container actually allocated must be an integer multiple of minimum-allocation-mb, i.e. 2048m. Flink reads the Yarn configuration, computes how large the allocated container should be after this normalization, and checks the containers it receives against that size. From the JM log here, the allocated containers failed this check, so Flink concluded that the container specification did not match. The most likely cause is that the minimum-allocation-mb value Flink read differs from the one the Yarn RM actually uses. This is a known design flaw of Hadoop 2.x.
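The normalization described above can be sketched as follows (an illustrative sketch, not Flink's actual code; the function name is invented):

```python
def normalize_mb(requested_mb: int, min_allocation_mb: int) -> int:
    """Round a container request up to the next integer multiple of
    yarn.scheduler.minimum-allocation-mb, as the Yarn RM does."""
    return -(-requested_mb // min_allocation_mb) * min_allocation_mb  # ceiling division

# A 1728 MB request (taskmanager.memory.process.size) against a 1024 MB
# minimum allocation yields a 2048 MB container, which is what Flink expects back.
print(normalize_mb(1728, 1024))  # 2048
```

If the Flink JM reads 1024 (and so expects 2048-MB containers) while the RM actually uses, say, 3072 (and so allocates 3072-MB containers), the two sides compute different sizes and the check fails, which matches the symptom in the log above.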

In Hadoop 2.x, container requests carry no unique identifier, and the resources of an allocated container may also differ from those requested; to map allocated containers back to their earlier requests, Flink has to perform this normalization computation itself. Hadoop 3.x improved on this: each container request has an id, so allocations can be matched back to requests directly. However, to stay compatible with Hadoop 2.x, Flink still matches containers by the computed resources.

Flink 1.9 did not hit this problem because, by default, all containers had the same specification, so no matching was needed. The community is working on the ability to request containers of different specifications, which is why the logic that validates container resources was added in 1.11. *From the volunteer-curated Flink mailing list archive



For more answers to this question, click through to:

https://developer.aliyun.com/ask/371491?spm=a2c6h.13066369.question.32.6ad26382IbvmwX
