Flink Error Troubleshooting: How to Resolve YARN Errors

Summary: Apache Flink is an open-source stream processing framework developed by the Apache Software Foundation; its core is a distributed streaming dataflow engine written in Java and Scala. This collection provides resources on Apache Flink technologies, usage tips, and best practices.

Question 1: How to read complex JSON with Flink SQL

Hello everyone: I'm reading data from Kafka with Flink SQL, but the Kafka messages are rather complex. The value of the JSON data attribute is an array, and the array's elements are dynamic, with no uniform structure. How should the schema be defined in the CREATE TABLE statement? I declared the column as an array and intended to handle it in my own UDF, but asText on the JsonNode cannot retrieve the data. Does anyone have a good approach? Thanks.

Sample Kafka message (the value of data is dynamic): {"source":"transaction_2020202020200","data":[{"name":"d1111"},{"age":18}]} My schema definition: create table kafka_message( source string, data array )with... *Compiled from the volunteer-maintained Flink mailing list archive



Reference answer:

The thread at http://apache-flink.147419.n8.nabble.com/FlinkSQL-JsonObject-td9166.html#a9259 on the mailing list discusses a similar problem; see whether it helps. PS: 1.12 is about to be released and also supports the RAW type [1]; you can use that type and then post-process in your own UDF. An extra benefit of the RAW type is that the source will not become the job's bottleneck because slow format parsing throttles data consumption, since source parallelism is usually capped at the partition count of the middleware, e.g., Kafka. [1] https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/formats/raw.html *Compiled from the volunteer-maintained Flink mailing list archive
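A minimal sketch of that approach, assuming the 1.12 raw format and the Jackson library (the class, table, and field names here are hypothetical, not from the original thread): the whole message is ingested as a single STRING column, e.g. create table kafka_message(raw_message string) with ('connector' = 'kafka', ..., 'format' = 'raw'), and unpacked in a scalar UDF:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.table.functions.ScalarFunction;

// Hypothetical UDF: scans the dynamic "data" array of the raw JSON
// message and returns the first occurrence of the requested key.
public class ExtractFromData extends ScalarFunction {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    public String eval(String rawMessage, String key) {
        try {
            JsonNode data = MAPPER.readTree(rawMessage).get("data");
            if (data != null && data.isArray()) {
                for (JsonNode element : data) {
                    JsonNode value = element.get(key);
                    if (value != null) {
                        return value.asText();
                    }
                }
            }
        } catch (Exception e) {
            // Malformed message: return null instead of failing the job.
        }
        return null;
    }
}
```

Registered via tEnv.createTemporarySystemFunction("extract_from_data", ExtractFromData.class), the dynamic array could then be queried as SELECT extract_from_data(raw_message, 'name') FROM kafka_message.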


For more answers to this question, click the link below:

https://developer.aliyun.com/ask/370062?spm=a2c6h.13066369.question.26.33bf585fG1Su25



Question 2: How to handle a Flink 1.11.2 on YARN error?

[Environment] Flink version: 1.11.2; Hadoop version: 2.6.0-cdh5.8.3; Java version: 1.8.0_144

[Command]
[jacob@localhost flink-1.11.2]$ ./bin/yarn-session.sh -jm 1024m -tm 2048m

[Symptom]
....
2020-12-08 18:06:00,134 ERROR org.apache.flink.yarn.cli.FlinkYarnSessionCli [] - Error while running the Flink session.
org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn session cluster
at org.apache.flink.yarn.YarnClusterDescriptor.deploySessionCluster(YarnClusterDescriptor.java:382) ~[flink-dist_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:514) ~[flink-dist_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$4(FlinkYarnSessionCli.java:751) ~[flink-dist_2.11-1.11.2.jar:1.11.2]
at java.security.AccessController.doPrivileged(Native Method) ~[?:?]
at javax.security.auth.Subject.doAs(Subject.java:423) ~[?:?]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709) ~[hadoop-common-2.6.0-cdh5.8.3.jar:?]
at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) ~[flink-dist_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:751) [flink-dist_2.11-1.11.2.jar:1.11.2]
Caused by: org.apache.flink.yarn.YarnClusterDescriptor$YarnDeploymentException: The YARN application unexpectedly switched to state FAILED during deployment.
Diagnostics from YARN: Application application_1603495749855_54023 failed 1 times due to AM Container for appattempt_1603495749855_54023_000001 exited with exitCode: 1
For more detailed output, check application tracking page: http://*******:8088/proxy/application_1603495749855_54023/ Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1603495749855_54023_01_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:601)
at org.apache.hadoop.util.Shell.run(Shell.java:504)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:786)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Container exited with a non-zero exit code 1. Failing this attempt. Failing the application.
If log aggregation is enabled on your cluster, use this command to further investigate the issue:
yarn logs -applicationId application_1603495749855_54023
at org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:1021) ~[flink-dist_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:524) ~[flink-dist_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.yarn.YarnClusterDescriptor.deploySessionCluster(YarnClusterDescriptor.java:375) ~[flink-dist_2.11-1.11.2.jar:1.11.2]
... 7 more


The program finished with the following exception:

org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn session cluster
at org.apache.flink.yarn.YarnClusterDescriptor.deploySessionCluster(YarnClusterDescriptor.java:382)
at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:514)
at org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$4(FlinkYarnSessionCli.java:751)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:751)
Caused by: org.apache.flink.yarn.YarnClusterDescriptor$YarnDeploymentException: The YARN application unexpectedly switched to state FAILED during deployment.
Diagnostics from YARN: Application application_1603495749855_54023 failed 1 times due to AM Container for appattempt_1603495749855_54023_000001 exited with exitCode: 1
For more detailed output, check application tracking page: http://*******:8088/proxy/application_1603495749855_54023/ Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1603495749855_54023_01_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:601)
at org.apache.hadoop.util.Shell.run(Shell.java:504)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:786)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Container exited with a non-zero exit code 1. Failing this attempt. Failing the application.
If log aggregation is enabled on your cluster, use this command to further investigate the issue:
yarn logs -applicationId application_1603495749855_54023
at org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:1021)
at org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:524)
at org.apache.flink.yarn.YarnClusterDescriptor.deploySessionCluster(YarnClusterDescriptor.java:375)
... 7 more
2020-12-08 18:06:00,171 INFO org.apache.flink.yarn.YarnClusterDescriptor [] - Cancelling deployment from Deployment Failure Hook

........................

[Detailed log] Querying the logs with yarn logs -applicationId application_1603495749855_54023 returns the following:

Container: container_1603495749855_54018_01_000001 on ******.mercury.corp_8041

LogType:jobmanager.err
Log Upload Time:Tue Dec 08 17:49:33 -0800 2020
LogLength:160
Log Contents:
Unrecognized VM option 'MaxMetaspaceSize=268435456'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

LogType:jobmanager.out
Log Upload Time:Tue Dec 08 17:49:33 -0800 2020
LogLength:0
Log Contents:


[Confusion] The log seems to say the Java version is wrong: Unrecognized VM option 'MaxMetaspaceSize=268435456'. That option only exists on Java 8 and above, but my Java is 1.8+, so I don't understand why it fails to start. The same command starts successfully with the Flink 1.7.2 client. [Note] Flink 1.7.2 is in use at the same time and is connected to Hadoop, running Flink jobs; I don't know whether that is related.

Thanks!

*Compiled from the volunteer-maintained Flink mailing list archive



Reference answer:

The issue has been fixed; it was indeed a Java version problem! (-XX:MaxMetaspaceSize was only introduced in JDK 8, so the "Unrecognized VM option" error means the JVM that YARN used to launch the container was older than Java 8; making sure the containers resolve to a Java 8+ installation, for example via env.java.home, removes the error.) *Compiled from the volunteer-maintained Flink mailing list archive



For more answers to this question, click the link below:

https://developer.aliyun.com/ask/370063?spm=a2c6h.13066369.question.27.33bf585fSNTKzv



Question 3: How to resolve a JobManager startup failure caused by incomplete data under the Flink HA directory?

From the logs, the JobManager recovers jobs after starting up, and then the process fails. Logs as follows: 14:55:55.304 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint -

14:55:55.305 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Preconfiguration: 14:55:55.305 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint -

JM_RESOURCE_PARAMS extraction logs: jvm_params: -Xmx9126805504 -Xms9126805504 -XX:MaxMetaspaceSize=536870912 logs: INFO [] - Loading configuration property: taskmanager.numberOfTaskSlots, 30 INFO [] - Loading configuration property: cluster.evenly-spread-out-slots, true INFO [] - Loading configuration property: parallelism.default, 1 INFO [] - Loading configuration property: jobmanager.memory.process.size, 10gb INFO [] - Loading configuration property: jobmanager.memory.jvm-metaspace.size, 512mb INFO [] - Loading configuration property: jobmanager.memory.jvm-overhead.fraction, 0.1 INFO [] - Loading configuration property: jobmanager.memory.jvm-overhead.min, 192mb INFO [] - Loading configuration property: jobmanager.memory.jvm-overhead.max, 512mb INFO [] - Loading configuration property: jobmanager.memory.off-heap.size, 512mb INFO [] - Loading configuration property: taskmanager.memory.process.size, 80gb INFO [] - Loading configuration property: taskmanager.memory.jvm-metaspace.size, 1gb INFO [] - Loading configuration property: taskmanager.memory.jvm-overhead.fraction, 0.1 INFO [] - Loading configuration property: taskmanager.memory.jvm-overhead.min, 192mb INFO [] - Loading configuration property: taskmanager.memory.jvm-overhead.max, 1gb INFO [] - Loading configuration property: taskmanager.memory.segment-size, 128kb INFO [] - Loading configuration property: taskmanager.memory.managed.fraction, 0.4 INFO [] - Loading configuration property: taskmanager.memory.managed.size, 1gb INFO [] - Loading configuration property: taskmanager.memory.network.fraction, 0.1 INFO [] - Loading configuration property: taskmanager.memory.network.min, 1gb INFO [] - Loading configuration property: taskmanager.memory.network.max, 8gb INFO [] - Loading configuration property: taskmanager.memory.framework.off-heap.size, 1gb INFO [] - Loading configuration property: taskmanager.memory.task.off-heap.size, 8gb INFO [] - Loading configuration property: taskmanager.memory.framework.heap.size, 1gb INFO [] - Loading configuration property: high-availability, zookeeper INFO [] - Loading configuration property: high-availability.storageDir, bos://flink-bucket/flink/ha INFO [] - Loading configuration property: high-availability.zookeeper.quorum, bjhw-aisecurity-cassandra01.bjhw:9681,bjhw-aisecurity-cassandra02.bjhw:9681,bjhw-aisecurity-cassandra03.bjhw:9681,bjhw-aisecurity-cassandra04.bjhw:9681,bjhw-aisecurity-cassandra05.bjhw:9681 INFO [] - Loading configuration property: high-availability.zookeeper.path.root, /flink INFO [] - Loading configuration property: high-availability.cluster-id, opera_upd_FlinkxxxLogJob1 INFO [] - Loading configuration property: web.checkpoints.history, 100 INFO [] - Loading configuration property: state.checkpoints.num-retained, 100 INFO [] - Loading configuration property: state.checkpoints.dir, bos://flink-bucket/flink/default-checkpoints INFO [] - Loading configuration property: state.savepoints.dir, bos://flink-bucket/flink/default-savepoints INFO [] - Loading configuration property: jobmanager.execution.failover-strategy, region INFO [] - Loading configuration property: web.submit.enable, false INFO [] - Loading configuration property: jobmanager.archive.fs.dir, bos://flink-bucket/flink/completed-jobs/opera_upd_FlinkxxxLogJob1 INFO [] - Loading configuration property: historyserver.archive.fs.dir, bos://flink-bucket/flink/completed-jobs/opera_upd_FlinkxxxLogJob1 INFO [] - Loading configuration property: historyserver.archive.fs.refresh-interval, 10000 INFO [] - Loading configuration property: 
rest.port, 8600 INFO [] - Loading configuration property: historyserver.web.port, 8700 INFO [] - Loading configuration property: high-availability.jobmanager.port, 2000 INFO [] - Loading configuration property: blob.server.port, 2002 INFO [] - Loading configuration property: taskmanager.rpc.port, 2001 INFO [] - Loading configuration property: taskmanager.data.port, 2007 INFO [] - Loading configuration property: metrics.internal.query-service.port, 2003,2004 INFO [] - Loading configuration property: akka.ask.timeout, 60s INFO [] - Loading configuration property: taskmanager.network.request-backoff.max, 60000 INFO [] - Loading configuration property: env.java.home, /home/work/antibotFlink/java8 INFO [] - Loading configuration property: env.pid.dir, /home/work/antibotFlink/flink-1.11.2 INFO [] - Loading configuration property: io.tmp.dirs, /home/work/antibotFlink/flink-1.11.2/tmp INFO [] - Loading configuration property: web.tmpdir, /home/work/antibotFlink/flink-1.11.2/tmp INFO [] - The derived from fraction jvm overhead memory (1.000gb (1073741840 bytes)) is greater than its max value 512.000mb (536870912 bytes), max value will be used instead INFO [] - Final Master Memory configuration: INFO [] - Total Process Memory: 10.000gb (10737418240 bytes) INFO [] - Total Flink Memory: 9.000gb (9663676416 bytes) INFO [] - JVM Heap: 8.500gb (9126805504 bytes) INFO [] - Off-heap: 512.000mb (536870912 bytes) INFO [] - JVM Metaspace: 512.000mb (536870912 bytes) INFO [] - JVM Overhead: 512.000mb (536870912 bytes)

14:55:55.305 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint -

14:55:55.305 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting StandaloneSessionClusterEntrypoint (Version: 1.11.2, Scala: 2.11, Rev:fe36135, Date:2020-09-09T16:19:03+02:00) 14:55:55.305 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - OS current user: work 14:55:55.528 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Current Hadoop/Kerberos user: work 14:55:55.529 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.251-b08 14:55:55.529 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Maximum heap size: 8413 MiBytes 14:55:55.529 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JAVA_HOME: /home/work/antibotFlink/java8 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Hadoop version: 2.7.5 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JVM Options: 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Xmx9126805504 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Xms9126805504 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -XX:MaxMetaspaceSize=536870912 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dlog.file=/home/work/antibotFlink/flink-1.11.2/log/flink-work-standalonesession-0-m1-sys-rpm064-8af7a.m1.xxx.com.log 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dlog4j.configuration=file:/home/work/antibotFlink/flink-1.11.2/conf/log4j.properties 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dlog4j.configurationFile=file:/home/work/antibotFlink/flink-1.11.2/conf/log4j.properties 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dlogback.configurationFile=file:/home/work/antibotFlink/flink-1.11.2/conf/logback.xml 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Program Arguments: 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --configDir 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - /home/work/antibotFlink/flink-1.11.2/conf 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --executionMode 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - cluster 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --host 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - m1-sys-rpm064-8af7a.m1.xxx.com 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Classpath: 
/home/work/antibotFlink/flink-1.11.2/lib/flink-csv-1.11.2.jar:/home/work/antibotFlink/flink-1.11.2/lib/flink-json-1.11.2.jar:/home/work/antibotFlink/flink-1.11.2/lib/flink-shaded-zookeeper-3.4.14.jar:/home/work/antibotFlink/flink-1.11.2/lib/flink-table_2.11-1.11.2.jar:/home/work/antibotFlink/flink-1.11.2/lib/flink-table-blink_2.11-1.11.2.jar:/home/work/antibotFlink/flink-1.11.2/lib/logback-classic-1.1.11.jar:/home/work/antibotFlink/flink-1.11.2/lib/logback-core-1.1.11.jar:/home/work/antibotFlink/flink-1.11.2/lib/flink-dist_2.11-1.11.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/etc/hadoop:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/commons-lang-2.6.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/paranamer-2.3.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jackson-annotations-2.10.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/commons-cli-1.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/xz-1.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/hadoop-auth-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/httpcore-4.4.10.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/curator-framework-2.7.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jackson-core-2.10.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/gson-2.2.4.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/xmlenc-0.52.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/hadoop-annotations-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/commons-codec-1.4.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/joda-time-2.10.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jersey-core-1.9.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jersey-server-1.9.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jsch-0.1.54.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/servlet-api-2.5.jar:/home/work/antibotFlink/hadoop-client-
2.7.5/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jsr305-3.0.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/asm-3.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/bce-java-sdk-0.10.82.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jackson-databind-2.10.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/junit-4.11.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jersey-json-1.9.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/avro-1.7.4.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/commons-io-2.4.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/commons-digester-1.8.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jsp-api-2.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/curator-client-2.7.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/log4j-1.2.17.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/guava-11.0.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jettison-1.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/httpclient-4.5.6.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jetty-6.1.26.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/activation-1.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/commons-net-3.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/hadoop-nfs-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/hadoop-common-2.7.5-tests.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/common/hadoop-common-2.7.5
.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/asm-3.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/bos-hdfs-sdk-1.0.1-SNAPSHOT-0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/libdfs-java-2.0.5-support-community.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/hadoop-hdfs-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/hdfs/hadoop-hdfs-2.7.5-tests.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/xz-1.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/guice-3.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yar
n/lib/protobuf-java-2.5.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/asm-3.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/javax.inject-1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jettison-1.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/activation-1.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/hadoop-yarn-client-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/hadoop-yarn-api-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/hadoop-yarn-registry-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/hadoop-yarn-common-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/hadoop-yar
n-server-applicationhistoryservice-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/yarn/hadoop-yarn-server-common-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.5-tests.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.5.jar:/home/work/antibotFlink/hadoop-client-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.5.jar:/contrib/capacity-scheduler/*.jar:: 14:55:55.531 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint -

14:55:55.532 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Registered UNIX signal handlers for [TERM, HUP, INT] 14:55:55.539 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 30 14:55:55.539 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: cluster.evenly-spread-out-slots, true 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 1 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.memory.process.size, 10gb 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.memory.jvm-metaspace.size, 512mb 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.memory.jvm-overhead.fraction, 0.1 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.memory.jvm-overhead.min, 192mb 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.memory.jvm-overhead.max, 512mb 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.memory.off-heap.size, 512mb 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.process.size, 80gb 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.jvm-metaspace.size, 1gb 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.jvm-overhead.fraction, 0.1 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.jvm-overhead.min, 192mb 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.jvm-overhead.max, 1gb 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.segment-size, 128kb 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.managed.fraction, 0.4 14:55:55.540 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.managed.size, 1gb 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.network.fraction, 0.1 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.network.min, 1gb 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.network.max, 8gb 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.framework.off-heap.size, 1gb 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.task.off-heap.size, 8gb 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: 
taskmanager.memory.framework.heap.size, 1gb 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability, zookeeper 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.storageDir, bos://flink-bucket/flink/ha 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.zookeeper.quorum, bjhw-aisecurity-cassandra01.bjhw:9681,bjhw-aisecurity-cassandra02.bjhw:9681,bjhw-aisecurity-cassandra03.bjhw:9681,bjhw-aisecurity-cassandra04.bjhw:9681,bjhw-aisecurity-cassandra05.bjhw:9681 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.zookeeper.path.root, /flink 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.cluster-id, opera_upd_FlinkxxxLogJob1 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: web.checkpoints.history, 100 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: state.checkpoints.num-retained, 100 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: state.checkpoints.dir, bos://flink-bucket/flink/default-checkpoints 14:55:55.541 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: state.savepoints.dir, bos://flink-bucket/flink/default-savepoints 14:55:55.542 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.execution.failover-strategy, region 14:55:55.542 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: web.submit.enable, false 14:55:55.542 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.archive.fs.dir, bos://flink-bucket/flink/completed-jobs/opera_upd_FlinkxxxLogJob1 14:55:55.542 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: historyserver.archive.fs.dir, bos://flink-bucket/flink/completed-jobs/opera_upd_FlinkxxxLogJob1 14:55:55.542 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: historyserver.archive.fs.refresh-interval, 10000 14:55:55.542 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.port, 8600 14:55:55.542 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: historyserver.web.port, 8700 14:55:55.542 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.jobmanager.port, 2000 14:55:55.542 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: blob.server.port, 2


Reference answer:

We deploy on our company's in-house PaaS platform with 3 machines; the PaaS comes with its own recovery. While everything was running normally, the PaaS containers were restarted directly, which caused the jobs to fail; after the containers came back up, all 3 machines ended up in a similar infinite loop. Our preliminary analysis is that the JobManager fails to start, the PaaS platform then automatically restarts the container, and this repeats indefinitely.

Two questions here: (1) Why does a failed job recovery bring down the JobManager process? (2) From the logs, recovery fails because some files are missing under Flink's HA directory; what could cause this? A file-system issue cannot be ruled out: we use bos://, Baidu's object storage service. If a write to it had not actually succeeded, would the checkpoint still be reported as successful? At least the job's checkpoints were reported successful before I triggered the restart; admittedly I never checked earlier whether anything had been written to that directory. *Compiled from the volunteer-maintained Flink mailing list archive



For more answers to this question, click the link below:

https://developer.aliyun.com/ask/370064?spm=a2c6h.13066369.question.28.33bf585fmEG5LV



Question 4: How to compute a percentage in real time with Flink SQL

Flink 1.10.1: how should error_1006_cnt_permillage be computed?

The SQL is as follows:

SELECT
  DATE_FORMAT(TIMESTAMPADD(HOUR, 8, TUMBLE_START(proctime, INTERVAL '10' SECOND)), 'yyyy-MM-dd') day,
  UNIX_TIMESTAMP(DATE_FORMAT(TIMESTAMPADD(HOUR, 8, TUMBLE_START(proctime, INTERVAL '10' SECOND)), 'yyyy-MM-dd HH:mm:ss')) * 1000 AS time,
  CAST(COUNT(res_code) AS INT) AS request_cnt,
  CAST(COUNT(res_code) FILTER(WHERE res_code = '500') AS INT) AS error_500_cnt,
  CAST(COUNT(res_code) FILTER(WHERE res_code = '1002') AS INT) AS error_1002_cnt,
  CAST(COUNT(res_code) FILTER(WHERE res_code = '1003') AS INT) AS error_1003_cnt,
  CAST(COUNT(res_code) FILTER(WHERE res_code = '1004') AS INT) AS error_1004_cnt,
  CAST(COUNT(res_code) FILTER(WHERE res_code = '1005') AS INT) AS error_1005_cnt,
  CAST(COUNT(res_code) FILTER(WHERE res_code = '1006') AS INT) AS error_1006_cnt,
  CAST(COUNT(res_code) FILTER(WHERE res_code = '1006')*1.0/COUNT(res_code)*100 AS numeric(10,1)) error_1006_cnt_permillage
FROM
  ${databaseName}.metric_stream
WHERE
  metric = 'http_common_request'
GROUP BY
  TUMBLE(proctime, INTERVAL '10' SECOND)

Running it throws:

Exception in thread "main" org.apache.flink.table.planner.codegen.CodeGenException: Unsupported cast from 'ROW' to 'ROW'.
at org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens$.generateCast(ScalarOperatorGens.scala:1284)
at org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:690)
at org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:485)
at org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:51)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:191)
at org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:131)
at org.apache.flink.table.planner.codegen.CalcCodeGenerator$$anonfun$5.apply(CalcCodeGenerator.scala:152)
at org.apache.flink.table.planner.codegen.CalcCodeGenerator$$anonfun$5.apply(CalcCodeGenerator.scala:152)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.flink.table.planner.codegen.CalcCodeGenerator$.produceProjectionCode$1(CalcCodeGenerator.scala:152)
at org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateProcessCode(CalcCodeGenerator.scala:179)
at org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateCalcOperator(CalcCodeGenerator.scala:49)
at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:77)
at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:39)
at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalcBase.translateToPlan(StreamExecCalcBase.scala:38)
at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToTransformation(StreamExecSink.scala:184)
at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:118)
at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:48)
at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlan(StreamExecSink.scala:48)
at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:60)
at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:59)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:59)
at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:153)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:685)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:495)

*Compiled from the volunteer-maintained Flink mailing list archive



Reference answer:

As I understand it, there are two issues here. 1. Row-to-Row conversion is supported in 1.12: https://issues.apache.org/jira/browse/FLINK-17049 2. This SELECT statement alone doesn't seem to produce this error; could you share the complete program? *Compiled from the volunteer-maintained Flink mailing list archive
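As for the ratio computation itself, here is a minimal sketch (assuming the Blink planner of the 1.10/1.11 era and an already-registered metric_stream table; the NULLIF guard and the DECIMAL cast are my assumptions, not from the original thread):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class ErrorRatioSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

        // Assumes metric_stream(metric STRING, res_code STRING, proctime AS PROCTIME())
        // has already been registered with tEnv.
        Table ratio = tEnv.sqlQuery(
            "SELECT " +
            "  CAST(COUNT(res_code) AS INT) AS request_cnt, " +
            "  CAST(COUNT(res_code) FILTER (WHERE res_code = '1006') AS INT) AS error_1006_cnt, " +
            // Factor 100 as in the original query; use 1000.0 for a true per-mille.
            "  CAST(100.0 * COUNT(res_code) FILTER (WHERE res_code = '1006') " +
            "       / NULLIF(COUNT(res_code), 0) AS DECIMAL(10, 1)) AS error_1006_cnt_permillage " +
            "FROM metric_stream " +
            "WHERE metric = 'http_common_request' " +
            "GROUP BY TUMBLE(proctime, INTERVAL '10' SECOND)");

        // Write `ratio` to a sink as usual; the ROW-to-ROW cast error in the thread
        // typically comes from a type mismatch between the query and the sink schema,
        // not from the ratio expression itself.
    }
}
```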



For more answers to this question, click the link below:

https://developer.aliyun.com/ask/370065?spm=a2c6h.13066369.question.31.33bf585fxIQKgj



Question 5: Writing to Redis with SADD inside a map operator, then reading the set back

There are many examples online of using Redis as a source or sink, but how can I, inside a single map operator, first write into a Redis set and then read back that set's size? The business requires it; I'm stuck and would appreciate any pointers! *Compiled from the volunteer-maintained Flink mailing list archive



Reference answer:

I don't quite see where you're stuck: is it that you don't know how to use the Redis SDK, or something else? The problem description is too brief.

It seems you can simply operate Redis through a Redis client inside the map function. *Compiled from the volunteer-maintained Flink mailing list archive
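A minimal sketch of that pattern, assuming the Jedis client (host, port, and key names are placeholders): the connection is opened once per parallel subtask in open(), and each map() call issues the SADD followed by an SCARD to read back the set's cardinality, i.e. the "length of the set" the question asks for:

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import redis.clients.jedis.Jedis;

// Hypothetical element type: any record carrying an id works the same way.
public class SaddThenCountMap extends RichMapFunction<String, Long> {

    private transient Jedis jedis;

    @Override
    public void open(Configuration parameters) throws Exception {
        // One connection per parallel subtask; host/port are placeholders.
        jedis = new Jedis("redis-host", 6379);
    }

    @Override
    public Long map(String userId) {
        // Write first, then read back: SADD followed by SCARD on the same key.
        jedis.sadd("seen_users", userId);
        return jedis.scard("seen_users");
    }

    @Override
    public void close() {
        if (jedis != null) {
            jedis.close();
        }
    }
}
```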



For more answers to this question, click the link below:

https://developer.aliyun.com/ask/370066?spm=a2c6h.13066369.question.30.33bf585fRDKyI6
