Question 1: Flink SQL `IN` fails on a TINYINT column
When I connect to a MySQL table with Flink SQL CDC and the `type` column is of type TINYINT, the predicate `type IN (1,2,3,4,5)` fails with the error below. It only works after I change the column type to INT. Is this expected? When the types don't match, shouldn't Flink SQL convert them automatically?
[ERROR] Could not execute SQL statement. Reason: org.codehaus.commons.compiler.CompileException: Line 6, Column 88: No applicable constructor/method found for actual parameters "int"; candidates are: "org.apache.flink.table.runtime.util.collections.ByteHashSet()" (from the volunteer-compiled Flink mailing-list archive)
Suggested answer:
Judging from your error, `IN` does not support an implicit CAST here. You can either define `type` as INT, or CAST the values in the list to TINYINT.
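A minimal sketch of the second option, assuming a hypothetical CDC table named `orders` whose `type` column is TINYINT (the table and column names are assumptions, not from the original thread):

```sql
-- Cast the literals so both sides of IN have the same type; this avoids
-- the ByteHashSet code-generation error from the original question.
SELECT *
FROM orders
WHERE `type` IN (CAST(1 AS TINYINT), CAST(2 AS TINYINT), CAST(3 AS TINYINT));

-- Alternatively, cast the column instead of the literals:
-- WHERE CAST(`type` AS INT) IN (1, 2, 3, 4, 5)
```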
More answers to this question:
https://developer.aliyun.com/ask/371595?spm=a2c6h.13066369.question.67.6ad26382ML38lF
Question 2: Flink savepoint exception
I hit the following error when triggering a savepoint with Flink 1.10.1. I don't know the cause yet and would appreciate any help.
I have already ruled out backpressure and restarts as causes. The checkpoint timeout is set to ten minutes, and I increased the client-to-master connection timeout in the configuration, but the exception still occurs.
Command:
flink savepoint -yid application_1604456903594_2381 fb8131bcb78cbdf2bb9a705d8a4ceebc hdfs:///hadoopnamenodeHA/flink/flink-savepoints
Exception:
The program finished with the following exception:
org.apache.flink.util.FlinkException: Triggering a savepoint for the job fb8131bcb78cbdf2bb9a705d8a4ceebc failed.
    at org.apache.flink.client.cli.CliFrontend.triggerSavepoint(CliFrontend.java:631)
    at org.apache.flink.client.cli.CliFrontend.lambda$savepoint$9(CliFrontend.java:609)
    at org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:841)
    at org.apache.flink.client.cli.CliFrontend.savepoint(CliFrontend.java:606)
    at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:908)
    at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:966)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
    at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:966)
Caused by: java.util.concurrent.CompletionException: java.util.concurrent.CompletionException: org.apache.flink.runtime.checkpoint.CheckpointException: Failed to trigger savepoint. Failure reason: An Exception occurred while triggering the checkpoint.
    at org.apache.flink.runtime.scheduler.SchedulerBase.lambda$triggerSavepoint$3(SchedulerBase.java:756)
    at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
    at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
    at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:402)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:195)
    at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
    at akka.japi.pf.UnitCaseState
Suggested answer:
The exception message says "Failed to trigger savepoint. Failure reason: An Exception occurred while triggering the checkpoint." You could check the JobManager log to see whether there is anything more detailed there. Best, Congxian
More answers to this question:
https://developer.aliyun.com/ask/371597?spm=a2c6h.13066369.question.70.6ad26382KWj5YN
Question 3: Flink job killed by YARN for exceeding memory limits
I have a Flink job that keeps getting killed by the YARN cluster for using more memory than allocated, and I don't know how to debug it. The Flink version is 1.11.0, and the launch command is:
bin/flink run -m yarn-cluster -yjm 2048m -ytm 8192m -ys 2 xxx.jar. The job uses the RocksDB state backend, with taskmanager.memory.managed.fraction=0.6 and taskmanager.memory.jvm-overhead.fraction=0.2. Below are the TaskManager stats from the Flink UI at one point in time. Where could the memory over-use be coming from? As far as I can tell, the job does not pull in third-party JARs that allocate a lot of native memory, and my own code does not use native memory either.
Free Slots / All Slots:0 / 2
CPU Cores:24
Physical Memory:251 GB
JVM Heap Size:1.82 GB
Flink Managed Memory:4.05 GB
Memory
JVM (Heap/Non-Heap)
Type       Committed   Used      Maximum
Heap       1.81 GB     1.13 GB   1.81 GB
Non-Heap   169 MB      160 MB    1.48 GB
Total      1.98 GB     1.29 GB   3.30 GB
Outside JVM
Type      Count    Used     Capacity
Direct    24,493   718 MB   718 MB
Mapped    0        0 B      0 B
Network (Memory Segments)
Type        Count
Available   21,715
Total       22,118
Garbage Collection
Collector      Count   Time (ms)
PS_Scavenge    199     17,433
PS_MarkSweep   4       4,173
Suggested answer:
You could try setting the option 'state.backend.rocksdb.memory.fixed-per-slot' [1] and see whether it helps. [1] https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/config.html#state-backend-rocksdb-memory-fixed-per-slot
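As a sketch, that option takes a memory size and goes in flink-conf.yaml; the 2gb value below is only an illustrative assumption for a TaskManager with 2 slots and 8 GB of process memory, not a recommendation from the thread:

```yaml
# Cap RocksDB's native memory per slot at a fixed size instead of
# deriving it from the managed-memory fraction (value is an assumption).
state.backend.rocksdb.memory.fixed-per-slot: 2gb
```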
More answers to this question:
https://developer.aliyun.com/ask/371598?spm=a2c6h.13066369.question.69.6ad26382lJDUVH
Question 4: Flink SQL Kafka connector with Avro and the Confluent Schema Registry
Does Flink SQL 1.11.2 support an Avro-format Kafka connector backed by the Confluent Schema Registry?
I couldn't find anything about it on the official website. If it is supported, could you point me to docs or provide an example? Thanks!
Suggested answer:
Yes, it is supported; see the code at https://github.com/apache/flink/pull/12919/commits
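For illustration, a DDL using the avro-confluent format added by that pull request might look like the sketch below. The table name, field names, topic, and addresses are all hypothetical, and the exact option keys depend on the Flink version you end up on:

```sql
-- Hypothetical table reading Avro records whose schema lives in a
-- Confluent Schema Registry (all names and URLs are assumptions).
CREATE TABLE user_events (
  user_id BIGINT,
  event_type STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'user-events',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'avro-confluent',
  'avro-confluent.schema-registry.url' = 'http://localhost:8081'
);
```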
More answers to this question:
https://developer.aliyun.com/ask/371601?spm=a2c6h.13066369.question.70.6ad26382j8IqjW
Question 5: Which metric to alert on when a Flink job dies
I have built Flink monitoring with Grafana + Prometheus and want Grafana to fire an alert when a Flink job dies, but I'm not sure which metric to use. I tried monitoring flink_jobmanager_job_uptime with the rule: alert when max_over_time(flink_jobmanager_job_uptime[1m]) - min_over_time(flink_jobmanager_job_uptime[1m]) is less than or equal to 0. But this produces false positives right after the job starts. Is there a better approach?
Suggested answer:
You can alert on job restarts, since a Flink job will automatically try to restart after it fails. If the number of tasks is fixed, you can also alert on the number of occupied slots.
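A Prometheus alerting-rule sketch of the restart-based approach, plus an absence check that avoids the startup false positive of the uptime-delta rule. The metric names below are assumptions: they depend on the Flink version and on how the Prometheus reporter prefixes metrics in your setup.

```yaml
# Alerting rules sketch; metric names and thresholds are assumptions.
groups:
  - name: flink
    rules:
      - alert: FlinkJobRestarted
        # Fires when the job restarted within the last 5 minutes.
        expr: increase(flink_jobmanager_job_fullRestarts[5m]) > 0
        for: 1m
      - alert: FlinkJobGone
        # Fires when the uptime series disappears entirely
        # (job or JobManager is down), instead of comparing
        # uptime deltas, which misfire right after startup.
        expr: absent(flink_jobmanager_job_uptime)
        for: 2m
```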
More answers to this question:
https://developer.aliyun.com/ask/371603?spm=a2c6h.13066369.question.71.6ad263825NiW1n