Question 1: Realtime Compute for Apache Flink reports the error org.codehaus.janino.CompilerFactory cannot be cast to org.codehaus.commons.compiler.ICompilerFactory
Reference answer:
For more answers to this question, click here:
https://developer.aliyun.com/ask/475322?spm=a2c6h.13066369.question.28.6f064d5cIWWXoh
Question 2: Flink reports the error doesn't support consuming update and delete changes which is produced by node TableSourceScan
Reference answer:
This is a syntax-validation error: an append-only sink cannot accept update records from upstream.
Use a sink that supports writing update records, such as upsert-kafka.
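As a sketch (the table name, topic, and broker address below are placeholders, not taken from the original question), an upsert-kafka result table that can accept update records might be declared like this:

```sql
-- Hypothetical example: an upsert-kafka sink that accepts upstream
-- update/delete (retract) records. Table name, topic, and broker
-- address are placeholders.
CREATE TABLE result_sink (
  user_id BIGINT,
  cnt BIGINT,
  PRIMARY KEY (user_id) NOT ENFORCED  -- upsert-kafka requires a primary key
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'result_topic',
  'properties.bootstrap.servers' = 'localhost:9092',
  'key.format' = 'json',
  'value.format' = 'json'
);
```

The primary key tells Flink which records to upsert or delete by key, which is what makes this sink able to consume an updating (retract) stream.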
For more answers to this question, click here:
https://developer.aliyun.com/ask/476340?spm=a2c6h.13066369.question.29.6f064d5cyfSb2X
Question 3: Flink error: org.apache.flink.table.api.ValidationException: SQL validation failed. Unable to create a sink for writing table 'xxx'. The cause is following: Unsupported options found for 'sls'.
Reference answer:
[Error details]
Caused by: org.apache.flink.table.api.ValidationException: Unsupported options found for 'sls'.
Unsupported options: xxx Supported options: accessid accesskey baseretrybackofftimems batchgetsize connector consumergroup directmode disabledirectmode endpoint endtime exitafterfinish failonerror fallback_to_old flushintervalms iothreadnum logstore maxblocktimems maxretries
maxretrybackofftimems maxretrytimes nullreplacestr partitionfield project property-version sourcefield starttime starttimems stoptime timefield timezone topicfield
[Cause]
An unsupported option xxx appears in the WITH clause of the SLS table, or an option name was misspelled, triggering this exception.
[Solution]
Check against the DDL data-definition statements documentation on the official site to confirm whether the option in question is supported and whether its name is spelled correctly.
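For illustration (the table name and all placeholder values are hypothetical), a DDL whose WITH clause uses only option names from the supported list in the error message above would look like:

```sql
-- Hypothetical example: an SLS table whose WITH clause uses only
-- option names from the supported list in the error message.
-- Project, logstore, endpoint, and credentials are placeholders.
CREATE TABLE sls_sink (
  log_time VARCHAR,
  content VARCHAR
) WITH (
  'connector' = 'sls',
  'endpoint' = '<your-endpoint>',
  'accessid' = '<your-access-key-id>',
  'accesskey' = '<your-access-key-secret>',
  'project' = '<your-project>',
  'logstore' = '<your-logstore>'
);
```

Any key in the WITH clause that is not in the connector's supported list (or is misspelled, e.g. `logStore` instead of `logstore`) will reproduce the ValidationException.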
For more answers to this question, click here:
https://developer.aliyun.com/ask/476359?spm=a2c6h.13066369.question.28.6f064d5cl84DEt
Question 4: Flink reports the error java.lang.ClassCastException: org.codehaus.janino.CompilerFactory cannot be cast to org.codehaus.commons.compiler.ICompilerFactory
Reference answer:
The root cause is that the user's JAR also bundles a janino dependency that conflicts with the one we ship. Check whether the user's JAR contains org.codehaus.janino.CompilerFactory. Because class loading order differs from machine to machine, the job sometimes runs fine and sometimes hits this class conflict.
[Solution]
On the target job's details page, click Edit in the upper-right corner, then in the Advanced Configuration panel on the right side of the page, add the following under More Flink Configuration: classloader.parent-first-patterns.additional: org.codehaus.janino
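Shown as a configuration fragment, the entry to add under More Flink Configuration is:

```yaml
# Load janino classes from the framework's (parent) classloader first,
# so the janino bundled in the user JAR no longer conflicts.
classloader.parent-first-patterns.additional: org.codehaus.janino
```

This forces janino classes to resolve through the parent classloader instead of the user-code classloader, making the load order deterministic.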
For more answers to this question, click here:
https://developer.aliyun.com/ask/476351?spm=a2c6h.13066369.question.31.6f064d5c1wYwEp
Question 5: Flink error: Table sink 'a' doesn't support consuming update and delete changes which is produced by node
Reference answer:
Details of the Flink syntax-validation error: org.apache.flink.table.api.TableException: Table sink 'xxx' doesn't support consuming update and delete changes which is produced by node xxx(xxx) at
org.apache.flink.table.planner.plan.optimize.traitinference.SatisfyModifyKindSetTraitVisitor.applyTraitToWrapper(SatisfyModifyKindSetTraitVisitor.java:493) at
org.apache.flink.table.planner.plan.optimize.traitinference.SatisfyModifyKindSetTraitVisitor.visit(SatisfyModifyKindSetTraitVisitor.java:345)。
Cause: append-only result-table storage (Kafka, SLS, DataHub, etc.) cannot accept upstream update (retract) records.
Solution: when the upstream query produces retract records, e.g. a dual-stream LEFT JOIN, LAST_VALUE, last row, or two-level GROUP BY aggregation, use a sink that supports writing update records, i.e. storage with primary-key updates such as upsert-kafka, RDS, Hologres, or HBase, as the result table.
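As a sketch (all table and column names below are hypothetical), a dual-stream LEFT JOIN produces retract records, so its result must be written into a primary-key sink, here shown with upsert-kafka:

```sql
-- Hypothetical example: a dual-stream LEFT JOIN emits retract records
-- (a join result may later be updated when the right side arrives),
-- so the sink must support primary-key updates. All names are placeholders.
CREATE TABLE joined_sink (
  order_id BIGINT,
  user_name STRING,
  PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'joined_orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'key.format' = 'json',
  'value.format' = 'json'
);

INSERT INTO joined_sink
SELECT o.order_id, u.user_name
FROM orders o
LEFT JOIN users u ON o.user_id = u.user_id;
```

Pointing the same INSERT at an append-only sink (e.g. a plain kafka connector table) reproduces exactly the "doesn't support consuming update and delete changes" error.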
For more answers to this question, click here:
https://developer.aliyun.com/ask/476358?spm=a2c6h.13066369.question.30.6f064d5cjrFQFQ