My environment: flink-sql-connector-mysql-cdc-2.4.1.jar and hudi-flink1.16-bundle-0.13.1.jar. When syncing from MySQL to a MOR table I get the error below; syncing to a COW table does not raise it. What is the problem? Thanks.

org.apache.flink.util.FlinkException: Global failure triggered by OperatorCoordinator for 'stream_write: ods_gj_hc_delta_mor' (operator 79f5c3ad256ef41ba5e107f81e592f50).
    at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder$LazyInitializedCoordinatorContext.failJob(OperatorCoordinatorHolder.java:617)
    at org.apache.hudi.sink.StreamWriteOperatorCoordinator.lambda$start$0(StreamWriteOperatorCoordinator.java:191)
    at org.apache.hudi.sink.utils.NonThrownExecutor.handleException(NonThrownExecutor.java:142)
    at org.apache.hudi.sink.utils.NonThrownExecutor.lambda$wrapAction$0(NonThrownExecutor.java:133)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hudi.exception.HoodieException: Executor executes action [commits the instant 20230728092355063] error
    ... 6 more
Caused by: java.lang.NoSuchMethodError: org.apache.hudi.org.apache.avro.specific.SpecificRecordBuilderBase.<init>(Lorg/apache/hudi/org/apache/avro/Schema;Lorg/apache/hudi/org/apache/avro/specific/SpecificData;)V
    at org.apache.hudi.avro.model.HoodieCompactionOperation$Builder.<init>(HoodieCompactionOperation.java:318)
    at org.apache.hudi.avro.model.HoodieCompactionOperation$Builder.<init>(HoodieCompactionOperation.java:305)
    at org.apache.hudi.avro.model.HoodieCompactionOperation.newBuilder(HoodieCompactionOperation.java:272)
    at org.apache.hudi.common.util.CompactionUtils.buildHoodieCompactionOperation(CompactionUtils.java:106)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
    at org.apache.hudi.table.action.compact.plan.generators.BaseHoodieCompactionPlanGenerator.generateCompactionPlan(BaseHoodieCompactionPlanGenerator.java:126)
    at org.apache.hudi.table.action.compact.ScheduleCompactionActionExecutor.scheduleCompaction(ScheduleCompactionActionExecutor.java:147)
    at org.apache.hudi.table.action.compact.ScheduleCompactionActionExecutor.execute(ScheduleCompactionActionExecutor.java:113)
    at org.apache.hudi.table.HoodieFlinkMergeOnReadTable.scheduleCompaction(HoodieFlinkMergeOnReadTable.java:105)
    at org.apache.hudi.client.BaseHoodieTableServiceClient.scheduleTableServiceInternal(BaseHoodieTableServiceClient.java:421)
    at org.apache.hudi.client.BaseHoodieTableServiceClient.scheduleTableService(BaseHoodieTableServiceClient.java:393)
    at org.apache.hudi.client.BaseHoodieWriteClient.scheduleTableService(BaseHoodieWriteClient.java:1097)
    at org.apache.hudi.client.BaseHoodieWriteClient.scheduleCompactionAtInstant(BaseHoodieWriteClient.java:876)
    at org.apache.hudi.client.BaseHoodieWriteClient.scheduleCompaction(BaseHoodieWriteClient.java:867)
    at org.apache.hudi.util.CompactionUtil.scheduleCompaction(CompactionUtil.java:65)
    at org.apache.hudi.sink.StreamWriteOperatorCoordinator.lambda$notifyCheckpointComplete$2(StreamWriteOperatorCoordinator.java:250)
    at org.apache.hudi.sink.utils.NonThrownExecutor.lambda$wrapAction$0(NonThrownExecutor.java:130)
    ... 3 more
This is a global failure triggered by the OperatorCoordinator; it is only a wrapper, and the job actually failed because of the exception further down the chain. In your trace the root cause is the java.lang.NoSuchMethodError on the shaded Avro class org.apache.hudi.org.apache.avro.specific.SpecificRecordBuilderBase, thrown while Hudi schedules a compaction plan (scheduleCompaction in the stack). Compaction runs only for MOR tables (COW tables rewrite base files on each commit and never schedule compaction), which would explain why the COW sync succeeds while the MOR sync fails. A NoSuchMethodError here usually points to conflicting Avro or Hudi classes on the classpath, for example another jar in the Flink lib directory shading an incompatible Avro version, so checking for duplicate or mismatched bundles is a good first step.
https://help.aliyun.com/document_detail/183439.html?spm=a2c4g.11186623.0.i76
You can troubleshoot the problem with the following steps:
Check the job logs: inspect the Flink job's log output to further pinpoint the cause. Look for the full exception chain, the failing operator, and the task ID; the last "Caused by:" entry is usually the real root cause.
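As a minimal sketch of that log triage (the log path here is hypothetical, and the sample lines are written into the file purely for demonstration; in practice you would point the command at your real JobManager or TaskManager log), the innermost cause can be pulled out of a long exception chain like this:

```shell
# Hypothetical log path; substitute your real Flink log file.
LOG=/tmp/flink-job.log

# Demonstration only: write a sample exception chain into the file.
# With a real job, the log already exists and this step is skipped.
printf '%s\n' \
  'Caused by: org.apache.hudi.exception.HoodieException: Executor executes action error' \
  'Caused by: java.lang.NoSuchMethodError: org.apache.hudi.org.apache.avro.specific.SpecificRecordBuilderBase.<init>' \
  > "$LOG"

# The LAST "Caused by:" entry is usually the real root cause.
grep 'Caused by:' "$LOG" | tail -n 1
```

This prints the NoSuchMethodError line rather than the outer wrapper exceptions, which is the line worth searching on.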
Check the source: verify the connection and configuration for reading from MySQL, including credentials and privileges. You can connect with a MySQL client and run test queries to confirm the database is reachable and behaving correctly.
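For example (a hedged sketch to run in a MySQL client; the table name is a placeholder for your actual source table), the checks below confirm both reachability and that binlog-based CDC capture is possible:

```sql
-- Confirm the server is reachable and the account can read the source table.
SELECT 1;
SELECT COUNT(*) FROM ods_gj_hc_delta;  -- hypothetical source table name

-- The MySQL CDC connector needs row-based binlog;
-- these should report ON and ROW respectively.
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_format';
```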
Check the target: verify the connection and configuration for writing to the target (the MOR table), including credentials and privileges. You can connect with the corresponding client tool and run test queries and writes to confirm the target is available and working.
Check data formats: verify that the column formats and types read from MySQL match what is written to the MOR table. If they do not, use Flink's type-conversion features to convert the data to what the MOR table expects.
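For instance (a sketch with hypothetical table and column names), an explicit CAST in the Flink SQL INSERT keeps the sink's declared types in control instead of relying on implicit conversion:

```sql
-- Hypothetical example: amount arrives as STRING from the source
-- but the MOR sink table declares DECIMAL(10, 2).
INSERT INTO ods_gj_hc_delta_mor
SELECT
  id,
  CAST(amount AS DECIMAL(10, 2)) AS amount,
  CAST(update_time AS TIMESTAMP(3)) AS update_time
FROM mysql_cdc_source;
```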
Check operator logic: if none of the steps above reveals the problem, review the logic of the operators involved in the job. You can use Flink's debugging facilities to inspect and analyze each operator's input and output data and narrow down the cause.
Choose the fix that matches what you actually find, and keep logs and system metrics while debugging so the problem is easier to trace and tune.
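For reference, a minimal Flink SQL pipeline of the kind described in the question might look like the sketch below. All table names, columns, connection details, and paths are placeholders; the options follow the mysql-cdc and hudi connectors named in the question, and since compaction scheduling is where the error surfaced, the compaction option on the sink is worth reviewing in your own job:

```sql
-- Source: MySQL CDC table (placeholder connection details).
CREATE TABLE mysql_cdc_source (
  id BIGINT,
  amount DECIMAL(10, 2),
  update_time TIMESTAMP(3),
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'localhost',
  'port' = '3306',
  'username' = 'user',
  'password' = '***',
  'database-name' = 'ods',
  'table-name' = 'gj_hc_delta'
);

-- Sink: Hudi MOR table (placeholder path).
CREATE TABLE ods_gj_hc_delta_mor (
  id BIGINT,
  amount DECIMAL(10, 2),
  update_time TIMESTAMP(3),
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs:///warehouse/ods_gj_hc_delta_mor',
  'table.type' = 'MERGE_ON_READ',
  -- Compaction runs only for MERGE_ON_READ tables, which is
  -- where the NoSuchMethodError in the stack trace surfaced.
  'compaction.async.enabled' = 'true'
);

INSERT INTO ods_gj_hc_delta_mor SELECT * FROM mysql_cdc_source;
```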