
Is Flink CDC failing because the log is coming in too fast to catch up?

[] - Source: fnd_concurrent_requests[1] -> ConstraintEnforcer[2] (1/1)#0 (414a2358fd896e15a223d6fe84ad29b3_cbc357ccb763df2852fee8c4fc7d55f2_0_0) switched from RUNNING to FAILED with failure cause:
java.lang.RuntimeException: One or more fetchers have encountered exception
at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:261) ~[flink-connector-files-1.17.2.jar:1.17.2]
at org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:169) ~[flink-connector-files-1.17.2.jar:1.17.2]
at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:131) ~[flink-connector-files-1.17.2.jar:1.17.2]
at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:419) ~[flink-dist-1.17.2.jar:1.17.2]
at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68) ~[flink-dist-1.17.2.jar:1.17.2]
at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65) ~[flink-dist-1.17.2.jar:1.17.2]
at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:550) ~[flink-dist-1.17.2.jar:1.17.2]
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231) ~[flink-dist-1.17.2.jar:1.17.2]
at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:839) ~[flink-dist-1.17.2.jar:1.17.2]
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:788) ~[flink-dist-1.17.2.jar:1.17.2]
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952) ~[flink-dist-1.17.2.jar:1.17.2]
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:931) [flink-dist-1.17.2.jar:1.17.2]
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745) [flink-dist-1.17.2.jar:1.17.2]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562) [flink-dist-1.17.2.jar:1.17.2]
at java.lang.Thread.run(Thread.java:750) [?:1.8.0_333]
Caused by: java.lang.RuntimeException: SplitFetcher thread 51 received unexpected exception while polling the records
at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:165) ~[flink-connector-files-1.17.2.jar:1.17.2]
at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:114) ~[flink-connector-files-1.17.2.jar:1.17.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_333]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_333]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_333]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_333]
... 1 more
Caused by: com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.
at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:50) ~[flink-sql-connector-oracle-cdc-2.4.2.jar:2.4.2]
at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:261) ~[flink-sql-connector-oracle-cdc-2.4.2.jar:2.4.2]
at com.ververica.cdc.connectors.oracle.source.reader.fetch.OracleStreamFetchTask$RedoLogSplitReadTask.execute(OracleStreamFetchTask.java:134) ~[flink-sql-connector-oracle-cdc-2.4.2.jar:2.4.2]
at com.ververica.cdc.connectors.oracle.source.reader.fetch.OracleScanFetchTask.execute(OracleScanFetchTask.java:139) ~[flink-sql-connector-oracle-cdc-2.4.2.jar:2.4.2]
at com.ververica.cdc.connectors.base.source.reader.external.IncrementalSourceScanFetcher.lambda$submitTask$1(IncrementalSourceScanFetcher.java:98) ~[flink-sql-connector-oracle-cdc-2.4.2.jar:2.4.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_333]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_333]
... 1 more
Caused by: com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.errors.DataException: file is not a valid field name
at com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.data.Struct.lookupField(Struct.java:254) ~[flink-sql-connector-oracle-cdc-2.4.2.jar:2.4.2]
at com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.data.Struct.getCheckType(Struct.java:261) ~[flink-sql-connector-oracle-cdc-2.4.2.jar:2.4.2]

真的很搞笑 2023-12-05 20:52:59
2 answers
  • Facing the past, don't be lost; facing the future, don't waver; live in today and simply show your whole self to others.

    If Flink CDC cannot keep up with the log, it may be due to the following causes:

    1. Task parallelism set too high: an overly high parallelism can overload the source so that processing falls behind the rate at which the log is produced, and a backlog builds up. Try lowering the job parallelism to relieve the pressure.

    2. Checkpoint interval too short: checkpointing is Flink's core mechanism for saving state and recovering jobs. Checkpointing too frequently can slow processing and let the log pile up. Try checkpointing less often.

    3. Network latency: high network latency can likewise cause processing to fall behind log generation and create a backlog. Try optimizing the network environment to speed up consumption.
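    The knobs mentioned above can be set directly in a Flink SQL session. A minimal sketch; the option keys are standard Flink configuration, but the values are purely illustrative and need tuning for the actual workload:

    ```sql
    -- Illustrative values only; tune for your own job.
    SET 'parallelism.default' = '2';                   -- lower job parallelism
    SET 'execution.checkpointing.interval' = '5 min';  -- checkpoint less often
    ```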

    2023-12-06 14:27:52
  • Changing the schema, username, and database name to uppercase solved it. This answer is collected from the DingTalk group "Flink CDC 社区".

    2023-12-06 13:32:55
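    As that answer suggests, the `DataException: file is not a valid field name` from the Oracle CDC connector goes away when the identifiers are written in uppercase, which is how Oracle stores unquoted names by default. A sketch of a table definition with the fix applied, assuming the flink-sql-connector-oracle-cdc 2.4.x `oracle-cdc` connector; the hostname, credentials, and table/column names below are placeholders:

    ```sql
    CREATE TABLE fnd_concurrent_requests (
      REQUEST_ID BIGINT,
      PRIMARY KEY (REQUEST_ID) NOT ENFORCED
    ) WITH (
      'connector'     = 'oracle-cdc',
      'hostname'      = 'oracle-host',
      'port'          = '1521',
      'username'      = 'FLINK_USER',               -- uppercase
      'password'      = '...',
      'database-name' = 'ORCL',                     -- uppercase
      'schema-name'   = 'APPS',                     -- uppercase
      'table-name'    = 'FND_CONCURRENT_REQUESTS'   -- uppercase
    );
    ```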

