
Flink CDC: why isn't the Oracle SCN updated at each checkpoint?

"Flink CDC 为何oracle的scn号在每次checkpoint的时候不更新呢?mysql的cdc, checkpoint难道不更新类似oracle的scn吗? 不更新咋断点续传呢?第一次启动flink job 同步任务的时间, 这个oracle的scn不是时刻更新吗?? 怎么是第一次启动的scn号呢??
SQL> select THREAD#, SEQUENCE#, RESETLOGS_ID, FIRST_CHANGE#, NEXT_CHANGE#, FIRST_TIME, NEXTTIME from sys.v$archived_log where DEST_ID=1 and FIRST_CHANGE# <=23867001247 and NEXT_CHANGE#>=23867001247 oracle cdc 的flink 同步任务,
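For context, a minimal sketch of what such an Oracle CDC source table usually looks like in Flink SQL. The connection details, columns and table names below are placeholders, and the option names should be checked against the flink-connector-oracle-cdc version in use:

-- hypothetical source table; replace connection details and columns with real ones
CREATE TABLE orders_src (
    ORDER_ID BIGINT,
    STATUS   STRING,
    PRIMARY KEY (ORDER_ID) NOT ENFORCED
) WITH (
    'connector'     = 'oracle-cdc',
    'hostname'      = 'oracle-host',
    'port'          = '1521',
    'username'      = 'flink_user',
    'password'      = 'flink_pw',
    'database-name' = 'ORCLCDB',
    'schema-name'   = 'APP',
    'table-name'    = 'ORDERS',
    -- with 'initial' the connector snapshots first, then reads redo/archive logs
    -- starting from the snapshot SCN; after that the current SCN offset lives in
    -- checkpoint state, not in this DDL
    'scan.startup.mode' = 'initial'
);

In normal operation the source writes its current SCN offset into every successful checkpoint, so a resume should not fall back to the first-startup SCN unless the restore point itself dates from that time.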

The Flink checkpoint configuration is as follows:

execution.checkpointing.interval: 5min
execution.checkpointing.externalized-checkpoint-retention: DELETE_ON_CANCELLATION
execution.checkpointing.max-concurrent-checkpoints: 1
execution.checkpointing.min-pause: 500
execution.checkpointing.mode: EXACTLY_ONCE
execution.checkpointing.timeout: 10min
execution.checkpointing.tolerable-failed-checkpoints: 20
state.backend.type: filesystem
state.checkpoints.dir: file:///mnt/nfs/flink/checkpoints
state.savepoints.dir: file:///mnt/nfs/flink/savepoints
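One detail worth noting in this configuration: with externalized-checkpoint-retention set to DELETE_ON_CANCELLATION, the retained checkpoint is deleted when the job is cancelled, so only automatic failure recovery can resume from the stored SCN; a manual cancel-and-resubmit starts over. If resuming after a cancel is needed, the checkpoint has to be retained and passed back explicitly, roughly like this (the checkpoint path below is an example):

execution.checkpointing.externalized-checkpoint-retention: RETAIN_ON_CANCELLATION

# resume from a concrete retained checkpoint (example path)
bin/flink run -s file:///mnt/nfs/flink/checkpoints/<job-id>/chk-42 my-cdc-job.jar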

When the job was interrupted and automatically restarted to resume, it reported the following error:
Caused by: io.debezium.DebeziumException: Online REDO LOG files or archive log files do not contain the offset scn 23867001247. Please perform a new snapshot.

Querying Oracle for when that SCN was generated:

SQL> select THREAD#, SEQUENCE#, RESETLOGS_ID, FIRST_CHANGE#, NEXT_CHANGE#, FIRST_TIME, NEXT_TIME from sys.v$archived_log where DEST_ID=1 and FIRST_CHANGE# <=23867001247 and NEXT_CHANGE#>=23867001247;

THREAD#   SEQUENCE#   RESETLOGS_ID   FIRST_CHANGE#   NEXT_CHANGE#   FIRST_TIME   NEXT_TIME
      2        5756     1074621776      2.3448E+10     2.3479E+10   09-APR-24    10-APR-24
      1        4577     1074621776      2.3465E+10     2.3479E+10   09-APR-24    10-APR-24
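The exception itself means that, by the time the job tried to resume, neither the online redo logs nor the still-available archive logs contained SCN 23867001247, so the Debezium Oracle connector could not mine from the stored offset. A quick way to see the SCN range still covered by non-deleted archive logs on this destination is a query along these lines (illustrative; adjust DEST_ID and filters to your environment):

select min(FIRST_CHANGE#) as oldest_scn, max(NEXT_CHANGE#) as newest_scn
  from sys.v$archived_log
 where DEST_ID = 1
   and DELETED = 'NO';

If oldest_scn is already above the offset SCN stored in the checkpoint, the logs covering it have been purged (for example by RMAN retention), and a new snapshot is the only way forward, which is exactly what the error message asks for.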

真的很搞笑 · 2024-05-14 18:17:19 · 0 answers
