
Canal 1.1.1: Kafka delivery not working. Is there a diagnostic tool?

$ cat conf/mq.yml
servers: 127.0.0.1:9092  # for rocketmq: means the nameserver
retries: 0
batchSize: 16384
lingerMs: 1
bufferMemory: 33554432
# Canal batch size, default 50K; because of Kafka's maximum message body limit, do not exceed 1M (keep it below 900K)
canalBatchSize: 50
# timeout for Canal get, in milliseconds; 0 means no timeout
canalGetTimeout: 100
flatMessage: true
canalDestinations:
- canalDestination: example
  topic: example
  partition: 1
  # number of partitions of the corresponding topic
  partitionsNum: 3
  partitionHash:
    # database.table: unique primary key
    mytest.person: id
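The partitionHash rule above ("mytest.person: id") spreads rows of a table across partitions by hashing the primary-key value modulo partitionsNum. A minimal sketch of that idea (this is an illustration only, not Canal's exact hashing code; `partitionFor` is a hypothetical helper):

```java
public class PartitionHashSketch {
    // Map a primary-key value to a partition index in [0, partitionsNum).
    static int partitionFor(String pkValue, int partitionsNum) {
        // Math.abs guards against negative hashCode values (sufficient for a sketch)
        return Math.abs(pkValue.hashCode()) % partitionsNum;
    }

    public static void main(String[] args) {
        int partitionsNum = 3; // matches partitionsNum: 3 in mq.yml
        for (String id : new String[]{"1", "2", "3"}) {
            int p = partitionFor(id, partitionsNum);
            System.out.println("id=" + id + " -> partition " + p);
        }
    }
}
```

Rows with the same primary key always hash to the same partition, which preserves per-key ordering in Kafka.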

$ cat conf/canal.properties
#################################################
#########      common argument      #############
#################################################
canal.id = 1
canal.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
canal.zkServers = 127.0.0.1:2181
# flush data to zk
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, RocketMQ
canal.serverMode = kafka
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
# memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
# memory store RingBuffer used memory unit size, default 1kb
canal.instance.memory.buffer.memunit = 1024
# memory store get mode: MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true
# detecting config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false
# maximum supported transaction size; transactions larger than this are split into multiple deliveries
canal.instance.transaction.size = 1024
# seconds to fall back when reconnecting to a new mysql master
canal.instance.fallbackIntervalInSeconds = 60
# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30
# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false
# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB
# binlog ddl isolation
canal.instance.get.ddl.isolation = false
# parallel parser config
canal.instance.parser.parallel = true
# concurrent thread number, default 60% of available processors; suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
# disruptor ringbuffer size, must be a power of 2
canal.instance.parser.parallelBufferSize = 256
# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hours
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire, default 360 hours (15 days)
canal.instance.tsdb.snapshot.expire = 360
# rds oss binlog account
canal.instance.rds.accesskey =
canal.instance.rds.secretkey =

#################################################
#########      destinations      #############
#################################################
canal.destinations = example
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5

#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

canal.instance.global.mode = spring
canal.instance.global.lazy = false
#canal.instance.global.manager.address = 127.0.0.1:1099
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
canal.instance.global.spring.xml = classpath:spring/file-instance.xml
#canal.instance.global.spring.xml = classpath:spring/default-instance.xml
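Reading the memory-store comments in the properties above, in MEMSIZE mode the ring buffer bounds memory at buffer.size slots times memunit bytes per slot, i.e. 16384 × 1024 bytes with this config. A quick arithmetic check of that budget (assumption: the product of the two settings is the total cap, as the inline comments suggest):

```java
public class MemoryStoreBudget {
    public static void main(String[] args) {
        int bufferSize = 16384; // canal.instance.memory.buffer.size (slots)
        int memUnit = 1024;     // canal.instance.memory.buffer.memunit (bytes per slot)
        long totalBytes = (long) bufferSize * memUnit;
        // 16384 * 1024 = 16777216 bytes = 16 MB
        System.out.println(totalBytes + " bytes = " + (totalBytes / (1024 * 1024)) + " MB");
    }
}
```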

Originally asked by GitHub user xiazemin

Java engineer, 2023-05-08 17:55:27
1 answer
  • canalDestinations:
      - canalDestination: example
        topic: example
        partition: 1

    partition indexes start at 0. Check whether your Kafka topic actually has only one partition; if so, set partition to 0 or leave it unset.
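    The point above is that partition: 1 targets the *second* partition, which does not exist on a single-partition topic. A pre-flight check of the configured index can be sketched like this (`validPartition` is a hypothetical helper, not part of Canal):

```java
public class PartitionCheck {
    // A configured partition index is valid only if 0 <= partition < numPartitions.
    static boolean validPartition(int partition, int numPartitions) {
        return partition >= 0 && partition < numPartitions;
    }

    public static void main(String[] args) {
        // topic "example" with a single partition: only index 0 is valid
        System.out.println(validPartition(1, 1)); // false -> the reported failure
        System.out.println(validPartition(0, 1)); // true
    }
}
```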

    Change the logback level to INFO; if the Kafka messages are sent successfully, you will see logs like the following:

    INFO c.a.otter.canal.server.embedded.CanalServerWithEmbedded - getWithoutAck successfully, clientId:1001 batchSize:50 real size is 3 and result is ......
    INFO c.a.otter.canal.server.embedded.CanalServerWithEmbedded - ack successfully, clientId:1001 batchId:4 position:PositionRange ......

    Alternatively, set a breakpoint and step through the send method of CanalKafkaProducer.

    Originally answered by GitHub user rewerma

    2023-05-09 18:45:26