
No data reaches Kafka, and canal.log reports errors

Environment: canal 1.1.3

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=96m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: UseCMSCompactAtFullCollection is deprecated and will likely be removed in a future release.
2019-01-03 13:52:43.519 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2019-01-03 13:52:43.597 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2019-01-03 13:52:43.623 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## start the canal server.
2019-01-03 13:52:43.997 [main] INFO com.alibaba.otter.canal.deployer.CanalController - ## start the canal server[192.168.122.1:11111]
2019-01-03 13:52:45.489 [main] WARN o.s.beans.GenericTypeAwarePropertyDescriptor - Invalid JavaBean property 'connectionCharset' being accessed! Ambiguous write methods found next to actually used [public void com.alibaba.otter.canal.parse.inbound.mysql.AbstractMysqlEventParser.setConnectionCharset(java.nio.charset.Charset)]: [public void com.alibaba.otter.canal.parse.inbound.mysql.AbstractMysqlEventParser.setConnectionCharset(java.lang.String)]
2019-01-03 13:52:45.961 [main] ERROR com.alibaba.druid.pool.DruidDataSource - testWhileIdle is true, validationQuery not set
2019-01-03 13:52:46.687 [main] WARN c.a.o.canal.parse.inbound.mysql.dbsync.LogEventConvert - --> init table filter : ^...$
2019-01-03 13:52:46.688 [main] WARN c.a.o.canal.parse.inbound.mysql.dbsync.LogEventConvert - --> init table black filter :
2019-01-03 13:52:46.723 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## the canal server is running now ......
2019-01-03 13:52:46.925 [destination = example , address = /10.6.10.221:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - ---> begin to find start position, it will be long time for reset or first position
2019-01-03 13:52:46.926 [destination = example , address = /10.6.10.221:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - prepare to find start position just show master status
2019-01-03 13:52:48.672 [destination = example , address = /10.6.10.221:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - ---> find start position successfully, EntryPosition[included=false,journalName=mysql-bin.000004,position=10626,serverId=123454,gtid=,timestamp=1546411889000] cost : 1726ms , the next step is binlog dump
2019-01-03 13:52:49.077 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Connection to node -1 could not be established. Broker may not be available.
2019-01-03 13:52:49.077 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Connection to node -1 could not be established. Broker may not be available.
2019-01-03 13:52:49.130 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Connection to node -1 could not be established. Broker may not be available.
2019-01-03 13:52:49.130 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Connection to node -1 could not be established. Broker may not be available.

I followed the documentation to configure it and got these errors. What is the problem?
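Editor's note: the repeated "Connection to node -1 could not be established" warnings mean the Kafka producer embedded in the Canal server (visible as the kafka-producer-network-thread in the log) never reached the broker address it was given. Below is a minimal reachability sketch, not part of the original post: it assumes the kafka-clients jar is on the classpath and that 10.6.10.221:6667 is the address copied from canal.mq.servers in the configuration quoted further down.

// Sketch only: kafka-clients on the classpath and 10.6.10.221:6667 taken from
// the poster's canal.mq.servers are assumptions, not confirmed facts.
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class BrokerReachabilityCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.6.10.221:6667");
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");
        try (AdminClient admin = AdminClient.create(props)) {
            // describeCluster() only resolves if at least one broker answers;
            // a timeout here matches the "Broker may not be available" warning.
            System.out.println("reachable brokers: "
                    + admin.describeCluster().nodes().get(10, TimeUnit.SECONDS));
        }
    }
}

If this times out when run on the machine hosting Canal, the problem is the broker address or the network path to it, not the Canal instance configuration.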

The canal client cannot connect to the Kafka broker; check whether the Kafka server started successfully and the Kafka address configured on the client side.

@wingerx Below is the specific configuration. Only these two files need to be changed, right? Where is there any separate client-side Kafka configuration?

canal.properties:

#################################################
2 ######### common argument #############
3 #################################################
4 canal.id = 1
5 canal.ip =10.6.10.221
6 canal.port = 11111
7 canal.metrics.pull.port = 11112
8 canal.zkServers =10.6.10.221:2181
9 # flush data to zk
10 canal.zookeeper.flush.period = 1000
11 canal.withoutNetty = false
12 # tcp, kafka, RocketMQ
13 canal.serverMode = kafka
14 # flush meta cursor/parse position to file
15 canal.file.data.dir = ${canal.conf.dir}
16 canal.file.flush.period = 1000
17 ## memory store RingBuffer size, should be Math.pow(2,n)
18 canal.instance.memory.buffer.size = 16384
19 ## memory store RingBuffer used memory unit size , default 1kb
20 canal.instance.memory.buffer.memunit = 1024
21 ## meory store gets mode used MEMSIZE or ITEMSIZE
22 canal.instance.memory.batch.mode = MEMSIZE
23 canal.instance.memory.rawEntry = true
24
25 ## detecing config
28 canal.instance.detecting.sql = select 1
29 canal.instance.detecting.interval.time = 3
30 canal.instance.detecting.retry.threshold = 3
31 canal.instance.detecting.heartbeatHaEnable = false
32
33 # support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
34 canal.instance.transaction.size = 1024
35 # mysql fallback connected to new master should fallback times
36 canal.instance.fallbackIntervalInSeconds = 60
37
38 # network config
39 canal.instance.network.receiveBufferSize = 16384
40 canal.instance.network.sendBufferSize = 16384
41 canal.instance.network.soTimeout = 30
42
43 # binlog filter config
44 canal.instance.filter.druid.ddl = true
45 canal.instance.filter.query.dcl = false
46 canal.instance.filter.query.dml = false
47 canal.instance.filter.query.ddl = false
48 canal.instance.filter.table.error = false
49 canal.instance.filter.rows = false
50 canal.instance.filter.transaction.entry = false
51
52 # binlog format/image check
53 canal.instance.binlog.format = ROW,STATEMENT,MIXED
54 canal.instance.binlog.image = FULL,MINIMAL,NOBLOB
55
56 # binlog ddl isolation
57 canal.instance.get.ddl.isolation = false
58
59 # parallel parser config
60 canal.instance.parser.parallel = true
62 #canal.instance.parser.parallelThreadSize = 16
63 ## disruptor ringbuffer size, must be power of 2
64 canal.instance.parser.parallelBufferSize = 256
65
66 # table meta tsdb info
67 canal.instance.tsdb.enable = true
68 canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
69 canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
70 canal.instance.tsdb.dbUsername = canal
71 canal.instance.tsdb.dbPassword = canal
72 # dump snapshot interval, default 24 hour
73 canal.instance.tsdb.snapshot.interval = 24
74 # purge snapshot expire , default 360 hour(15 days)
75 canal.instance.tsdb.snapshot.expire = 360
76
77 # aliyun ak/sk , support rds/mq
78 canal.aliyun.accesskey =
79 canal.aliyun.secretkey =
80
81 #################################################
82 ######### destinations #############
83 #################################################
84 canal.destinations = example
85 # conf root dir
86 canal.conf.dir = ../conf
87 # auto scan instance dir add/remove and start/stop instance
88 canal.auto.scan = true
89 canal.auto.scan.interval = 5
90
91 canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
92 #canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml
93
94 canal.instance.global.mode = spring
95 canal.instance.global.lazy = false
96 #canal.instance.global.manager.address = 127.0.0.1:1099
97 #canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
98 canal.instance.global.spring.xml = classpath:spring/file-instance.xml
99 #canal.instance.global.spring.xml = classpath:spring/default-instance.xml
100
101 ##################################################
102 ######### MQ #############
103 ##################################################
104 canal.mq.servers = 10.6.10.221:6667
105 canal.mq.retries = 0
106 canal.mq.batchSize = 16384
107 canal.mq.maxRequestSize = 1048576
108 canal.mq.lingerMs = 1
109 canal.mq.bufferMemory = 33554432
110 canal.mq.canalBatchSize = 50
111 canal.mq.canalGetTimeout = 100
112 canal.mq.flatMessage = true
113 canal.mq.compressionType = none
114 canal.mq.acks = all

Below is the instance.properties configuration:

#################################################
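Editor's note on the MQ block in canal.properties above: with canal.serverMode = kafka, Canal hands the canal.mq.* values to an ordinary Kafka producer (the kafka-producer-network-thread in the log), so canal.mq.servers plays the role of bootstrap.servers. The following is a rough sketch of the equivalent producer settings; the mapping is an approximation and the exact pass-through may differ between Canal versions.

// Rough sketch of how the posted canal.mq.* values would look as standard
// Kafka producer settings; the comments name the canal.mq.* key each comes from.
import java.util.Properties;

public class CanalMqToProducerProps {
    public static void main(String[] args) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "10.6.10.221:6667"); // canal.mq.servers
        p.put("retries", "0");                          // canal.mq.retries
        p.put("batch.size", "16384");                   // canal.mq.batchSize
        p.put("max.request.size", "1048576");           // canal.mq.maxRequestSize
        p.put("linger.ms", "1");                        // canal.mq.lingerMs
        p.put("buffer.memory", "33554432");             // canal.mq.bufferMemory
        p.put("compression.type", "none");              // canal.mq.compressionType
        p.put("acks", "all");                           // canal.mq.acks
        p.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}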

Original question by GitHub user liusir11

古拉古拉 2023-05-08 16:35:52, posted in Beijing
1 answer
  • Line 104: canal.mq.servers = 10.6.10.221:6667. As far as I remember, Kafka's default port is 9092. Could the port be wrong?

    Original answer by GitHub user shubiao-yao

    2023-05-09 18:14:38, posted in Beijing
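Editor's note: if it is unclear whether the broker actually listens on 6667 or on the default 9092, a plain-JDK port probe run from the Canal host can settle it. This is only an illustration: 6667 is the value posted above and 9092 is merely Kafka's usual default, neither is confirmed for this cluster.

// Port probe sketch; both candidate ports come from this thread, not from a
// verified broker configuration.
import java.net.InetSocketAddress;
import java.net.Socket;

public class KafkaPortProbe {
    public static void main(String[] args) {
        for (int port : new int[] {6667, 9092}) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("10.6.10.221", port), 3000);
                System.out.println("port " + port + ": open");
            } catch (Exception e) {
                System.out.println("port " + port + ": not reachable (" + e.getMessage() + ")");
            }
        }
    }
}

Whichever port turns out to be open should also match the listeners setting in the broker's server.properties before it is written into canal.mq.servers.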