
canal-adapter v1.1.3-alpha-3 cannot sync data to ES

Environment

canal version:
mysql version:

Problem description

The canal-deployer Docker container runs on host 192.168.10.235, and MySQL runs on host 192.168.10.170.

Startup configuration

./run.sh -e canal.instance.master.address=192.168.10.170:3306
-e canal.destinations=test
-e canal.instance.dbUsername=canal
-e canal.instance.dbPassword=canal
-e canal.instance.connectionCharset=UTF-8
-e canal.instance.tsdb.enable=true
-e canal.instance.gtidon=false
-e canal.instance.filter.regex=test
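One thing worth double-checking in this startup command is the table filter. canal matches canal.instance.filter.regex against schema.table names, so a bare value of test may not select any table; the documented examples use patterns like the following (a hypothetical variant of the line above; the exact backslash escaping depends on how run.sh and your shell pass the value through):

  -e canal.instance.filter.regex=test\\..*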

After entering the container, I can see that the test directory exists.

canal-adapter configuration:

server:
  port: 8081
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
    default-property-inclusion: non_null

canal.conf:
  mode: tcp # kafka rocketMQ
  canalServerHost: 192.168.10.235:11111
  flatMessage: true
  batchSize: 500
  syncBatchSize: 1000
  retries: 0
  timeout:
  accessKey:
  secretKey:
  srcDataSources:
    defaultDS:
      url: jdbc:mysql://192.168.10.170:3306/test?useUnicode=true
      username: canal
      password: canal
  canalAdapters:
  - instance: test # canal instance Name or mq topic name
    groups:
    - groupId: g1
      outerAdapters:
      - name: es
        hosts: 192.168.10.229:9300
        properties:
          cluster.name: elasticsearch

dataSourceKey: defaultDS
destination: test
groupId: g1
esMapping:
  _index: mytest
  _type: _doc
  _id: _id
  upsert: true
  pk: id
  sql: "SELECT a.rowid as _id,a.fpusername,a.funitprice,a.faddressdetail FROM t_groupon_childorder a"
  etlCondition: "where a.c_time>='{0}'"
  commitBatch: 3000

After starting the adapter:

2019-03-28 16:54:27.319 [Thread-4] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterWorker - =============> Start to subscribe de==
2019-03-28 16:54:27.374 [Thread-4] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterWorker - =============> Subscribe destination=

Looking in ES, I do not see any index data coming in.
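A quick way to confirm that from the ES side is its HTTP API (note the adapter config above uses the 9300 transport port; the HTTP port is assumed to be the default 9200 here):

  curl 'http://192.168.10.229:9200/_cat/indices?v'          # does a mytest index exist at all?
  curl 'http://192.168.10.229:9200/mytest/_search?size=1'   # if it exists, does it hold any documents?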

I went into the canal-deployer directory and changed the binlog position in meta.dat, but still nothing was synced to the index.
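Independently of the binlog position, the 1.1.x client adapter documents a small REST interface that can trigger a full ETL for one mapping, which helps separate "binlog events not arriving" from "ES writes failing". A sketch, assuming the adapter listens on port 8081 as configured above and the mapping file is named mytest.yml (both assumptions):

  curl -X POST 'http://127.0.0.1:8081/etl/es/mytest.yml'    # full import via the mapping's SQL

If this populates the index, the ES side of the mapping works and the problem is on the deployer/subscription side.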

I then switched to the binary distribution of the adapter and raised the log level to DEBUG, but I still cannot tell why the adapter is not syncing data to ES.

2019-03-29 11:35:03.595 [main] INFO org.elasticsearch.plugins.PluginsService - no modules loaded
2019-03-29 11:35:03.596 [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.index.reindex.ReindexPlugin]
2019-03-29 11:35:03.596 [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.join.ParentJoinPlugin]
2019-03-29 11:35:03.596 [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
2019-03-29 11:35:03.596 [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.script.mustache.MustachePlugin]
2019-03-29 11:35:03.596 [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2019-03-29 11:35:03.636 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [force_merge], size [1], queue size [unbounded]
2019-03-29 11:35:03.637 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [fetch_shard_started], core [1], max [8], keep alive [5m]
2019-03-29 11:35:03.637 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [listener], size [2], queue size [unbounded]
2019-03-29 11:35:03.640 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [index], size [4], queue size [200]
2019-03-29 11:35:03.640 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [refresh], core [1], max [2], keep alive [5m]
2019-03-29 11:35:03.640 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [generic], core [4], max [128], keep alive [30s]
2019-03-29 11:35:03.640 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [warmer], core [1], max [2], keep alive [5m]
2019-03-29 11:35:03.642 [main] DEBUG o.e.c.util.concurrent.QueueResizingEsThreadPoolExecutor - thread pool [client/search] will adjust queue by [50] when determining automatic queue size
2019-03-29 11:35:03.642 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [search], size [7], queue size [1k]
2019-03-29 11:35:03.642 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [flush], core [1], max [2], keep alive [5m]
2019-03-29 11:35:03.642 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [fetch_shard_store], core [1], max [8], keep alive [5m]
2019-03-29 11:35:03.643 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [management], core [1], max [5], keep alive [5m]
2019-03-29 11:35:03.643 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [get], size [4], queue size [1k]
2019-03-29 11:35:03.643 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [bulk], size [4], queue size [200]
2019-03-29 11:35:03.643 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [snapshot], core [1], max [2], keep alive [5m]
2019-03-29 11:35:03.676 [main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: true
2019-03-29 11:35:03.676 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe: unavailable (io.netty.noUnsafe)
2019-03-29 11:35:03.677 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 8
2019-03-29 11:35:03.677 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.(long, int): unavailable
2019-03-29 11:35:03.677 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
2019-03-29 11:35:03.677 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
2019-03-29 11:35:03.678 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: true
2019-03-29 11:35:03.678 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: -1 bytes
2019-03-29 11:35:03.678 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: -1
2019-03-29 11:35:04.392 [main] DEBUG o.e.client.transport.TransportClientNodesService - node_sampler_interval[5s]
2019-03-29 11:35:04.396 [main-SendThread(192.168.10.245:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x1000008e1b00012 after 0ms
2019-03-29 11:35:04.412 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 8
2019-03-29 11:35:04.429 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: true
2019-03-29 11:35:04.430 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
2019-03-29 11:35:04.435 [main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: unavailable
2019-03-29 11:35:04.476 [main] DEBUG o.e.client.transport.TransportClientNodesService - adding address [{#transport#-1}{PhQlyOjiT0i-JfUyQkcjtg}{192.168.10.229}{192.168.10.229:9300}]
2019-03-29 11:35:04.486 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 19267 (auto-detected)
2019-03-29 11:35:04.488 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: true
2019-03-29 11:35:04.488 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
2019-03-29 11:35:04.489 [main] DEBUG io.netty.util.NetUtil - Loopback interface: lo (lo, 127.0.0.1)
2019-03-29 11:35:04.490 [main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 128
2019-03-29 11:35:04.491 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: 00:0c:29:ff:fe:71:9d:8f (auto-detected)
2019-03-29 11:35:04.495 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2019-03-29 11:35:04.495 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2019-03-29 11:35:04.502 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
2019-03-29 11:35:04.502 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.maxRecords: 4
2019-03-29 11:35:04.502 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.maxSampledRecords: 40
2019-03-29 11:35:04.519 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 8
2019-03-29 11:35:04.519 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 8
2019-03-29 11:35:04.519 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
2019-03-29 11:35:04.519 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
2019-03-29 11:35:04.519 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
2019-03-29 11:35:04.519 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.tinyCacheSize: 512
2019-03-29 11:35:04.519 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
2019-03-29 11:35:04.520 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
2019-03-29 11:35:04.520 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2019-03-29 11:35:04.520 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
2019-03-29 11:35:04.520 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: true
2019-03-29 11:35:04.527 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
2019-03-29 11:35:04.527 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 65536
2019-03-29 11:35:04.527 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
2019-03-29 11:35:04.580 [main] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.bytebuf.checkAccessible: true
2019-03-29 11:35:04.581 [main] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@589fb74d
2019-03-29 11:35:04.586 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: disabled
2019-03-29 11:35:04.587 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: disabled
2019-03-29 11:35:04.587 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: disabled
2019-03-29 11:35:04.587 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: disabled
2019-03-29 11:35:04.676 [main] DEBUG org.elasticsearch.transport.netty4.Netty4Transport - connected to node [{Zy2c00L}{Zy2c00LMTZqZSUcp8QBPJw}{MIK33XfqT5S8_KkOS-YUWg}{192.168.10.229}{192.168.10.229:9300}{ml.machine_memory=6088138752, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}]
2019-03-29 11:35:04.686 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - Load canal adapter: es succeed
2019-03-29 11:35:04.691 [main] DEBUG o.s.beans.factory.support.DefaultListableBeanFactory - Returning cached instance of singleton bean 'syncSwitch'
2019-03-29 11:35:04.711 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - Start adapter for canal-client mq topic: wenzhou-g1 succeed
2019-03-29 11:35:04.711 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## the canal client adapters are running now ......
2019-03-29 11:35:04.712 [main] DEBUG o.s.beans.factory.support.DefaultListableBeanFactory - Finished creating instance of bean 'scopedTarget.canalAdapterService'
2019-03-29 11:35:04.726 [Thread-4] INFO c.a.o.c.adapter.launcher.loader.CanalAdapterKafkaWorker - =============> Start to connect topic: wenzhou <=============
2019-03-29 11:35:04.733 [main] DEBUG o.s.b.a.logging.ConditionEvaluationReportLoggingListener -

Original question from GitHub user wajika

古拉古拉 2023-05-08 13:38:41
1 answer
  • First test with the adapter's logger mode to check whether the binlog is actually being produced and consumed (see the sketch below).

    Original answer by GitHub user agapple

    2023-05-09 17:42:45
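
    A minimal sketch of that logger test in application.yml: swap the es outer adapter for the built-in logger adapter (instance and group kept from the question's config), so every batch received from the deployer is simply printed to the adapter log. If nothing is printed when rows change in the test database, the problem is on the deployer/subscription side rather than in the ES mapping.

    canalAdapters:
    - instance: test
      groups:
      - groupId: g1
        outerAdapters:
        - name: logger   # built-in adapter that only logs received messages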