2018-11-22 10:34:54.486 [pool-12-thread-4] ERROR com.alibaba.otter.canal.kafka.CanalKafkaProducer - org.apache.kafka.common.errors.RecordTooLargeException: The message is 1190153 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1190153 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
	at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1150) ~[kafka-clients-1.1.1.jar:na]
	at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:846) ~[kafka-clients-1.1.1.jar:na]
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:784) ~[kafka-clients-1.1.1.jar:na]
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:671) ~[kafka-clients-1.1.1.jar:na]
	at com.alibaba.otter.canal.kafka.CanalKafkaProducer.send(CanalKafkaProducer.java:129) ~[canal.server-1.1.1.jar:na]
	at com.alibaba.otter.canal.server.CanalMQStarter.worker(CanalMQStarter.java:138) [canal.server-1.1.1.jar:na]
	at com.alibaba.otter.canal.server.CanalMQStarter.access$000(CanalMQStarter.java:21) [canal.server-1.1.1.jar:na]
	at com.alibaba.otter.canal.server.CanalMQStarter$1.run(CanalMQStarter.java:76) [canal.server-1.1.1.jar:na]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_172]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_172]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1190153 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
Originally asked by GitHub user woshijay7788
This error occurs because the serialized message is larger than the Kafka producer's maximum request size. It can be resolved by increasing the max.request.size parameter in the Kafka producer configuration to a value large enough to cover the largest message, for example:
max.request.size=2000000
This raises the producer's maximum request size to roughly 2 MB, which comfortably covers the 1,190,153-byte message in the log. Note that max.request.size is specified in bytes, so convert the desired size accordingly. Also check that the Kafka brokers have enough resources to handle larger requests, and keep the limits consistent across the pipeline: the broker's message.max.bytes (or the topic-level max.message.bytes) and the consumer's fetch.max.bytes / max.partition.fetch.bytes must also allow the larger messages, otherwise sending or consuming can still fail.
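For reference, below is a minimal sketch of raising the limit on the client side with the standard Kafka Java producer; the broker address is a placeholder. Canal itself normally passes this setting through its own MQ configuration (a property along the lines of canal.mq.maxRequestSize or kafka.max.request.size in canal.properties, depending on the Canal version), so treat that property name as something to verify against your version rather than as given here.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class LargeMessageProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Raise the producer-side limit above the 1,190,153-byte message seen in the log.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 2000000); // unit: bytes

        // The broker and consumers must accept the same size, e.g.
        //   broker:   message.max.bytes=2000000  (or topic-level max.message.bytes)
        //   consumer: fetch.max.bytes / max.partition.fetch.bytes >= 2000000

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send records as usual; payloads up to ~2 MB will now be accepted by the client
        }
    }
}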