
kafka producer max.request.size configuration

2018-11-22 10:34:54.486 [pool-12-thread-4] ERROR com.alibaba.otter.canal.kafka.CanalKafkaProducer - org.apache.kafka.common.errors.RecordTooLargeException: The message is 1190153 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1190153 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
    at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1150) ~[kafka-clients-1.1.1.jar:na]
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:846) ~[kafka-clients-1.1.1.jar:na]
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:784) ~[kafka-clients-1.1.1.jar:na]
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:671) ~[kafka-clients-1.1.1.jar:na]
    at com.alibaba.otter.canal.kafka.CanalKafkaProducer.send(CanalKafkaProducer.java:129) ~[canal.server-1.1.1.jar:na]
    at com.alibaba.otter.canal.server.CanalMQStarter.worker(CanalMQStarter.java:138) [canal.server-1.1.1.jar:na]
    at com.alibaba.otter.canal.server.CanalMQStarter.access$000(CanalMQStarter.java:21) [canal.server-1.1.1.jar:na]
    at com.alibaba.otter.canal.server.CanalMQStarter$1.run(CanalMQStarter.java:76) [canal.server-1.1.1.jar:na]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_172]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_172]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1190153 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.

Original question from GitHub user woshijay7788

Java Engineer 2023-05-08 17:43:18
2 answers
  • Lower the batch size. (A canal.properties sketch follows after the answers.)

    Original answer from GitHub user agapple

    2023-05-09 18:40:37
  • Sharing freely; friendly exchange and discussion are welcome :)

    This error usually means that the serialized message is larger than the producer's maximum request size. It can be fixed by raising the max.request.size setting in the Kafka producer configuration to the smallest value that still accommodates your messages, for example:

    max.request.size=2000000

    This raises the maximum request size to about 2 MB, enough for the 1190153-byte message in the log. Note that max.request.size is specified in bytes, so convert the size you need accordingly. Also check that the Kafka brokers have enough resources to handle larger requests, and that the broker and consumer limits are kept consistent with the producer's; otherwise sending or receiving messages can still fail. (A producer-side sketch follows after the answers.)

    2023-05-08 17:44:30
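
To make the first answer concrete: in a canal deployment the batch size is controlled through canal.properties rather than code. The key below is quoted from canal 1.1.x as an assumption; verify the exact name and a suitable value against your canal release.

    # canal.properties (key name assumed from canal 1.1.x; verify in your release)
    # Fewer binlog entries are merged into each Kafka message, so every
    # serialized record stays under the producer's max.request.size cap.
    canal.mq.canalBatchSize = 10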
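
To illustrate the second answer outside of canal, here is a minimal Java sketch of raising max.request.size on a plain kafka-clients producer. The broker address, topic name, and payload are placeholders, and in a canal setup the producer is configured through canal's MQ settings rather than application code, so treat this as a reference for the setting itself, not canal's implementation.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class LargeMessageProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Raise the per-request cap above the ~1.19 MB message in the log (default is 1048576 bytes).
            props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 2_000_000);
            // The broker (message.max.bytes, or max.message.bytes per topic) and the consumer
            // (max.partition.fetch.bytes) must also allow records of this size.

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("example-topic", "key", "large payload ..."));
            }
        }
    }

The send above is fire-and-forget; in a real producer you would inspect the returned Future or register a callback, which is where a RecordTooLargeException like the one in the question surfaces.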