Kafka producer error: RecordTooLargeException

Posted by 一如乞人不需要形象 on 2021-09-14 · last updated 2021-10-28 10:48:47 · 2,944 views

A Kafka Connect connector fails when producing messages, with the error below. Kafka version: 2.5.

In server.properties:

replica.fetch.max.bytes=104857600
message.max.bytes=104857600

In connect-distributed.properties:

producer.max.request.size=104857600

The error stack trace:

java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The message is 3175237 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.
    at org.apache.kafka.connect.storage.KafkaOffsetBackingStore$SetCallbackFuture.get(KafkaOffsetBackingStore.java:228)
    at org.apache.kafka.connect.storage.KafkaOffsetBackingStore$SetCallbackFuture.get(KafkaOffsetBackingStore.java:161)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:498)
    at org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter.commit(SourceTaskOffsetCommitter.java:113)
    at org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter.access$000(SourceTaskOffsetCommitter.java:47)
    at org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter$1.run(SourceTaskOffsetCommitter.java:86)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Answered 2021-09-14:

The error message tells you exactly what is wrong:

The message is 3175237 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.

What you need to change is max.request.size:

The maximum size of a request, in bytes. This setting limits the number of record batches the producer will send in a single request, to avoid sending overly large requests. It is also effectively a cap on the maximum record batch size. Note that the server has its own cap on batch size, which may differ from this.
Default: 1048576

Reference: Kafka Producer configuration
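
For orientation, a minimal sketch of where this key goes, reusing the 104857600 value from this thread (the # comments are annotations, not part of any original config):

# plain Kafka producer (producer.properties / client config), not Connect:
max.request.size=104857600

# Kafka Connect worker: the same key needs the producer. prefix to reach
# the producers the worker creates (connect-distributed.properties):
producer.max.request.size=104857600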

I already set that, and connector.log shows it was picked up:

ProducerConfig values:
    acks = -1
    batch.size = 16384
    max.block.ms = 9223372036854775807
    max.in.flight.requests.per.connection = 1
    max.request.size = 104857600

Change it on the Kafka broker nodes as well; that is the only case left.

Note that the server has its own cap on batch size, which may differ from this.

message.max.bytes
Reference: Kafka Broker configuration
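
For reference, a sketch of the broker-side pair already shown in the question (the comments are mine):

# server.properties, on every broker
message.max.bytes=104857600        # largest record batch the broker will accept
replica.fetch.max.bytes=104857600  # keep this >= message.max.bytes so follower
                                   # replicas can still fetch oversized batches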

I changed that too; all three values are set to 104857600.

Those are the only two places it can be. Do you see any other errors?

No. The defaults have clearly been changed, and the log shows the producer picked up the new value, but it just does not take effect. I have no idea why.

It finally worked once I added the following to the REST API request that creates the connector:

"producer.override.max.request.size":"104857600"

Only then did it work.

What connect-distributed.properties has is:

producer.max.request.size=104857600

that is, without the override prefix. (A full example of the working request is sketched below.)
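
For completeness, a hedged sketch of such a creation request. The connector name, connector.class, and the default REST port 8083 are placeholders/assumptions; only the producer.override.max.request.size key comes from this thread:

# hypothetical connector creation request; adjust name/class/port to your setup
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-source-connector",
    "config": {
      "connector.class": "org.example.MySourceConnector",
      "producer.override.max.request.size": "104857600"
    }
  }'

One caveat: per-connector producer.override.* keys are only honored when the worker's connector.client.config.override.policy permits them (e.g. All, per KIP-458); if the policy forbids overrides, the creation request is rejected.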
