The Kafka broker has a configuration parameter, log.segment.bytes, which sets the maximum size of a single log segment file; its minimum allowed value is 14 bytes. For a test I set it to 14. In theory, a message smaller than 14 bytes should still fit in the log file, but even when I produced a much smaller value the broker threw an exception. Why?
Test log:
kafka_2.13-3.4.0$ bin/kafka-console-producer.sh --broker-list ip:port --topic test-segment
1
[2023-03-25 15:11:47,267] ERROR Error when sending message to topic test-segment with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.RecordBatchTooLargeException: The request included message batch larger than the configured segment size on the server.
log.segment.bytes defaults to 1073741824 (1 GiB).
The error means the batch of messages the broker received was larger than the configured log segment size.
batch.size, a producer-side setting, defaults to 16384.
So the constraint is:
log.segment.bytes > batch.size
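As a sketch of where the two settings live (the values shown are the defaults mentioned above; file names assume a stock distribution layout):

```properties
# server.properties (broker side): maximum size of one log segment file
log.segment.bytes=1073741824

# producer.properties (client side): maximum bytes batched per partition before sending
batch.size=16384
```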
Reference: Kafka producer configuration
Following that reasoning, I configured the producer with batch.size=0 to disable batching.
But when producing data quickly the same exception is still thrown. What is the reason?
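For reference, disabling batching in the file passed via --producer.config might look like this (a sketch; the path is an assumption matching the command in the log below):

```properties
# ./config/producer.properties: disable producer-side batching
# (0 means each record is sent without waiting to be batched with others)
batch.size=0
```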
kafka_2.13-3.4.0$ bin/kafka-console-producer.sh --broker-list ip:9092 --topic test-3-3 --producer.config ./config/producer.properties
>1
>1
>1
>1
>1
>1
[2023-03-30 16:31:15,286] ERROR Error when sending message to topic test-3-3 with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.RecordBatchTooLargeException: The request included message batch larger than the configured segment size on the server.
[2023-03-30 16:31:15,291] ERROR Error when sending message to topic test-3-3 with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.RecordBatchTooLargeException: The request included message batch larger than the configured segment size on the server.
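One possible explanation to consider (an assumption based on the v2 record batch format in the Kafka protocol documentation, not something confirmed in this thread): even with batch.size=0, every record is still wrapped in a RecordBatch, and the v2 batch header alone is 61 bytes, so the smallest batch the producer can send already exceeds a 14-byte segment limit. A quick sketch of the header arithmetic:

```python
# Field sizes (in bytes) of the Kafka v2 RecordBatch header, per the
# protocol documentation; every produced batch carries this overhead
# even when it contains only a single record.
RECORD_BATCH_HEADER_FIELDS = {
    "baseOffset": 8,
    "batchLength": 4,
    "partitionLeaderEpoch": 4,
    "magic": 1,
    "crc": 4,
    "attributes": 2,
    "lastOffsetDelta": 4,
    "baseTimestamp": 8,
    "maxTimestamp": 8,
    "producerId": 8,
    "producerEpoch": 2,
    "baseSequence": 4,
    "recordCount": 4,
}

overhead = sum(RECORD_BATCH_HEADER_FIELDS.values())
print(overhead)  # 61

# With log.segment.bytes=14, even a 1-byte record can never fit:
# the smallest possible batch is already larger than the segment limit.
print(overhead > 14)  # True
```

If this is right, the exception is not about how fast data is produced; any single produce request against a 14-byte segment limit would trip the same check.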