Following that advice, I configured the producer with batch.size=0 to disable batching,
but the exception is still thrown when producing data quickly. What is the cause?
kafka_2.13-3.4.0$ bin/kafka-console-producer.sh --broker-list ip:9092 --topic test-3-3 --producer.config ./config/producer.properties
>1
>1
>1
>1
>1
>1[2023-03-30 16:31:15,286] ERROR Error when sending message to topic test-3-3 with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.RecordBatchTooLargeException: The request included message batch larger than the configured segment size on the server.
[2023-03-30 16:31:15,291] ERROR Error when sending message to topic test-3-3 with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.RecordBatchTooLargeException: The request included message batch larger than the configured segment size on the server.
log.segment.bytes defaults to 1073741824 (1 GiB).

RecordBatchTooLargeException: The request included message batch larger than the configured segment size on the server.

This means the record batch the broker received is larger than the configured log segment size. The producer's batch.size defaults to 16384, so the requirement is:

log.segment.bytes > batch.size

Note that setting batch.size=0 does not shrink what goes over the wire: with the v2 message format the producer still wraps every record in a record batch, which carries a fixed header of roughly 60 bytes on top of the record itself. So if the topic's segment.bytes was overridden to a very small value for testing, even a 1-byte message produces a batch that exceeds it, which is exactly what the log above shows.
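One way to confirm and fix this is to check whether the topic carries a per-topic segment.bytes override and raise it back above the batch size. This is a sketch using kafka-configs.sh from the same 3.4.0 distribution; ip:9092 is the placeholder broker address from the session above, and 1073741824 (the broker default) is just one reasonable value.

```shell
# Show any per-topic overrides on test-3-3; a tiny segment.bytes
# here overrides the broker-wide log.segment.bytes for this topic only
bin/kafka-configs.sh --bootstrap-server ip:9092 \
  --entity-type topics --entity-name test-3-3 --describe

# Raise the topic's segment size (here: back to the 1 GiB broker default)
bin/kafka-configs.sh --bootstrap-server ip:9092 \
  --entity-type topics --entity-name test-3-3 \
  --alter --add-config segment.bytes=1073741824
```

Alternatively, `--alter --delete-config segment.bytes` removes the override entirely so the broker default applies again.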