Our Kafka consumer client throws the following error when consuming from another team's Kafka cluster:
Exception in thread "Thread-36" org.apache.kafka.common.errors.RecordTooLargeException: There are some messages at [Partition=Offset]: {xx-10=5210840} whose size is larger than the fetch size 2097152000 and hence cannot be ever returned. Increase the fetch size, or decrease the maximum message size the broker will allow.
The error does not occur right away; it appears after roughly 1.5 million messages have been consumed. Each message has 100-plus fields.
Consumer configuration in code:
props.put("sasl.mechanism", "PLAIN");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "5000");
props.put("session.timeout.ms", "30000");
props.put("auto.offset.reset", "earliest");
props.put("max.partition.fetch.bytes", "2097152000");
props.put("max.poll.records", "100");
// props.put("fetch.max.bytes", "2097152000");
// props.put("fetch.message.max.bytes", "2097152000");
I also tried the two commented-out parameters above, but they had no effect and were reported as unknown configs:
WARN 10-08 16:29:19,186 - The configuration fetch.message.max.bytes = 2097152000 was supplied but isn't a known config.
WARN 10-08 16:29:19,187 - The configuration fetch.max.bytes = 2097152000 was supplied but isn't a known config.
INFO 10-08 16:29:19,189 - Kafka version : 0.10.0.0
Using the command-line consumer against the exact partition and offset from the error (--partition 10 --offset 5210840), the message can be consumed just fine.
The broker's message.max.bytes is 20000000.
Can anyone point out what is causing this?
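For reference, the consumer's per-partition fetch limit must be at least as large as the broker's maximum message size, otherwise an oversized message can never be returned and RecordTooLargeException is thrown. A quick sanity check on the two values from this post (a sketch; the variable names are mine, not from the original code):

```java
public class SizeSanityCheck {
    public static void main(String[] args) {
        int brokerMessageMaxBytes = 20_000_000;     // broker message.max.bytes from the post
        int maxPartitionFetchBytes = 2_097_152_000; // consumer max.partition.fetch.bytes from the post
        // The per-partition fetch size must be >= the broker's max message
        // size; here it is ~100x larger, so the sizing itself looks fine.
        System.out.println(maxPartitionFetchBytes >= brokerMessageMaxBytes); // prints "true"
    }
}
```

So on paper the consumer should already be able to fetch any message the broker accepts, which makes the exception after 1.5 million messages surprising.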
In the Kafka 0.10.0.0 Java consumer, fetch.max.bytes and fetch.message.max.bytes are simply not recognized configs: fetch.message.max.bytes belongs to the old Scala consumer, and fetch.max.bytes was only added in 0.10.1.0. The setting that applies here is max.partition.fetch.bytes, so try increasing max.partition.fetch.bytes further.
I have now increased max.partition.fetch.bytes tenfold and lowered max.poll.records to 10, and I'm testing it; it runs, but it is painfully slow.
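One caveat with a tenfold increase (not raised in the thread, but worth checking): the Kafka client parses max.partition.fetch.bytes as a 32-bit int, and 2097152000 x 10 exceeds Integer.MAX_VALUE, so that value would not be accepted as-is. A quick check:

```java
public class FetchSizeOverflowCheck {
    public static void main(String[] args) {
        long current = 2_097_152_000L; // value from the question, just under Integer.MAX_VALUE
        long tenfold = current * 10;   // 20_971_520_000
        // max.partition.fetch.bytes is an int config, so anything above
        // Integer.MAX_VALUE (2_147_483_647) cannot be used.
        System.out.println(tenfold > Integer.MAX_VALUE); // prints "true"
    }
}
```

The original 2097152000 is already within about 2% of the int ceiling, so there is essentially no headroom left to raise this setting.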
Your individual messages are too large.
(Beginner answer, possibly wrong; happy to discuss.)
Have you tried the fetch.max.bytes parameter?
Tried it already; that parameter isn't supported.