The coordinator is not aware of this member.

開開新新  Posted: 2019-01-16   Last updated: 2021-09-24 14:16:17   5,203 views

I'm using Kafka as the middle broker for ELK, and for some reason logstash has stopped collecting logs. I see the log messages below in logstash, and it feels like the connection is already taken on the Kafka side? I don't understand why the coordinator has a concept of a "member". What is the default limit on the number of member connections?

How do I change this configuration?

Below is the logstash log:

[2019-01-15T14:16:56,299][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-3, groupId=logstash] Discovered coordinator 192.168.23.170:9092 (id: 2147483647 rack: null)
[2019-01-15T14:16:56,314][ERROR][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-3, groupId=logstash] Offset commit failed on partition topic-0 at offset 0: The coordinator is not aware of this member.
[2019-01-15T14:16:56,314][WARN ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-3, groupId=logstash] Asynchronous auto-commit of offsets {topic-0=OffsetAndMetadata{offset=0, metadata=''}} failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
[2019-01-15T14:16:56,314][WARN ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-3, groupId=logstash] Synchronous auto-commit of offsets {topic-0=OffsetAndMetadata{offset=0, metadata=''}} failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
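From the warning text, the settings involved seem to be max.poll.interval.ms, max.poll.records, and session.timeout.ms on the consumer side, not a broker-side limit on the number of members. As I understand it, on a plain Java consumer the change would look roughly like the sketch below (the broker address, group id, and topic name are taken from my log above; the actual values are guesses):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SlowConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.23.170:9092"); // broker from the log above
        props.put("group.id", "logstash");                     // groupId from the log above
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // The knobs the warning message mentions (values here are illustrative guesses):
        props.put("max.poll.interval.ms", "600000"); // allow up to 10 min between poll() calls
        props.put("max.poll.records", "100");        // smaller batches so each poll() cycle finishes sooner
        props.put("session.timeout.ms", "30000");    // heartbeat-based liveness timeout

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("topic")); // partition "topic-0" in the log
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                records.forEach(r -> System.out.println(r.value()));
                // If processing a batch takes longer than max.poll.interval.ms, the group
                // coordinator evicts this member and rebalances, which produces exactly
                // the commit failures shown in the log above.
            }
        }
    }
}

If that is right, then in logstash the equivalent would presumably be the max_poll_records and session_timeout_ms options of the kafka input plugin, though I'm not sure which of them my version exposes.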
