Why does the coordinator have the concept of a member?

開開新新 · Posted: 2019-01-16 · Last updated: 2019-05-02 01:22:42 · 2,037 views

I'm using Kafka as the broker in an ELK stack. For some reason logstash has stopped collecting logs, and I see the messages below in the logstash log. It looks as if the connection on the Kafka side is already taken by something else? I don't understand why the coordinator has the concept of a member. What is the default limit on the number of member connections, and how do I change that setting?

Here is the logstash log:

[2019-01-15T14:16:56,299][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-3, groupId=logstash] Discovered coordinator 192.168.23.170:9092 (id: 2147483647 rack: null)
[2019-01-15T14:16:56,314][ERROR][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-3, groupId=logstash] **Offset commit failed on partition topic-0 at offset 0: The coordinator is not aware of this member.**
[2019-01-15T14:16:56,314][WARN ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-3, groupId=logstash] Asynchronous auto-commit of offsets {topic-0=OffsetAndMetadata{offset=0, metadata=''}} failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
[2019-01-15T14:16:56,314][WARN ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-3, groupId=logstash] Synchronous auto-commit of offsets {topic-0=OffsetAndMetadata{offset=0, metadata=''}} failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
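The warning text itself points at the two knobs to try: lengthen the allowed gap between poll() calls, or shrink the batch each poll() returns. Below is a minimal sketch of how that maps onto a logstash kafka input; the broker address and group id are taken from the log above, while the topic name and the concrete values are assumptions, and the option names assume a logstash-input-kafka plugin version that exposes them:

```
input {
  kafka {
    # Broker and group taken from the log above; the topic name is assumed
    # ("topic-0" in the log is partition 0 of a topic named "topic").
    bootstrap_servers => "192.168.23.170:9092"
    topics            => ["topic"]
    group_id          => "logstash"

    # Option 1: allow more time between poll() calls before the coordinator
    # evicts this member (the Kafka client default is 300000 ms = 5 minutes).
    max_poll_interval_ms => "600000"

    # Option 2: fetch smaller batches so each poll() loop finishes sooner
    # (the Kafka client default is 500 records).
    max_poll_records => "100"
  }
}
```

Either knob alone is usually enough to stop the group from rebalancing the partitions away mid-processing; the trade-off of a larger interval is slower detection of a genuinely stuck consumer. The plugin's `session_timeout_ms` option corresponds to the heartbeat-based "session timeout" the log message mentions.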

Duplicate post, please delete this one.
The other copy of this question is at https://www.orchome.com/1405
