Why does a Kafka pseudo-cluster have all 50 __consumer_offsets partitions under every broker?

Gongs   Posted: 2019-07-26   Last updated: 2019-07-26 20:00:21   4,013 views

Hi everyone, I'd like to ask for some advice:

I set up a pseudo-cluster on a test server, with three broker nodes (1, 2, 3) running on a single host, and it has been running fine. I then deployed the same configuration inside Docker and ran into a problem. The symptoms:

The kafka-logs1 and kafka-logs3 directories each contain __consumer_offsets-0 through __consumer_offsets-49, i.e. all 50 partitions.
The kafka-logs2 directory contains no __consumer_offsets partitions at all.

On the bare-metal host, the kafka-logs1, kafka-logs2 and kafka-logs3 directories all contain some __consumer_offsets partitions, and across the three directories there are 50 in total with no duplicates. I don't understand what is causing the difference. Querying ZooKeeper at the moment gives:

WatchedEvent state:SyncConnected type:None path:null
ls /brokers/ids
[3]

In other words only node 3 is alive and the other nodes are down, yet I can't find any error logs.
On the application side, when the service is restarted and reconnects to Kafka it sometimes consumes normally and sometimes does not; when it does not, restarting the service a few more times fixes it.

I would like to understand why the __consumer_offsets partitions under the three nodes ended up in this state, and why the broker nodes went down. I have no idea where to start, so any pointers would be much appreciated.
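For reference, this is roughly how the broker registrations can be inspected with the zookeeper-shell script that ships with Kafka; the ZooKeeper address is taken from the config below, and only live brokers show up under /brokers/ids:

    bin/zookeeper-shell.sh 172.18.2.8:2181
    ls /brokers/ids        # a healthy three-node cluster should print [1, 2, 3]; here it prints [3]
    get /brokers/ids/3     # registration details (host, port, timestamp) of broker 3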

My server.properties is shown below; it is the same on all three nodes:

broker.id=1
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/mydata/kafka/kafka-logs1
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connection.timeout.ms=6000000
delete.topic.enable=true
group.initial.rebalance.delay.ms=0
message.max.bytes=5000000
replica.fetch.max.bytes=5000000
listeners=PLAINTEXT://172.18.2.8:9091
broker.list=172.18.2.8:9091,172.18.2.8:9092,172.18.2.8:9093
zookeeper.connect=172.18.2.8:2181,172.18.2.8:2182,172.18.2.8:2183
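Presumably "the same on all three nodes" excludes the per-broker fields. In a single-host pseudo-cluster each broker normally gets its own broker.id, log.dirs and listener port; a sketch of the assumed layout, using the ports from broker.list and the log directories mentioned above rather than the poster's actual files:

    # broker 1
    broker.id=1
    log.dirs=/mydata/kafka/kafka-logs1
    listeners=PLAINTEXT://172.18.2.8:9091

    # broker 2
    broker.id=2
    log.dirs=/mydata/kafka/kafka-logs2
    listeners=PLAINTEXT://172.18.2.8:9092

    # broker 3
    broker.id=3
    log.dirs=/mydata/kafka/kafka-logs3
    listeners=PLAINTEXT://172.18.2.8:9093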

半兽人

1. kafka-logs2 has no __consumer_offsets at all, which indicates that messages arrived while your broker2 node had not yet been deployed successfully.
2. Go into the Docker container and check the startup logs (a sketch follows below).
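A minimal sketch of what checking the startup logs inside the container might look like; the container name and log path are assumptions rather than details from this thread:

    # open a shell in the broker2 container (container name is hypothetical)
    docker exec -it kafka-broker2 bash

    # inside the container, tail the broker's server log; the path depends on how Kafka was installed
    tail -n 200 /opt/kafka/logs/server.log

    # if Kafka were the container's main process, its output would also be visible via
    docker logs --tail 200 kafka-broker2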

剑枫寒 -> 半兽人 5 years ago

The __consumer_offsets topic is created automatically by Kafka, with 50 partitions by default; is its default replication factor 1?
If the default is 1, should offsets.topic.replication.factor in the config be set to 3 to get high availability?

半兽人 -> 剑枫寒 5 years ago

Once __consumer_offsets has been created it will not change again on its own; you need to expand it with a command:
https://www.orchome.com/454

Also, even though your replication factor is currently 1, broker2 was not assigned a single partition, which means broker2 was not up yet when messages started coming in.
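As a rough illustration of "expand it with a command" (the full procedure is in the link above), raising the replication factor of an existing topic is done with kafka-reassign-partitions.sh and a hand-written reassignment plan. The file name and replica assignments below are assumptions, and all 50 partitions would need an entry:

    # contents of increase-rf.json (only the first two partitions shown)
    {"version":1,
     "partitions":[
       {"topic":"__consumer_offsets","partition":0,"replicas":[1,2,3]},
       {"topic":"__consumer_offsets","partition":1,"replicas":[2,3,1]}
     ]}

    # submit the plan, then check on it later
    bin/kafka-reassign-partitions.sh --zookeeper 172.18.2.8:2181 --reassignment-json-file increase-rf.json --execute
    bin/kafka-reassign-partitions.sh --zookeeper 172.18.2.8:2181 --reassignment-json-file increase-rf.json --verify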

Gongs -> 半兽人 5 years ago

Sorry for the slow reply.
1. So broker2 did not come up, while broker1 and broker3 did and therefore hold the __consumer_offsets topic. But in the normal case, shouldn't the __consumer_offsets partitions under kafka-logs1 and kafka-logs3 add up to 50 in total, rather than each directory holding all 50?
2. Kafka is not started when the Docker container starts; I start it manually after entering the container. Either way, Kafka inside Docker feels different from Kafka outside Docker, and I can't work out where the problem is.

Gongs -> 剑枫寒 5 years ago

Hi,
Yes, the replication factor is the default of 1. What puzzles me is what state these 50 __consumer_offsets partitions should be in: should every kafka-logs directory hold all 50, or should there be 50 in total across them? I can't tell which state is correct. I'll also try changing the replication-factor setting.
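One quick way to settle "all 50 in each directory" versus "50 in total" is to count the partition directories under each log dir: with a replication factor of 1 the three counts should sum to 50, while with a factor of 3 every directory would hold all 50. A sketch using the paths from the config above:

    for d in /mydata/kafka/kafka-logs1 /mydata/kafka/kafka-logs2 /mydata/kafka/kafka-logs3; do
        # count directories named __consumer_offsets-<partition> under each log dir
        echo "$d: $(ls -d $d/__consumer_offsets-* 2>/dev/null | wc -l)"
    done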

半兽人 -> Gongs 5 years ago

1. If broker1 and broker3 each hold all 50, and they are independent copies rather than leader/replica pairs, then something is wrong. Run the describe command and post the output here for me.
2. Docker is just a container, an extra layer of packaging; it does not affect Kafka itself.

Gongs -> 半兽人 5 years ago

Which command?

半兽人 -> Gongs 5 years ago

https://www.orchome.com/454
bin/kafka-topics.sh --describe --zookeeper

Gongs -> 半兽人 5 years ago

Hi, here is the output:

./kafka-topics.sh --describe --zookeeper 172.18.2.8:2181

Topic:__consumer_offsets    PartitionCount:50    ReplicationFactor:1    Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
    Topic: __consumer_offsets    Partition: 0    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 1    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 2    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 3    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 4    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 5    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 6    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 7    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 8    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 9    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 10    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 11    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 12    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 13    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 14    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 15    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 16    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 17    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 18    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 19    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 20    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 21    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 22    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 23    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 24    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 25    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 26    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 27    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 28    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 29    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 30    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 31    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 32    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 33    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 34    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 35    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 36    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 37    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 38    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 39    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 40    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 41    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 42    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 43    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 44    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 45    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 46    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 47    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 48    Leader: 3    Replicas: 3    Isr: 3
    Topic: __consumer_offsets    Partition: 49    Leader: 3    Replicas: 3    Isr: 3
Topic:alarm    PartitionCount:3    ReplicationFactor:1    Configs:
    Topic: alarm    Partition: 0    Leader: 3    Replicas: 3    Isr: 3
    Topic: alarm    Partition: 1    Leader: 3    Replicas: 3    Isr: 3
    Topic: alarm    Partition: 2    Leader: 3    Replicas: 3    Isr: 3
Topic:status    PartitionCount:3    ReplicationFactor:1    Configs:
    Topic: status    Partition: 0    Leader: 3    Replicas: 3    Isr: 3
    Topic: status    Partition: 1    Leader: 3    Replicas: 3    Isr: 3
    Topic: status    Partition: 2    Leader: 3    Replicas: 3    Isr: 3
Topic:order    PartitionCount:1    ReplicationFactor:1    Configs:
    Topic: order    Partition: 0    Leader: 3    Replicas: 3    Isr: 3
Topic:logout    PartitionCount:1    ReplicationFactor:1    Configs:
    Topic: logout    Partition: 0    Leader: 3    Replicas: 3    Isr: 3
半兽人 -> Gongs 5 years ago

All of your topics are on node 3. It looks like only node 3 is healthy.

Gongs -> 半兽人 5 years ago

It is indeed only node 3 that is healthy; checking with ls /brokers/ids also shows only 3.

