Kafka brokers trigger a controller change almost every Saturday afternoon

Posted: 2022-02-07   Last updated: 2022-02-07 16:59:17   964 views

1. Kafka is a 5-node cluster, version 2.12 (presumably the Scala build version); ZooKeeper is a 3-node cluster, version 3.4.16.
2. The Kafka cluster wobbles every Saturday afternoon, leaving some topics unusable.
3. The Kafka broker IDs are 1-5; the problems occur on brokers 4 and 5.

# controller logs
apple@appledeMacBook-Pro kafka-2 % grep 'Newly added brokers' controller.log* 
controller.log.2022-01-08-13:2022-01-08 13:41:08.228|INFO|[Controller id=2] Newly added brokers: , deleted brokers: 5, bounced brokers: , all live brokers: 1,2,3,4|(kafka.controller.KafkaController)
controller.log.2022-01-08-13:2022-01-08 13:41:09.064|INFO|[Controller id=2] Newly added brokers: 5, deleted brokers: , bounced brokers: , all live brokers: 1,2,3,4,5|(kafka.controller.KafkaController)
controller.log.2022-01-15-13:2022-01-15 13:49:17.824|INFO|[Controller id=2] Newly added brokers: , deleted brokers: 4, bounced brokers: , all live brokers: 1,2,3,5|(kafka.controller.KafkaController)
controller.log.2022-01-15-13:2022-01-15 13:49:25.255|INFO|[Controller id=2] Newly added brokers: 4, deleted brokers: , bounced brokers: , all live brokers: 1,2,3,4,5|(kafka.controller.KafkaController)
controller.log.2022-01-22-13:2022-01-22 13:42:11.390|INFO|[Controller id=2] Newly added brokers: 4, deleted brokers: , bounced brokers: , all live brokers: 1,2,3,4,5|(kafka.controller.KafkaController)
controller.log.2022-01-29-11:2022-01-29 11:45:23.744|INFO|[Controller id=2] Newly added brokers: , deleted brokers: 5, bounced brokers: , all live brokers: 1,2,3,4|(kafka.controller.KafkaController)
controller.log.2022-01-29-11:2022-01-29 11:45:24.751|INFO|[Controller id=2] Newly added brokers: 5, deleted brokers: , bounced brokers: , all live brokers: 1,2,3,4,5|(kafka.controller.KafkaController)
controller.log.2022-01-29-12:2022-01-29 12:09:23.720|INFO|[Controller id=2] Newly added brokers: , deleted brokers: 4, bounced brokers: , all live brokers: 1,2,3,5|(kafka.controller.KafkaController)
controller.log.2022-01-29-12:2022-01-29 12:09:24.248|INFO|[Controller id=2] Newly added brokers: 4, deleted brokers: , bounced brokers: , all live brokers: 1,2,3,4,5|(kafka.controller.KafkaController)
controller.log.2022-01-29-12:2022-01-29 12:44:31.720|INFO|[Controller id=2] Newly added brokers: , deleted brokers: 4, bounced brokers: , all live brokers: 1,2,3,5|(kafka.controller.KafkaController)
controller.log.2022-01-29-12:2022-01-29 12:45:15.981|INFO|[Controller id=2] Newly added brokers: 4, deleted brokers: , bounced brokers: , all live brokers: 1,2,3,4,5|(kafka.controller.KafkaController)
controller.log.2022-01-29-13:2022-01-29 13:21:55.719|INFO|[Controller id=2] Newly added brokers: , deleted brokers: 5, bounced brokers: , all live brokers: 1,2,3,4|(kafka.controller.KafkaController)
controller.log.2022-01-29-13:2022-01-29 13:21:57.086|INFO|[Controller id=2] Newly added brokers: 5, deleted brokers: , bounced brokers: , all live brokers: 1,2,3,4,5|(kafka.controller.KafkaController)
controller.log.2022-01-29-16:2022-01-29 16:07:51.720|INFO|[Controller id=2] Newly added brokers: , deleted brokers: 4, bounced brokers: , all live brokers: 1,2,3,5|(kafka.controller.KafkaController)
controller.log.2022-01-29-16:2022-01-29 16:07:53.227|INFO|[Controller id=2] Newly added brokers: 4, deleted brokers: , bounced brokers: , all live brokers: 1,2,3,4,5|(kafka.controller.KafkaController)
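Taken together, the entries above show a broker dropping out of the live set and re-registering within one to eight seconds — the classic signature of a ZooKeeper session expiring and the broker immediately recreating its ephemeral znode. A minimal sketch for checking this on brokers 4 and 5 and on the ZK servers (the exact log message wording differs between versions, so the patterns below are assumptions to adjust):

# On the affected brokers, around an incident window
grep -iE 'session expired|zookeeper state changed' server.log* | grep '2022-01-29'
# On each ZooKeeper server; ZK logs the server-side expiry with the configured timeout
grep -i 'expiring session' zookeeper.out | grep '2022-01-29'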
# Logs from controller.log
2022-01-29 12:09:24.175|DEBUG|[Controller id=4] Resigning|(kafka.controller.KafkaController)
2022-01-29 12:09:24.176|DEBUG|[Controller id=4] Unregister BrokerModifications handler for Set()|(kafka.controller.KafkaController)
2022-01-29 12:09:24.185|INFO|[PartitionStateMachine controllerId=4] Stopped partition state machine|(kafka.controller.PartitionStateMachine)
2022-01-29 12:09:24.185|INFO|[ReplicaStateMachine controllerId=4] Stopped replica state machine|(kafka.controller.ReplicaStateMachine)
2022-01-29 12:09:24.185|INFO|[Controller id=4] Resigned|(kafka.controller.KafkaController)
2022-01-29 12:09:24.262|DEBUG|[Controller id=4] Broker 2 has been elected as the controller, so stopping the election process.|(kafka.controller.KafkaController)
2022-01-29 12:45:15.953|DEBUG|[Controller id=4] Resigning|(kafka.controller.KafkaController)
2022-01-29 12:45:15.953|DEBUG|[Controller id=4] Unregister BrokerModifications handler for Set()|(kafka.controller.KafkaController)
2022-01-29 12:45:15.953|INFO|[PartitionStateMachine controllerId=4] Stopped partition state machine|(kafka.controller.PartitionStateMachine)
2022-01-29 12:45:15.953|INFO|[ReplicaStateMachine controllerId=4] Stopped replica state machine|(kafka.controller.ReplicaStateMachine)
2022-01-29 12:45:15.953|INFO|[Controller id=4] Resigned|(kafka.controller.KafkaController)
2022-01-29 12:45:15.980|DEBUG|[Controller id=4] Broker 2 has been elected as the controller, so stopping the election process.|(kafka.controller.KafkaController)
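The Resigning/Resigned sequence above shows broker 4 briefly entering the election race after the old controller's znode vanished, then conceding to broker 2. The active controller and its epoch are plain znodes in ZooKeeper and can be inspected directly; a sketch (the zkCli.sh path and ZK address are assumptions):

# The payload's brokerid field identifies the current controller
bin/zkCli.sh -server localhost:2181 get /controller
# Should match the 'epoch 8' seen in the state-change entries below
bin/zkCli.sh -server localhost:2181 get /controller_epoch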
# Logs extracted from kafkaServer.out
2022-01-15 13:50:22.871|WARN|[LeaderEpochCache PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7] New epoch entry EpochEntry(epoch=4, startOffset=0) caused truncation of conflicting entries ListBuffer(EpochEntry(epoch=2, startOffset=0)). Cache now contains 1 entries.|(kafka.server.epoch.LeaderEpochFileCache)
2022-01-22 13:42:12.795|WARN|[LeaderEpochCache PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7] New epoch entry EpochEntry(epoch=6, startOffset=0) caused truncation of conflicting entries ListBuffer(EpochEntry(epoch=4, startOffset=0)). Cache now contains 1 entries.|(kafka.server.epoch.LeaderEpochFileCache)
2022-01-29 12:12:26.212|WARN|[LeaderEpochCache PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7] New epoch entry EpochEntry(epoch=8, startOffset=0) caused truncation of conflicting entries ListBuffer(EpochEntry(epoch=6, startOffset=0)). Cache now contains 1 entries.|(kafka.server.epoch.LeaderEpochFileCache)
2022-01-29 12:47:26.335|WARN|[LeaderEpochCache PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7] New epoch entry EpochEntry(epoch=10, startOffset=0) caused truncation of conflicting entries ListBuffer(EpochEntry(epoch=8, startOffset=0)). Cache now contains 1 entries.|(kafka.server.epoch.LeaderEpochFileCache)
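These LeaderEpochFileCache warnings are the replica rewriting its epoch cache after each re-election; the cache is persisted next to the partition's segments in a leader-epoch-checkpoint file. A sketch for inspecting it (the log.dirs path is an assumption — take it from server.properties):

# After two header lines, each line is an '<epoch> <startOffset>' pair
cat /data/kafka-logs/PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7/leader-epoch-checkpoint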
# Logs extracted from state-change.log
2022-01-29 12:09:24.524|TRACE|[Broker id=4] Cached leader info PartitionState(controllerEpoch=8, leader=1, leaderEpoch=7, isr=[2, 1], zkVersion=11, replicas=[4, 1, 2], offlineReplicas=[]) for partition PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7 in response to UpdateMetadata request sent by controller 2 epoch 8 with correlation id 0|(state.change.logger)
2022-01-29 12:09:24.563|TRACE|[Broker id=4] Received LeaderAndIsr request PartitionState(controllerEpoch=8, leader=1, leaderEpoch=7, isr=2,1, zkVersion=11, replicas=4,1,2, isNew=false) correlation id 1 from controller 2 epoch 8 for partition PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7|(state.change.logger)
2022-01-29 12:09:24.569|TRACE|[Broker id=4] Handling LeaderAndIsr request correlationId 1 from controller 2 epoch 8 starting the become-follower transition for partition PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7 with leader 1|(state.change.logger)
2022-01-29 12:09:26.720|TRACE|[Broker id=4] Cached leader info PartitionState(controllerEpoch=8, leader=1, leaderEpoch=7, isr=[2, 1], zkVersion=11, replicas=[4, 1, 2], offlineReplicas=[]) for partition PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7 in response to UpdateMetadata request sent by controller 2 epoch 8 with correlation id 2|(state.change.logger)
2022-01-29 12:09:32.256|TRACE|[Broker id=4] Cached leader info PartitionState(controllerEpoch=8, leader=1, leaderEpoch=7, isr=[2, 1, 4], zkVersion=12, replicas=[4, 1, 2], offlineReplicas=[]) for partition PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7 in response to UpdateMetadata request sent by controller 2 epoch 8 with correlation id 9|(state.change.logger)
2022-01-29 12:12:26.247|TRACE|[Broker id=4] Cached leader info PartitionState(controllerEpoch=8, leader=4, leaderEpoch=8, isr=[2, 1, 4], zkVersion=13, replicas=[4, 1, 2], offlineReplicas=[]) for partition PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7 in response to UpdateMetadata request sent by controller 2 epoch 8 with correlation id 14|(state.change.logger)
2022-01-29 12:45:16.010|TRACE|[Broker id=4] Cached leader info PartitionState(controllerEpoch=8, leader=1, leaderEpoch=9, isr=[2, 1], zkVersion=14, replicas=[4, 1, 2], offlineReplicas=[]) for partition PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7 in response to UpdateMetadata request sent by controller 2 epoch 8 with correlation id 0|(state.change.logger)
2022-01-29 12:45:39.671|TRACE|[Broker id=4] Cached leader info PartitionState(controllerEpoch=8, leader=1, leaderEpoch=9, isr=[2, 1, 4], zkVersion=15, replicas=[4, 1, 2], offlineReplicas=[]) for partition PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7 in response to UpdateMetadata request sent by controller 2 epoch 8 with correlation id 8|(state.change.logger)
2022-01-29 12:47:26.246|TRACE|[Broker id=4] Received LeaderAndIsr request PartitionState(controllerEpoch=8, leader=4, leaderEpoch=10, isr=2,1,4, zkVersion=16, replicas=4,1,2, isNew=false) correlation id 9 from controller 2 epoch 8 for partition PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7|(state.change.logger)

Assessment:
1. From the entries picked out of kafkaServer.out for PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P-7, the partition is most likely loading or syncing metadata after a leader re-election.
2. For this topic in state-change.log, the corresponding times suggest broker 4 was dropped from the ISR because of a problem on that broker (a fault, or its replica lagging beyond replica.lag.time.max.ms), which triggered a leader re-election for the partition (see the sketch after this list).
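A quick way to verify the partition's current leader and ISR, as referenced above (the broker address is an assumption; on older 2.x builds of kafka-topics.sh, --zookeeper may be needed instead of --bootstrap-server):

# Leader, replicas and ISR for every partition of the topic
bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic PAAS_DOEM_PROD_INCIDENTASSIGNBUCKET_P
# Any partitions whose ISR is still missing a replica
bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --under-replicated-partitions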

Questions:
1. The errors triggered every Saturday remain unresolved. GC was suspected, but the GC logs show no Full GC and no long major GC; GC in the relevant time windows looks normal.
2. Monitoring shows the servers were running normally at those times, with no surge in business load.
3. Are there any ideas for troubleshooting the errors above? (One hedged direction is sketched below.)
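Since GC and load look clean, one hedged direction is to rule out host-level stalls that could outlast the broker's ZooKeeper session: compare the configured session timeout against anything scheduled on the machines around Saturday midday (cron jobs, backups, log rotation, VM snapshots). A sketch, with paths as assumptions:

# Effective session timeout on the broker (if unset, the default on older 2.x lines is 6000 ms)
grep zookeeper.session.timeout.ms config/server.properties
# Weekly scheduled work on brokers 4/5 and on the ZK nodes
crontab -l; ls /etc/cron.d /etc/cron.weekly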

  • Looking at your logs, there is also an empty broker list ("deleted brokers: ,") — could that be stale data from before? Also, please edit the question and add the Kafka cluster information, using the describe command. 半兽人 2 years ago Reply
    @半兽人 The Kafka broker IDs are 1-5; the problems occur on brokers 4 and 5. I can't connect to the intranet right now to get more information; in theory there should be no stale data. Reply
    @偉 Then wait until you can log in to the intranet; describing the cluster gives a clearer picture. The INFO and WARN logs can, in theory, be ignored. 半兽人 Reply