A Kafka cluster node failure causes data loss

Posted: 2019-07-23 · Last updated: 2019-07-23 11:42:05

Kafka version: kafka_2.12-0.11.0.2.
The cluster has three nodes, configured as follows:

broker.id=1
listeners=PLAINTEXT://192.168.1.2:16092
advertised.listeners=PLAINTEXT://192.168.1.2:16092
num.network.threads=10
num.io.threads=20
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka-logs
num.partitions=40
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=2
log.retention.hours=12
message.max.bytes=5242880
default.replication.factor=2
replica.fetch.max.bytes=5242880
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=zk01:2181,node02.zk02:2181,zk03:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=3000

After stopping one of the nodes, we found that part of the data was lost: the producer sent 50 messages, and roughly 5 of them went missing.
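
The thread never shows the producer configuration, and a producer running with acks=1 (or 0) will lose any message that was acknowledged by a leader that dies before replication catches up. Below is a minimal sketch of the producer-side settings that govern durability; the property names are standard Kafka producer configs, but the values are illustrative assumptions, not taken from this thread:

# wait for all in-sync replicas to acknowledge each write
acks=all
# retry transient failures (e.g. a leader election while a broker is stopped)
retries=10
retry.backoff.ms=500
# keep message ordering intact while retrying
max.in.flight.requests.per.connection=1

Note that with min.insync.replicas=2 on a topic whose replication factor is also 2, stopping one broker leaves only one in-sync replica, so an acks=all producer will receive NotEnoughReplicas errors instead of silently losing data.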

How many replicas is your topic configured with? Check it with the command-line tool and paste the output here.

Hi, the topic's replication factor is 2. Details below:

./kafka-topics.sh --zookeeper localhost:11001 --describe --topic dipsms
Topic:dipsms    PartitionCount:40    ReplicationFactor:2    Configs:min.insync.replicas=2
    Topic: dipsms    Partition: 0    Leader: 3    Replicas: 3,1    Isr: 1,3
    Topic: dipsms    Partition: 1    Leader: 1    Replicas: 1,2    Isr: 2,1
    Topic: dipsms    Partition: 2    Leader: 2    Replicas: 2,3    Isr: 2,3
    Topic: dipsms    Partition: 3    Leader: 3    Replicas: 3,2    Isr: 2,3
    Topic: dipsms    Partition: 4    Leader: 1    Replicas: 1,3    Isr: 1,3
    Topic: dipsms    Partition: 5    Leader: 2    Replicas: 2,1    Isr: 2,1
    Topic: dipsms    Partition: 6    Leader: 3    Replicas: 3,1    Isr: 1,3
    Topic: dipsms    Partition: 7    Leader: 1    Replicas: 1,2    Isr: 1,2
    Topic: dipsms    Partition: 8    Leader: 3    Replicas: 3,2    Isr: 3,2
    Topic: dipsms    Partition: 9    Leader: 1    Replicas: 1,2    Isr: 1,2
    Topic: dipsms    Partition: 10    Leader: 2    Replicas: 2,3    Isr: 2,3
    Topic: dipsms    Partition: 11    Leader: 3    Replicas: 3,1    Isr: 3,1
    Topic: dipsms    Partition: 12    Leader: 1    Replicas: 1,3    Isr: 1,3
    Topic: dipsms    Partition: 13    Leader: 2    Replicas: 2,1    Isr: 2,1
    Topic: dipsms    Partition: 14    Leader: 3    Replicas: 3,2    Isr: 3,2
    Topic: dipsms    Partition: 15    Leader: 1    Replicas: 1,2    Isr: 1,2
    Topic: dipsms    Partition: 16    Leader: 2    Replicas: 2,3    Isr: 2,3
    Topic: dipsms    Partition: 17    Leader: 3    Replicas: 3,1    Isr: 3,1
    Topic: dipsms    Partition: 18    Leader: 1    Replicas: 1,3    Isr: 1,3
    Topic: dipsms    Partition: 19    Leader: 2    Replicas: 2,1    Isr: 2,1
    Topic: dipsms    Partition: 20    Leader: 3    Replicas: 3,2    Isr: 3,2
    Topic: dipsms    Partition: 21    Leader: 1    Replicas: 1,2    Isr: 1,2
    Topic: dipsms    Partition: 22    Leader: 2    Replicas: 2,3    Isr: 2,3
    Topic: dipsms    Partition: 23    Leader: 3    Replicas: 3,1    Isr: 3,1
    Topic: dipsms    Partition: 24    Leader: 1    Replicas: 1,3    Isr: 1,3
    Topic: dipsms    Partition: 25    Leader: 2    Replicas: 2,1    Isr: 2,1
    Topic: dipsms    Partition: 26    Leader: 3    Replicas: 3,2    Isr: 3,2
    Topic: dipsms    Partition: 27    Leader: 1    Replicas: 1,2    Isr: 1,2
    Topic: dipsms    Partition: 28    Leader: 2    Replicas: 2,3    Isr: 2,3
    Topic: dipsms    Partition: 29    Leader: 3    Replicas: 3,1    Isr: 3,1
    Topic: dipsms    Partition: 30    Leader: 1    Replicas: 1,3    Isr: 1,3
    Topic: dipsms    Partition: 31    Leader: 2    Replicas: 2,1    Isr: 2,1
    Topic: dipsms    Partition: 32    Leader: 3    Replicas: 3,2    Isr: 3,2
    Topic: dipsms    Partition: 33    Leader: 1    Replicas: 1,2    Isr: 1,2
    Topic: dipsms    Partition: 34    Leader: 2    Replicas: 2,3    Isr: 2,3
    Topic: dipsms    Partition: 35    Leader: 3    Replicas: 3,1    Isr: 3,1
    Topic: dipsms    Partition: 36    Leader: 1    Replicas: 1,3    Isr: 1,3
    Topic: dipsms    Partition: 37    Leader: 2    Replicas: 2,1    Isr: 2,1
    Topic: dipsms    Partition: 38    Leader: 3    Replicas: 3,2    Isr: 3,2
    Topic: dipsms    Partition: 39    Leader: 1    Replicas: 1,2    Isr: 1,2
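
To determine whether the missing messages ever reached Kafka at all, one option is to sum the log-end offsets across all 40 partitions after a test run and compare the total with the number of messages sent. A sketch using the GetOffsetShell tool that ships with 0.11 (the broker address is assumed from the listener config above):

# print the latest offset (--time -1) of every partition of the topic,
# then sum the third colon-separated field to count stored messages
./kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list 192.168.1.2:16092 \
  --topic dipsms --time -1 \
  | awk -F: '{sum += $3} END {print sum}'

If the sum matches what the producer sent, the loss is downstream of Kafka; if it comes up short, the messages never made it into the log.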
半兽人 · 5 years ago

Your topic looks fine. Try stopping each of the other brokers in turn, and see whether data is lost every time or only when one particular broker goes down.
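
A sketch of one way to run that test with the stock console tools, so the count is easy to verify; the broker address and the choice of 50 numbered messages are assumptions for illustration, and the consumer count includes everything already in the topic, so run it against a fresh test topic or subtract the earlier total:

# with one broker stopped, produce 50 numbered messages, requiring
# acknowledgement from all in-sync replicas (-1 == acks=all)
seq 1 50 | ./kafka-console-producer.sh \
  --broker-list 192.168.1.2:16092 \
  --topic dipsms \
  --request-required-acks -1

# read the whole topic back and count; --timeout-ms makes the consumer
# exit once it has been idle, so the pipe terminates
./kafka-console-consumer.sh \
  --bootstrap-server 192.168.1.2:16092 \
  --topic dipsms --from-beginning \
  --timeout-ms 10000 | wc -l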

In reply to 半兽人 · 5 years ago

I've tried them all; data is still lost either way. The producer is Flume. Today I'll test a single-node Flume against a single-node Kafka to see whether the loss is happening on the Flume side of the transfer.
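
If the loss does turn out to be on the Flume side, note that Flume's Kafka sink forwards arbitrary producer settings through the kafka.producer.* prefix, so the durability settings sketched earlier can be applied there as well. A hypothetical snippet (the agent/sink names a1/k1 are placeholders, and the property names follow the Flume 1.7 Kafka sink convention; the thread doesn't show the actual Flume config):

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = 192.168.1.2:16092
a1.sinks.k1.kafka.topic = dipsms
# pass-through Kafka producer properties
a1.sinks.k1.kafka.producer.acks = all
a1.sinks.k1.kafka.producer.retries = 10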

半兽人 · 5 years ago

If the messages actually made it into Kafka, then with this setup they shouldn't be lost. Your approach is sound.
