Kafka: Balancing leadership

半兽人 · Posted: 2015-03-10 · Last updated: 2017-12-20

Whenever a broker stops or crashes, leadership for that broker's partitions transfers to other replicas. This means that by default, when the broker is restarted, it will only be a follower for all its partitions, meaning it will not be used for client reads and writes.

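You can see which replica currently leads each partition with the topic describe command. A minimal sketch, assuming a hypothetical topic named my-topic and the same ZooKeeper connection string used in the election command below; the sample output illustrates a broker 1 that has rejoined the ISR after a restart but is no longer the leader:

 > bin/kafka-topics.sh --zookeeper zk_host:port/chroot --describe --topic my-topic
   Topic: my-topic   Partition: 0   Leader: 5   Replicas: 1,5,9   Isr: 5,9,1

Here Replicas: 1,5,9 is the assigned replica list (broker 1 is first in the list, which matters below), Leader: 5 is the broker currently serving client reads and writes for partition 0, and Isr lists the in-sync replicas.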


To avoid this imbalance, Kafka has a notion of preferred replicas. If the list of replicas for a partition is 1,5,9 then node 1 is preferred as the leader to either node 5 or 9 because it is earlier in the replica list. You can have the Kafka cluster try to restore leadership to the restored replicas by running the command:

 > bin/kafka-preferred-replica-election.sh --zookeeper zk_host:port/chroot
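Note that in newer Kafka releases (2.4 and later) this ZooKeeper-based script was replaced by kafka-leader-election.sh, which talks to the brokers directly. A rough equivalent of the command above, assuming a broker reachable at localhost:9092 (the address is a placeholder):

 > bin/kafka-leader-election.sh --bootstrap-server localhost:9092 --election-type preferred --all-topic-partitions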

Since running this command can be tedious, you can also configure Kafka to do this automatically by setting the following broker configuration:
    auto.leader.rebalance.enable=true 
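
Two related broker settings govern when this automatic rebalance actually runs; the values shown here are Kafka's defaults:

    # how often the controller checks whether leadership is balanced
    leader.imbalance.check.interval.seconds=300
    # trigger a rebalance once the ratio of non-preferred leaders on a broker exceeds this percentage
    leader.imbalance.per.broker.percentage=10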





