Kafka consumer keeps failing after enabling Kerberos

漂泊的美好 posted: 2016-08-29   last updated: 2016-08-30
  •   13 subscribers, 1266 views

Kerberos itself is confirmed working: without running kinit for a principal, listing topics fails as expected. But now the console consumer keeps printing the error below, and unless I hit Ctrl+C it repeats without pause:

[root@vmw201 KAFKA]# bin/kafka-console-consumer --zookeeper 172.16.18.201:2181/kafka --topic test2
[2016-08-29 21:33:01,613] WARN [console-consumer-63083_vmw201-1472477574752-b6f712cb-leader-finder-thread], Failed to find leader for Set([test2,4], [test2,0], [test2,1], [test2,3], [test2,2], [test2,5]) (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
kafka.common.BrokerEndPointNotAvailableException: End point PLAINTEXT not found for broker 394
    at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:141)
    at kafka.utils.ZkUtils$$anonfun$getAllBrokerEndPointsForChannel$1.apply(ZkUtils.scala:180)
    at kafka.utils.ZkUtils$$anonfun$getAllBrokerEndPointsForChannel$1.apply(ZkUtils.scala:180)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at kafka.utils.ZkUtils.getAllBrokerEndPointsForChannel(ZkUtils.scala:180)
    at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:65)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)

The exception says no leader can be found for the partitions, yet the partitions clearly do have leaders:

[root@vmw201 KAFKA]# bin/kafka-topics -describe --zookeeper 172.16.18.201:2181/kafka --topic test2
Topic:test2    PartitionCount:6    ReplicationFactor:1    Configs:
    Topic: test2    Partition: 0    Leader: 394    Replicas: 394    Isr: 394
    Topic: test2    Partition: 1    Leader: 395    Replicas: 395    Isr: 395
    Topic: test2    Partition: 2    Leader: 396    Replicas: 396    Isr: 396
    Topic: test2    Partition: 3    Leader: 394    Replicas: 394    Isr: 394
    Topic: test2    Partition: 4    Leader: 395    Replicas: 395    Isr: 395
    Topic: test2    Partition: 5    Leader: 396    Replicas: 396    Isr: 396
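This is why the describe output and the error disagree: the old ZooKeeper-based console consumer reads each broker's registration from /brokers/ids/<id> and looks specifically for a PLAINTEXT endpoint. Once the brokers listen only on SASL_PLAINTEXT (Kerberos), no PLAINTEXT endpoint is registered, so leader lookup fails with BrokerEndPointNotAvailableException even though every partition has a live leader. Under Kerberos a broker registration in ZooKeeper looks roughly like this sketch (host and port are illustrative):

```json
{
  "version": 3,
  "endpoints": ["SASL_PLAINTEXT://172.16.18.201:9093"],
  "host": null,
  "port": -1,
  "jmx_port": -1
}
```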

From what I found online this seems to be related to the new consumer API. Any pointers would be appreciated.

  • With Kerberos enabled you can only use the new clients:

    # New consumer
    bin/kafka-console-consumer.sh --bootstrap-server 10.211.55.5:9093 --topic test --new-consumer --from-beginning --consumer.config config/consumer.properties

    # New producer
    bin/kafka-console-producer.sh --broker-list 10.211.55.5:9093 --topic test --producer.config config/producer.properties
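    For the new clients to authenticate over Kerberos, the file passed via --consumer.config / --producer.config needs the SASL settings. A minimal sketch, assuming the brokers' Kerberos service name is kafka (check your broker config; this is an assumption):

    ```properties
    # consumer.properties / producer.properties (sketch)
    security.protocol=SASL_PLAINTEXT
    sasl.kerberos.service.name=kafka
    ```

    The JVM also needs a JAAS entry (a KafkaClient section using Krb5LoginModule, e.g. with useTicketCache=true after a kinit), pointed to before running the console tools, for example via KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf" (the path is illustrative).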

    • [root@vmw201 KAFKA]# bin/kafka-console-producer --broker-list 172.16.18.201:9093 --topic test2 --producer.config etc/kafka/conf.dist/producer.properties
      [2016-08-30 13:34:43,673] ERROR The TGT cannot be renewed beyond the next expiry date: Wed Aug 31 13:27:45 CST 2016.This process will not be able to authenticate new SASL connections after that time (for example, it will not be able to authenticate a new connection with a Kafka Broker).  Ask your system administrator to either increase the 'renew until' time by doing : 'modprinc -maxrenewlife null ' within kadmin, or instead, to generate a keytab for null. Because the TGT's expiry cannot be further extended by refreshing, exiting refresh thread now. (org.apache.kafka.common.security.kerberos.Login)

      Now I get this error instead. How do I fix it?
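      This ERROR means the TGT obtained via kinit (or the keytab login) is not renewable beyond its "renew until" time, so Kafka's ticket-refresh thread gives up. One common fix, sketched here for MIT Kerberos with placeholder principal and realm names, is to raise the maximum renewable lifetime on both the client principal and the realm's krbtgt principal on the KDC, then obtain a fresh renewable ticket:

      ```
      # On the KDC (principal and realm names below are placeholders)
      kadmin.local -q "modprinc -maxrenewlife 7days kafka/vmw201@EXAMPLE.COM"
      kadmin.local -q "modprinc -maxrenewlife 7days krbtgt/EXAMPLE.COM@EXAMPLE.COM"

      # On the client: request a renewable ticket again
      kinit -r 7d kafka/vmw201@EXAMPLE.COM
      ```

      The client-side renew_lifetime setting in krb5.conf also caps how renewable the issued ticket is, so it may need raising as well.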