Environment
Version: kafka_2.12-2.3.0
Hostname: orchome
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.5.1804 (Core)
Release: 7.5.1804
Codename: Core
Linux version 3.10.0-862.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC) ) #1 SMP Fri Apr 20 16:44:24 UTC 2018
Generating Kerberos principals
## Create the principals
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey zookeeper/orchome@EXAMPLE.COM'
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/orchome@EXAMPLE.COM'
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey clients/orchome@EXAMPLE.COM'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka_server.keytab kafka/orchome@EXAMPLE.COM"
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka_zookeeper.keytab zookeeper/orchome@EXAMPLE.COM"
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka_client.keytab clients/orchome@EXAMPLE.COM"
## Verify
klist -t -e -k /etc/security/keytabs/kafka_zookeeper.keytab
klist -t -e -k /etc/security/keytabs/kafka_server.keytab
klist -t -e -k /etc/security/keytabs/kafka_client.keytab
Details of each file
more /etc/krb5.conf
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = EXAMPLE.COM
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
EXAMPLE.COM = {
kdc = orchome
admin_server = orchome
}
[domain_realm]
kafka = EXAMPLE.COM
zookeeper = EXAMPLE.COM
clients = EXAMPLE.COM
kadmin.local
Authenticating as principal root/admin@EXAMPLE.COM with password.
kadmin.local: listprincs
K/M@EXAMPLE.COM
admin/admin@EXAMPLE.COM
clients/orchome@EXAMPLE.COM
kadmin/admin@EXAMPLE.COM
kadmin/changepw@EXAMPLE.COM
kadmin/orchome@EXAMPLE.COM
kafka/orchome@EXAMPLE.COM
krbtgt/EXAMPLE.COM@EXAMPLE.COM
krbtgt/orchome@EXAMPLE.COM
zookeeper/orchome@EXAMPLE.COM
klist -t -e -k /var/kerberos/krb5kdc/kafka.keytab
Keytab name: FILE:/var/kerberos/krb5kdc/kafka.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
3 07/24/16 00:58:30 kafka/orchome@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
3 07/24/16 00:58:30 kafka/orchome@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
3 07/24/16 00:58:30 kafka/orchome@EXAMPLE.COM (des3-cbc-sha1)
3 07/24/16 00:58:30 kafka/orchome@EXAMPLE.COM (arcfour-hmac)
3 07/24/16 00:58:30 kafka/orchome@EXAMPLE.COM (des-hmac-sha1)
3 07/24/16 00:58:30 kafka/orchome@EXAMPLE.COM (des-cbc-md5)
2 07/24/16 12:23:18 zookeeper/orchome@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
2 07/24/16 12:23:18 zookeeper/orchome@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
2 07/24/16 12:23:18 zookeeper/orchome@EXAMPLE.COM (des3-cbc-sha1)
2 07/24/16 12:23:18 zookeeper/orchome@EXAMPLE.COM (arcfour-hmac)
2 07/24/16 12:23:18 zookeeper/orchome@EXAMPLE.COM (des-hmac-sha1)
2 07/24/16 12:23:18 zookeeper/orchome@EXAMPLE.COM (des-cbc-md5)
2 07/25/16 11:31:37 kafka/127.0.0.1@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
2 07/25/16 11:31:37 kafka/127.0.0.1@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
2 07/25/16 11:31:37 kafka/127.0.0.1@EXAMPLE.COM (des3-cbc-sha1)
2 07/25/16 11:31:37 kafka/127.0.0.1@EXAMPLE.COM (arcfour-hmac)
2 07/25/16 11:31:37 kafka/127.0.0.1@EXAMPLE.COM (des-hmac-sha1)
2 07/25/16 11:31:37 kafka/127.0.0.1@EXAMPLE.COM (des-cbc-md5)
3 07/25/16 13:13:31 kafka/orchome@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
3 07/25/16 13:13:31 kafka/orchome@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
3 07/25/16 13:13:31 kafka/orchome@EXAMPLE.COM (des3-cbc-sha1)
3 07/25/16 13:13:31 kafka/orchome@EXAMPLE.COM (arcfour-hmac)
3 07/25/16 13:13:31 kafka/orchome@EXAMPLE.COM (des-hmac-sha1)
3 07/25/16 13:13:31 kafka/orchome@EXAMPLE.COM (des-cbc-md5)
2 07/25/16 15:07:58 zookeeper/127.0.0.1@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
2 07/25/16 15:07:58 zookeeper/127.0.0.1@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
2 07/25/16 15:07:58 zookeeper/127.0.0.1@EXAMPLE.COM (des3-cbc-sha1)
2 07/25/16 15:07:58 zookeeper/127.0.0.1@EXAMPLE.COM (arcfour-hmac)
2 07/25/16 15:07:58 zookeeper/127.0.0.1@EXAMPLE.COM (des-hmac-sha1)
2 07/25/16 15:07:58 zookeeper/127.0.0.1@EXAMPLE.COM (des-cbc-md5)
2 07/25/16 18:47:55 clients@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
2 07/25/16 18:47:55 clients@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
2 07/25/16 18:47:55 clients@EXAMPLE.COM (des3-cbc-sha1)
2 07/25/16 18:47:55 clients@EXAMPLE.COM (arcfour-hmac)
2 07/25/16 18:47:55 clients@EXAMPLE.COM (des-hmac-sha1)
2 07/25/16 18:47:55 clients@EXAMPLE.COM (des-cbc-md5)
more /etc/kafka/zookeeper_jaas.conf
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
useTicketCache=false
keyTab="/etc/security/keytabs/kafka_zookeeper.keytab"
principal="zookeeper/orchome@EXAMPLE.COM";
};
more /etc/kafka/kafka_server_jaas.conf
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_server.keytab"
principal="kafka/orchome@EXAMPLE.COM";
};
// Zookeeper client authentication
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_server.keytab"
principal="kafka/orchome@EXAMPLE.COM";
};
more /etc/kafka/kafka_client_jaas.conf
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_client.keytab"
principal="clients/orchome@EXAMPLE.COM";
};
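A malformed JAAS file (a missing closing "};" or a missing ";" after the last option, a point that comes up in the comments below) is a common cause of "Could not find a 'KafkaClient' entry" errors. A rough sanity check, using a scratch copy of the client file above (the /tmp path is only for illustration; point it at the real file):

```shell
# Write a scratch copy of the client JAAS file, then check its shape.
jaas=/tmp/kafka_client_jaas.conf
cat > "$jaas" <<'EOF'
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_client.keytab"
    principal="clients/orchome@EXAMPLE.COM";
};
EOF
# Every context must close with "};" ...
grep -q '^};' "$jaas" && echo "context closes with };"
# ...and the last option line must end with its own ";" (two semicolons in total).
grep 'principal=' "$jaas" | grep -q ';$' && echo "last option ends with ;"
```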
more config/server.properties
listeners=SASL_PLAINTEXT://orchome:9093
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
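These five settings are what switch the broker onto SASL/GSSAPI; if any one of them is missing, clients typically just see "disconnected" warnings. A quick sanity check that they all made it into the file could look like this (the /tmp copy is an illustrative stand-in for the real config/server.properties):

```shell
# Verify the SASL-related keys are present in the broker config.
CONF=/tmp/server.properties
cat > "$CONF" <<'EOF'
listeners=SASL_PLAINTEXT://orchome:9093
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
EOF
for key in listeners security.inter.broker.protocol \
           sasl.mechanism.inter.broker.protocol \
           sasl.enabled.mechanisms sasl.kerberos.service.name; do
    grep -q "^${key}=" "$CONF" && echo "ok: $key" || echo "MISSING: $key"
done
```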
more start-zk-and-kafka.sh
#!/bin/bash
export KAFKA_HEAP_OPTS='-Xmx256M'
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/zookeeper_jaas.conf'
bin/zookeeper-server-start.sh config/zookeeper.properties &
sleep 5
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf'
bin/kafka-server-start.sh config/server.properties
more config/zookeeper.properties
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
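Only the SASL-related lines are shown above. For context, a minimal zookeeper.properties carrying them might look like the following; the dataDir, clientPort, and maxClientCnxns values are illustrative assumptions, not from this article:

```shell
# Write an illustrative zookeeper.properties combining standard and SASL settings.
cat > /tmp/zookeeper.properties <<'EOF'
# Standard ZooKeeper settings (illustrative values)
dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
# SASL settings from this article
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
EOF
grep -q 'SASLAuthenticationProvider' /tmp/zookeeper.properties && echo "SASL provider configured"
```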
more config/producer.properties (same content for config/consumer.properties)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
more producer2.sh
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"
bin/kafka-console-producer.sh --broker-list orchome:9093 --topic test --producer.config config/producer.properties
more consumer2.sh
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"
bin/kafka-console-consumer.sh --bootstrap-server orchome:9093 --topic test --from-beginning --consumer.config config/consumer.properties
A question about the name orchome in the config files: if the server has no DNS name, and has two NICs with two IPs, which IP should the name map to? Do I need to add a mapping in the hosts file?
orchome is the hostname. Does the mapping need to be added for both NICs?
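Kerberos is strict about hostnames: the principal kafka/orchome@EXAMPLE.COM only matches when clients reach the broker as orchome, so on a dual-NIC machine the name should map to the IP that brokers and clients actually use to reach each other, and the /etc/hosts entry should be added on every machine involved. A quick way to check what a host currently resolves to (orchome is this article's hostname; substitute your own):

```shell
# Check what the Kerberos hostname resolves to on this machine.
host=orchome
getent hosts "$host" || echo "$host not resolvable -- add it to /etc/hosts"
```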
Where did kafka_zookeeper.keytab and kafka.keytab come from? Is a configuration step missing?
Did you get this resolved?
Hi OP,
After configuring as above, starting kafka-server produces the following error:
[2020-01-08 17:31:04,658] ERROR SASL authentication failed using login context 'Client'. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2020-01-08 17:31:04,661] INFO zookeeper state changed (AuthFailed) (org.I0Itec.zkclient.ZkClient)
[2020-01-08 17:31:04,664] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2020-01-08 17:31:04,667] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure
    at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:947)
    at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:924)
    at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1231)
    at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
    at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
ZooKeeper reports the following error:
[2020-01-08 17:31:05,077] WARN caught end of stream exception (org.apache.zookeeper.server.NIOServerCnxn)
EndOfStreamException: Unable to read additional data from client sessionid 0x16f847e09240000, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:239)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
    at java.lang.Thread.run(Thread.java:748)
Authentication failed; go through it more carefully.
Watch the relevant logs to track down the problem.
Yes, it looks like authentication fails when Kafka connects to ZooKeeper. The Kerberos logs show nothing wrong, and connecting to ZooKeeper with zkCli works fine. These two logs are the only errors I can find. I have checked the configuration carefully many times and rebuilt everything twice, but the problem persists and I cannot tell what the cause is.
The ZooKeeper log also shows the message "cnxn.saslServer is null: cnxn object did not initialize its saslServer properly". Does ZooKeeper need some additional setup?
2020-01-09 10:08:34,162 [myid:] - ERROR [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2182:ZooKeeperServer@968] - cnxn.saslServer is null: cnxn object did not initialize its saslServer properly.
2020-01-09 10:08:34,583 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2182:NIOServerCnxn@360] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x16f880d50a70001, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:231)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
    at java.lang.Thread.run(Thread.java:748)
This problem is solved; sharing the fix.
I had originally added KAFKA_OPTS in bin/zkServer.sh. Other material I found uses the variable name JVMFLAGS instead; after changing it to JVMFLAGS and restarting, everything worked.
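For anyone else hitting this: a standalone ZooKeeper's zkServer.sh passes JVMFLAGS (picked up via zkEnv.sh) to the JVM and ignores KAFKA_OPTS, so the fix described above looks roughly like this (paths as in this article):

```shell
# In zkServer.sh, or better in a zookeeper-env.sh sourced by zkEnv.sh:
export JVMFLAGS="-Djava.security.krb5.conf=/etc/krb5.conf \
-Djava.security.auth.login.config=/etc/kafka/zookeeper_jaas.conf"
```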
Hi OP, I configured Kerberos following the steps above. The producer and consumer both work, but the bin/kafka-run-class.sh script does not run correctly.
After exporting this environment variable:
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"
[root@luonan kafka]# bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 192.168.8.143:9093 -topic test --time -1
[2019-08-20 00:59:17,776] WARN [Consumer clientId=GetOffsetShell, groupId=] Bootstrap broker 192.168.8.143:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2019-08-20 00:59:18,184] WARN [Consumer clientId=GetOffsetShell, groupId=] Bootstrap broker 192.168.8.143:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2019-08-20 00:59:18,597] WARN [Consumer clientId=GetOffsetShell, groupId=] Bootstrap broker 192.168.8.143:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
The approach at https://blog.csdn.net/lyflyyvip/article/details/85715801 did not work either.
Is there a way to solve this? (The file path and the topic information are both correct.)
I hit the same "disconnected" problem when consuming Kafka from Spark. Where could it be going wrong?
I ran into this too. Did you solve it?
Hi OP, what should the contents of kafka_client_jaas.conf be? Following your configuration I can list and create topics, but running bin/kafka-console-producer.sh --broker-list dc-server11:6667 --topic test --producer.config config/producer.properties gives WARN Bootstrap broker dc-server11:6667 disconnected (org.apache.kafka.clients.NetworkClient). Do I first need to create a client keytab, write it into kafka_client_jaas.conf, and pass that file to the JVM?
Following the steps above, I set up Kerberos on CentOS 7 and configured Kafka's Kerberos authentication accordingly. ZooKeeper starts successfully with:
[2018-12-07 14:39:01,096] INFO TGT refresh sleeping until: Sat Dec 08 10:02:52 CST 2018 (org.apache.zookeeper.Login)
[2018-12-07 14:39:01,108] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
Starting the Kafka broker shows INFO [KafkaServer id=1] started (kafka.server.KafkaServer), followed by:
[2018-12-07 14:39:54,052] ERROR [Controller id=1, targetBrokerId=1] Connection to node 1 failed authentication due to: Authentication failed due to invalid credentials with SASL mechanism GSSAPI (org.apache.kafka.clients.NetworkClient)
What could cause this authentication failure? My jaas and properties files are the same as yours.
The detailed error is:
[2018-12-07 14:45:09,983] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2018-12-07 14:45:10,059] INFO [/kafka-acl-changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2018-12-07 14:45:10,127] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2018-12-07 14:45:10,175] INFO [SocketServer brokerId=1] Started processors for 1 acceptors (kafka.network.SocketServer)
[2018-12-07 14:45:10,192] INFO Kafka version : 1.1.1 (org.apache.kafka.common.utils.AppInfoParser)
[2018-12-07 14:45:10,192] INFO Kafka commitId : 8e07427ffb493498 (org.apache.kafka.common.utils.AppInfoParser)
[2018-12-07 14:45:10,194] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
[2018-12-07 14:45:10,299] ERROR [Controller id=1, targetBrokerId=1] Connection to node 1 failed authentication due to: Authentication failed due to invalid credentials with SASL mechanism GSSAPI (org.apache.kafka.clients.NetworkClient)
The corresponding ZooKeeper log:
[2018-12-07 14:45:09,977] INFO Accepted socket connection from /10.201.83.55:48436 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2018-12-07 14:45:09,979] INFO Client attempting to establish new session at /10.201.83.55:48436 (org.apache.zookeeper.server.ZooKeeperServer)
[2018-12-07 14:45:09,982] INFO Established session 0x167876410970003 with negotiated timeout 6000 for client /10.201.83.55:48436 (org.apache.zookeeper.server.ZooKeeperServer)
[2018-12-07 14:45:09,994] INFO Successfully authenticated client: authenticationID=kafka/weiwei@EXAMPLE.COM; authorizationID=kafka/weiwei@EXAMPLE.COM. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2018-12-07 14:45:09,994] INFO Setting authorizedID: kafka/weiwei@EXAMPLE.COM (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2018-12-07 14:45:09,994] INFO adding SASL authorization for authorizationID: kafka/weiwei@EXAMPLE.COM (org.apache.zookeeper.server.ZooKeeperServer)
[2018-12-07 14:45:09,995] INFO Got user-level KeeperException when processing sessionid:0x167876410970003 type:create cxid:0x3 zxid:0x94 txntype:-1 reqpath:n/a Error Path:/kafka-acl Error:KeeperErrorCode = NodeExists for /kafka-acl (org.apache.zookeeper.server.PrepRequestProcessor)
Hi, Storm consuming from Kafka fails authentication; Storm, Kafka, and ZooKeeper all require authentication. The storm_jaas.conf is configured as follows:
StormServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/nimbus.service.keytab"
    storeKey=true
    useTicketCache=false
    principal="nimbus/zdhdpvdca03.crhd0a.crc.hk@ZDHDPVDCA01.CRHD0A.CRC.HK";
};
StormClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/storm.headless.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="nimbus"
    principal="storm-bdos@ZDHDPVDCA01.CRHD0A.CRC.HK";
};
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/storm.headless.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="zookeeper"
    principal="storm-bdos@ZDHDPVDCA01.CRHD0A.CRC.HK";
};
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/storm.headless.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    principal="storm-bdos@ZDHDPVDCA01.CRHD0A.CRC.HK";
};
Error message:
2018-10-29 16:40:14.267 o.a.z.c.ZooKeeperSaslClient [ERROR] An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) occurred when evaluating Zookeeper Quorum Member's received SASL token. This may be caused by Java's being unable to resolve the Zookeeper Quorum Member's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Zookeeper Client will go to AUTH_FAILED state.
2018-10-29 16:40:14.267 o.a.z.ClientCnxn [ERROR] SASL authentication with Zookeeper Quorum member failed: javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) occurred when evaluating Zookeeper Quorum Member's received SASL token. This may be caused by Java's being unable to resolve the Zookeeper Quorum Member's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Zookeeper Client will go to AUTH_FAILED state.
2018-10-29 16:40:14.268 o.a.c.ConnectionState [ERROR] Authentication failed
2018-10-29 16:40:14.281 b.s.util [ERROR] Async loop died!
java.lang.RuntimeException: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /brokers/topics/test/partitions
    at storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:82) ~[stormjar.jar:?]
Hi, did you solve this? I'm also integrating Storm with Kafka, with Kerberos authentication on both Kafka and ZooKeeper.
Can users be added dynamically? After adding a user, does the keytab need to be updated and Kafka restarted? Thanks.
https://www.orchome.com/553
Search that page for "dynamic".
Thanks for the reply. I searched but I'm still not clear: with ACLs set, if different users produce and consume, do they need separate client instances? Can producers no longer share one instance?
Hi, I enabled SASL authentication on Kafka, then in my Java producer and consumer I set:
System.setProperty("java.security.auth.login.config", "/Users/Sean/Documents/Gitrep/bigdata/kafka/src/main/resources/kafka_client_jaas.conf"); // set as a system property; takes the path to the config file
props.put("security.protocol", "SASL_PLAINTEXT");
props.put("sasl.mechanism", "PLAIN");
Now both producing and consuming fail with: Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is /home/hadoop/kafka_2.11-1.1.0/config/kafka_client_jaas.conf
It can't find that entry; look at your error closely.
Why would the entry be missing? Plenty of posts show it.
Did you ever solve this? I'm hitting the same problem.
I meant that no 'KafkaClient' entry was found in the JAAS configuration.
Mine does have the KafkaClient entry, and a Java program can read it. The problem is with the Kettle ETL tool: its Kafka consumer step fails every time.
Please open a thread in the Q&A section, with your code and a detailed description.
Hi, was this problem ever solved?
Not solved; I'm using the Kafka step in the Kettle ETL tool.
Check the format of the kafka_client_jaas.conf file: it should be Client { ...; }; note there are two semicolons ";" in total.
Hi, I followed this article's configuration and ran into a problem.
Hoping someone can help.
That message means the client code does not support it yet.