Environment:
6 servers:
192.168.1.80 node01
192.168.1.81 node02
192.168.1.82 node03
192.168.1.83 node04
192.168.1.84 node05
192.168.1.85 node06
ZooKeeper cluster (3 nodes):
node01
node02
node03
Kafka cluster (4 nodes):
node01
node02
node03
node04
One test server (runs the producer and consumer programs, developed with the new client API):
node05
One Kerberos server:
node06
ZooKeeper configuration:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/zookeeper/data
clientPort=2181
server.0=node01:2888:3888
server.1=node02:2888:3888
server.2=node03:2888:3888
dataLogDir=/home/zookeeper/log
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
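Not shown above, but each ZooKeeper node also needs a myid file under dataDir whose value matches its server.N line in zoo.cfg. A minimal sketch, assuming the numbering above (node01=0, node02=1, node03=2):
# on node01
echo 0 > /home/zookeeper/data/myid
# on node02
echo 1 > /home/zookeeper/data/myid
# on node03
echo 2 > /home/zookeeper/data/myid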
ZooKeeper startup script: the following was added to zkServer.sh:
export KERBEROS_PARAM="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/zookeeper_jaas.conf"
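By itself the export does nothing unless the variable also reaches the JVM arguments. A minimal sketch of wiring it in, assuming the stock zkServer.sh, which passes $JVMFLAGS to the java command:
# in zkServer.sh, after the export above
export JVMFLAGS="$KERBEROS_PARAM $JVMFLAGS"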
Kafka configuration: the following was added to the startup script kafka-server-start.sh:
export KAFKA_KERBEROS_PARAMS="-Djava.security.krb5.conf=/etc/kafka/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas_zk.conf -Dzookeeper.sasl.client.username=kafka"
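As with the ZooKeeper script, this variable only takes effect if it ends up in the broker JVM's options. A sketch, assuming it is folded into KAFKA_OPTS, which kafka-run-class.sh passes through to the JVM:
# in kafka-server-start.sh, after the export above
export KAFKA_OPTS="$KAFKA_KERBEROS_PARAMS $KAFKA_OPTS"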
Configuration file (server.properties):
#broker config
broker.id=1
delete.topic.enable=true
#listeners=PLAINTEXT://node01:9092
listeners=SASL_PLAINTEXT://node01:9092
#num.network.threads=3
#num.io.threads=8
#socket.send.buffer.bytes=1024000
#socket.receive.buffer.bytes=1024000
#socket.request.max.bytes=104857600
log.dirs=/data1/kafka-logs,/data2/kafka-logs,/data3/kafka-logs
log.retention.hours=5
#num.partitions=1
#log.flush.interval.messages=10000
#log.flush.interval.ms=5000
#log.retention.bytes=1073741824
#log.segment.bytes=1073741824
#log.retention.check.interval.ms=300000
zookeeper.connect=node01:2181,node02:2181,node03:2181
#zookeeper.connection.timeout.ms=6000
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
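For the producer and consumer programs on node05 (new client API), the client-side counterparts of the broker settings above would look roughly like the following properties (a sketch; these are the standard Kafka client property names, with values assumed to mirror the broker config):
bootstrap.servers=node01:9092,node02:9092,node03:9092,node04:9092
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka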
kafka_server_jaas.conf file:
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/node01.HXSAI.keytab"
principal="kafka/node01@HXSAI.COM"
serviceName="kafka";
};
// Zookeeper client authentication
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/node01.HXSAI.keytab"
principal="kafka/node01@HXSAI.COM"
serviceName="kafka22";
};
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/node01.HXSAI.keytab"
principal="kafka/node01@HXSAI.COM"
useTicketCache=true;
};
krb5.conf file:
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = HXSAI.COM
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
#dns_lookup_realm = false
#default_realm = EXAMPLE.COM
#rdns = false
#default_ccache_name = KEYRING:persistent:%{uid}
[realms]
HXSAI.COM = {
kdc = node06:88
admin_server = node06:749
}
[domain_realm]
node01 = HXSAI.COM
node02 = HXSAI.COM
node03 = HXSAI.COM
node04 = HXSAI.COM
node06 = HXSAI.COM
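For completeness, the principals and keytabs referenced in the JAAS files would typically be created on the KDC (node06) roughly as follows (a sketch using standard MIT Kerberos kadmin commands, with the principal and keytab names taken from the files above):
kadmin.local -q "addprinc -randkey kafka/node01@HXSAI.COM"
kadmin.local -q "xst -k /etc/security/keytabs/node01.HXSAI.keytab kafka/node01@HXSAI.COM"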
zookeeper_jaas.conf file:
Server{
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
useTicketCache=false
keyTab="/etc/security/keytabs/node01.HXSAI.keytab"
principal="kafka/node01@HXSAI.COM"
serviceName="kafka";
};
With the configuration above, the producer and consumer programs started on node05 can neither produce nor consume normally. However, if node04's kafka_server_jaas.conf is changed to:
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/node01.HXSAI.keytab"
principal="kafka/node01@HXSAI.COM"
serviceName="kafka";
};
// Zookeeper client authentication
//Client {
// com.sun.security.auth.module.Krb5LoginModule required
// useKeyTab=true
// storeKey=true
// keyTab="/etc/security/keytabs/node01.HXSAI.keytab"
// principal="kafka/node01@HXSAI.COM"
// serviceName="kafka22";
//};
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/node01.HXSAI.keytab"
principal="kafka/node01@HXSAI.COM"
useTicketCache=true;
};
then data can be produced and consumed normally.
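For reference, a roughly equivalent check can be run from node05 with the console tools. A sketch; the topic name test, the path /etc/kafka/kafka_client_jaas.conf (containing a KafkaClient section like the one above), and client.properties (the client settings sketched after the broker config) are all assumed:
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"
bin/kafka-console-producer.sh --broker-list node01:9092 --topic test --producer.config client.properties
bin/kafka-console-consumer.sh --bootstrap-server node01:9092 --topic test --consumer.config client.properties --from-beginning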
Another issue: if only Kafka is hardened with Kerberos, the ZooKeeper cluster does not exit abnormally when the Kafka cluster is stopped; but once both the Kafka and ZooKeeper clusters are hardened with Kerberos, running kafka-server-stop.sh causes the ZooKeeper cluster to exit abnormally.
Could you please take a look and help me figure out where the problem is?
Thanks.