Looks like I'll have to settle for this rather clumsy workaround. It does work, it just doesn't look clean — haha, my OCD is bothered that the 5 Kafka nodes each use a different port.
Kerberos uses a shared-secret scheme, so in theory this could work, but random forwarding does not work for Kafka.
A client establishes a long-lived connection to the broker that hosts a given partition; if the proxy randomly routes you to some other broker, that broker does not have the partition.
This is the log seen on the backend when the error occurs:
Oct 22 15:18:54 cdh-master01 krb5kdc[5642](info): TGS_REQ (4 etypes {18 17 16 23}) 10.2.7.140: ISSUE: authtime 1571661422, etypes {rep=18 tkt=18 ses=18}, hdfs/cdh-master01@GREE.IO for HTTP/cdh-master03@GREE.IO
Oct 22 15:18:54 cdh-master01 krb5kdc[5642](info): TGS_REQ (4 etypes {18 17 16 23}) 10.2.7.140: ISSUE: authtime 1571661422, etypes {rep=18 tkt=18 ses=18}, hdfs/cdh-master01@GREE.IO for HTTP/cdh-master02@GREE.IO
This is the log seen on the backend in the normal case (i.e. not going through the nginx proxy); same 5 nodes as above:
Oct 22 15:13:19 cdh-master01 krb5kdc[5642](info): AS_REQ (3 etypes {17 16 23}) 172.17.1.44: ISSUE: authtime 1571728399, etypes {rep=17 tkt=18 ses=17}, 260269@GREE.IO for krbtgt/GREE.IO@GREE.IO
Oct 22 15:13:19 cdh-master01 krb5kdc[5642](info): TGS_REQ (3 etypes {17 16 23}) 172.17.1.44: ISSUE: authtime 1571728399, etypes {rep=17 tkt=18 ses=17}, 260269@GREE.IO for kafka/kafka01@GREE.IO
Oct 22 15:13:20 cdh-master01 krb5kdc[5642](info): TGS_REQ (3 etypes {17 16 23}) 172.17.1.44: ISSUE: authtime 1571728399, etypes {rep=17 tkt=18 ses=17}, 260269@GREE.IO for kafka/kafka03@GREE.IO
Oct 22 15:13:20 cdh-master01 krb5kdc[5642](info): TGS_REQ (3 etypes {17 16 23}) 172.17.1.44: ISSUE: authtime 1571728399, etypes {rep=17 tkt=18 ses=17}, 260269@GREE.IO for kafka/kafka04@GREE.IO
Oct 22 15:13:20 cdh-master01 krb5kdc[5642](info): TGS_REQ (3 etypes {17 16 23}) 172.17.1.44: ISSUE: authtime 1571728399, etypes {rep=17 tkt=18 ses=17}, 260269@GREE.IO for kafka/kafka02@GREE.IO
Oct 22 15:13:20 cdh-master01 krb5kdc[5642](info): TGS_REQ (3 etypes {17 16 23}) 172.17.1.44: ISSUE: authtime 1571728399, etypes {rep=17 tkt=18 ses=17}, 260269@GREE.IO for kafka/kafka05@GREE.IO
If I change the nginx configuration to the following:
stream {
    server {
        listen 30000;
        proxy_pass kafka01;
    }
    upstream kafka01 {
        server kafka01:30000 weight=1;
    }
    server {
        listen 30001;
        proxy_pass kafka02;
    }
    upstream kafka02 {
        server kafka02:30001 weight=1;
    }
    server {
        listen 30002;
        proxy_pass kafka03;
    }
    upstream kafka03 {
        server kafka03:30002 weight=1;
    }
    server {
        listen 30003;
        proxy_pass kafka04;
    }
    upstream kafka04 {
        server kafka04:30003 weight=1;
    }
    server {
        listen 30004;
        proxy_pass kafka05;
    }
    upstream kafka05 {
        server kafka05:30004 weight=1;
    }
}
The hosts file stays unchanged:
10.2.21.33 kafka01
10.2.21.33 kafka02
10.2.21.33 kafka03
10.2.21.33 kafka04
10.2.21.33 kafka05
With this setup everything works. I have a guess — not sure whether it holds up: the earlier nginx config listened on port 9092 and randomly distributed everything sent to 9092 across the 5 machines kafka01–05. So it could happen that the producer requests the address kafka01:9092 but nginx forwards the connection to kafka02:9092 — would that cause Kerberos authentication to fail?
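That guess can be sketched as follows: a Kerberos client derives the service principal from the hostname it dialed, so if nginx silently forwards the connection to a different broker, the ticket it obtained (e.g. kafka/kafka01@GREE.IO) does not match the principal whose key that broker actually holds. A minimal illustration — the hostnames and realm come from the logs above, and `expected_spn` is a hypothetical helper, not a real client API:

```python
# Illustration of the SPN-mismatch hypothesis: the client builds the
# service principal from the host it *connected to*, not the host that
# actually answers behind the proxy.

def expected_spn(dialed_host: str, realm: str = "GREE.IO") -> str:
    # A Kerberos client requests a TGS ticket for kafka/<dialed-host>@<realm>,
    # matching the TGS_REQ lines in the KDC log above.
    return f"kafka/{dialed_host}@{realm}"

# The producer dials kafka01:9092 and gets a ticket for that principal...
ticket_spn = expected_spn("kafka01")

# ...but round-robin nginx on a single port may hand the connection to
# kafka02, which only holds the key for its own principal:
broker_spn = expected_spn("kafka02")

# The broker cannot decrypt a ticket issued for another principal,
# so GSSAPI/SASL authentication would fail.
print(ticket_spn, "!=", broker_spn)
```

The port-per-broker config above avoids this because each listen port maps to exactly one upstream, so the dialed hostname and the answering broker always agree.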
Having nginx listen on 5 separate ports works around the problem, but it's still too inconvenient. Does any expert here have a better solution?