Problems configuring SASL_SCRAM authentication for the Kafka 2.2.1 that ships with a CDH 6.3.2 cluster

呵呵哒 posted: 2023-11-13   last updated: 2023-11-13 15:45:31   403 views

I found a tutorial on GitHub about setting up SASL_SCRAM authentication for Kafka, and while following it I ran into a few problems. Original link: https://github.com/dyeaster/document/blob/main/%E5%9F%BA%E4%BA%8ECDH%E5%B9%B3%E5%8F%B0%E7%9A%84kafka%20SASL_SCRAM%E8%AE%A4%E8%AF%81%E9%85%8D%E7%BD%AE.md#kafka-saslscram%E8%AE%A4%E8%AF%81%E9%85%8D%E7%BD%AE

  1. Where does the Kafka variable broker_java_opts live? I could not find any configuration file that contains this parameter, so I'd like to know whether it actually exists and how it should be set.

  2. The producer.properties configuration file is not shipped either -- do I have to write it from scratch?

Posted on 2023-11-13
Bounty: ¥5.0

1. broker_java_opts should refer to the broker's Java environment (JVM options). You can append your settings in bin/kafka-server-start.sh, for example:

export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G -Djava.security.auth.login.config=/etc/kafka/conf/kafka_server_jaas.conf"

so that the JVM picks up the JAAS configuration.
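
The tutorial's kafka_server_jaas.conf is not shipped with CDH either. Here is a minimal sketch of /etc/kafka/conf/kafka_server_jaas.conf for SCRAM, assuming the inter-broker user is called admin (the user name and password below are placeholders, not values from the original tutorial):

KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret";
};

SCRAM credentials are stored in ZooKeeper, so this admin user has to be created before the brokers start, e.g. with kafka-configs.sh (Kafka 2.2.x still manages SCRAM users through the --zookeeper option):

# placeholder host and password -- adjust to your environment
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret]' --entity-type users --entity-name admin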

2. producer.properties lives in the config directory and is shipped with Kafka by default. It is meant for the command-line tools, and you still have to pass it explicitly, for example:

bin/kafka-console-producer.sh --broker-list localhost:9093 --topic test --producer.config config/producer.properties
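
For SASL_SCRAM, the producer.properties passed via --producer.config also needs the client-side security settings. A minimal sketch, assuming the broker exposes a SASL_PLAINTEXT listener on port 9093 and the same placeholder admin user as above (none of these values come from the CDH defaults):

# assumption: SASL_PLAINTEXT listener; use SASL_SSL instead if TLS is enabled
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
# placeholder credentials -- must match a SCRAM user created with kafka-configs.sh
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";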

Reference: Kafka SASL/SCRAM in practice

呵呵哒 -> 半兽人 6 months ago

That producer.properties file is exactly what I couldn't find in the config directory -- I checked several installations and none of them had it, which is why it seemed odd.

半兽人 -> 呵呵哒 6 months ago

I haven't used CDH. This file is generated by default in the stock distribution, and its content is very simple -- it just turns the usual command-line options into a fixed file so you don't have to pass so many parameters every time. You can download an official Kafka release to get it.

The default contents of producer.properties are as follows:

cat config/producer.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.producer.ProducerConfig for more details

############################# Producer Basics #############################

# list of brokers used for bootstrapping knowledge about the rest of the cluster
# format: host1:port1,host2:port2 ...
bootstrap.servers=localhost:9092

# specify the compression codec for all data generated: none, gzip, snappy, lz4
compression.type=none

# name of the partitioner class for partitioning events; default partition spreads data randomly
#partitioner.class=

# the maximum amount of time the client will wait for the response of a request
#request.timeout.ms=

# how long `KafkaProducer.send` and `KafkaProducer.partitionsFor` will block for
#max.block.ms=

# the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together
#linger.ms=

# the maximum size of a request in bytes
#max.request.size=

# the default batch size in bytes when batching multiple records sent to a partition
#batch.size=

# the total bytes of memory the producer can use to buffer records waiting to be sent to the server
#buffer.memory=

呵呵哒 -> 半兽人 6 months ago

I've verified that the Kafka installed by the CDH cluster generates configuration files that differ a lot from stock Kafka; the stock files can't be used as-is, and I have no idea what was modified. I'm giving up on this route and will look for another approach. Thanks for the help.

