After integrating Kafka with Spring MVC, the client keeps printing logs nonstop once the project starts. How do I fix this? (The following is just one stretch of the log; these lines repeat in an endless loop.)

Posted by 胡Ba一.ˇ° on 2018-10-18, last updated 2018-10-18 13:29:25
11:58:33.701 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 214 to node 192.168.1.200:9092 (id: 0 rack: null)
11:58:33.701 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)
11:58:34.207 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Fetch READ_UNCOMMITTED at offset 214 for partition event_log-0 returned fetch data (error=NONE, highWaterMark=214, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:58:34.207 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 214 to node 192.168.1.200:9092 (id: 0 rack: null)
11:58:34.208 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)
11:58:34.713 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Fetch READ_UNCOMMITTED at offset 214 for partition event_log-0 returned fetch data (error=NONE, highWaterMark=214, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:58:34.713 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 214 to node 192.168.1.200:9092 (id: 0 rack: null)
11:58:34.713 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)
11:58:35.210 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Sending asynchronous auto-commit of offsets {event_log-0=OffsetAndMetadata{offset=214, metadata=''}}
11:58:35.230 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Committed offset 214 for partition event_log-0
11:58:35.231 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Completed asynchronous auto-commit of offsets {event_log-0=OffsetAndMetadata{offset=214, metadata=''}}
11:58:35.231 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Fetch READ_UNCOMMITTED at offset 214 for partition event_log-0 returned fetch data (error=NONE, highWaterMark=214, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:58:35.232 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 214 to node 192.168.1.200:9092 (id: 0 rack: null)
11:58:35.232 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)
11:58:35.738 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Fetch READ_UNCOMMITTED at offset 214 for partition event_log-0 returned fetch data (error=NONE, highWaterMark=214, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:58:35.738 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 214 to node 192.168.1.200:9092 (id: 0 rack: null)
11:58:35.738 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)
11:58:36.207 [kafka-coordinator-heartbeat-thread | testGroup] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Sending Heartbeat request to coordinator 192.168.1.200:9092 (id: 2147483647 rack: null)
11:58:36.220 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Received successful Heartbeat response

Put this in your main method: Logger.getLogger("org").setLevel(Level.ERROR)
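For reference, a minimal sketch of that one-liner as a complete snippet, assuming log4j 1.x is the active logging backend (the class name is just a placeholder):

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class App {
    public static void main(String[] args) {
        // Drop everything under the "org" package (including org.apache.kafka)
        // below ERROR. This only works if log4j 1.x actually handles the logging.
        Logger.getLogger("org").setLevel(Level.ERROR);
        // ... start the Spring context / Kafka consumer here ...
    }
}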

Just turn DEBUG off.

I haven't enabled debug mode though... Where would I configure that?

Do I need to change the log4j configuration? Or is it something else on top of that?

Configure it in your application's logback or log4j config.

How do I configure it, exactly? I've searched a lot and haven't found any log4j settings specific to Kafka. My log4j log level is already INFO, too.

Add one new logger entry:
<logger level="INFO" name="org.apache.kafka"/>

I've been trying for ages... still no luck. It may be because my log4j config is in properties format rather than XML; I've looked everywhere for the equivalent property and can't find it...

Then your log4j isn't set up properly. First check whether your log configuration file is actually taking effect.
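If the config really is log4j 1.x in properties format, the equivalent of the XML logger entry would look roughly like this (the appender name and pattern are placeholders):

# log4j.properties
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss.SSS} [%t] %-5p %c - %m%n

# Per-package level: raise the Kafka client above DEBUG
log4j.logger.org.apache.kafka=ERROR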

2018-10-18 15:12:56 -14680 [messageListenerContainer-kafka-consumer-1] DEBUG   - Received: 0 records
15:12:56.820 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Fetch READ_UNCOMMITTED at offset 214 for partition event_log-0 returned fetch data (error=NONE, highWaterMark=214, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
15:12:56.821 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 214 to node 192.168.1.200:9092 (id: 0 rack: null)
15:12:56.821 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)
15:12:57.328 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Fetch READ_UNCOMMITTED at offset 214 for partition event_log-0 returned fetch data (error=NONE, highWaterMark=214, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
15:12:57.329 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 214 to node 192.168.1.200:9092 (id: 0 rack: null)
15:12:57.329 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)
15:12:57.517 [kafka-coordinator-heartbeat-thread | testGroup] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Sending Heartbeat request to coordinator 192.168.1.200:9092 (id: 2147483647 rack: null)
15:12:57.527 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Received successful Heartbeat response

It just keeps printing these logs over and over. Is that normal? It feels like there's no way to make it stop...

These seem to be logs that the consumer API itself prints to the console when I call it... can I turn them off?

Change the log level to ERROR.

<logger name="org.apache.kafka"><level value="error"></level></logger>
I set it like this and the logs still keep looping.

Thank you, thank you! I'll try it right away.

17:39:37.820 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Completed asynchronous auto-commit of offsets {event_log-0=OffsetAndMetadata{offset=268, metadata=''}}
17:39:38.221 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Fetch READ_UNCOMMITTED at offset 268 for partition event_log-0 returned fetch data (error=NONE, highWaterMark=268, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
17:39:38.221 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 268 to node 192.168.1.200:9092 (id: 0 rack: null)
17:39:38.221 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)

It's still spitting out these logs, so annoying... feels hopeless... I've spent the whole day wrestling with this thing...

<logger name="com.kafka.ighack.Consumer"  additivity="false">
  <level value="DEBUG"></level>
  <appender-ref ref="FILE"/>
 </logger>

 <logger name="org.apache.kafka">
  <level value="OFF"></level>
 </logger>
 <logger name="io.netty">
  <level value="OFF"></level>
 </logger>
 <logger name="com.lambdaworks.redis">
  <level value="OFF"></level>
 </logger>

This configuration should be right, shouldn't it?

My feeling is that your logging configuration isn't taking effect at all, not that it's written incorrectly. In principle a program only picks up one log4j configuration.

Right... I think so too. The project has a log4j.xml, a log4j.dtd, and a log4j.properties. So many log4j configs, and I don't know which ones to delete.
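(A note on that: with log4j 1.x, if both log4j.xml and log4j.properties are on the classpath, log4j loads log4j.xml first and ignores the properties file. You can see which file it actually parsed by turning on log4j's own internal debugging, e.g. as a JVM flag:

java -Dlog4j.debug=true -jar your-app.jar

The jar name is a placeholder; for a servlet container, add the flag to JAVA_OPTS instead.)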

Change the root logger's level to something above DEBUG.

The root logger's level? I don't follow...

I use spring-data-elasticsearch and ran into the same thing. Setting the log level in application.properties to logging.level.root=info fixed it for me.
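For a Spring Boot application, the same idea can also be scoped to just the Kafka client instead of the whole app, via the standard logging.level.* properties (the WARN threshold here is my choice, not from the thread):

# application.properties
logging.level.root=INFO
# Narrower alternative: silence only the Kafka client chatter
logging.level.org.apache.kafka=WARN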

logger name="org.apache.kafka.clients.NetworkClient" level="debug"

试试这个

@胡Ba一.ˇ° Did you ever find a solution to this problem? I'm now hitting the same issue; I tried the methods mentioned in this thread, but none of them work.

No, none of them work for me either. Still waiting.

Do you know why it prints these logs, and under what conditions? In my case, when I take data and consume it, it keeps printing this log if the processing throws an exception; if the processing finishes without an exception, the log doesn't appear.

For me, the moment the project starts it begins sending heartbeats and fetching nonstop, printing this kind of log and flooding the screen, which is really annoying. Consuming data produces logs as well.
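For context on why the lines repeat even when the topic is idle: the listener container runs a poll loop roughly like the sketch below. Every poll issues a fetch request, the broker holds an empty fetch for up to fetch.max.wait.ms (500 ms by default, which matches the timestamps above), and the Fetcher logs each round at DEBUG whether or not records come back. A minimal sketch with the plain consumer API, assuming a 2.x kafka-clients (older clients use poll(long) instead of poll(Duration)):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollLoopSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.1.200:9092"); // broker address from the logs above
        props.put("group.id", "testGroup");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("event_log"));
            while (true) {
                // Each poll sends a fetch request; the Fetcher logs it at DEBUG
                // even when zero records are returned, hence the endless output.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }
}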
