After integrating Kafka with Spring MVC, the client keeps printing logs endlessly once the project starts. How can I fix this? Any advice would be appreciated. (Below is just one excerpt; these lines repeat in a loop forever.)

11:58:33.701 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 214 to node 192.168.1.200:9092 (id: 0 rack: null)
11:58:33.701 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)
11:58:34.207 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Fetch READ_UNCOMMITTED at offset 214 for partition event_log-0 returned fetch data (error=NONE, highWaterMark=214, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:58:34.207 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 214 to node 192.168.1.200:9092 (id: 0 rack: null)
11:58:34.208 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)
11:58:34.713 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Fetch READ_UNCOMMITTED at offset 214 for partition event_log-0 returned fetch data (error=NONE, highWaterMark=214, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:58:34.713 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 214 to node 192.168.1.200:9092 (id: 0 rack: null)
11:58:34.713 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)
11:58:35.210 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Sending asynchronous auto-commit of offsets {event_log-0=OffsetAndMetadata{offset=214, metadata=''}}
11:58:35.230 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Committed offset 214 for partition event_log-0
11:58:35.231 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Completed asynchronous auto-commit of offsets {event_log-0=OffsetAndMetadata{offset=214, metadata=''}}
11:58:35.231 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Fetch READ_UNCOMMITTED at offset 214 for partition event_log-0 returned fetch data (error=NONE, highWaterMark=214, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:58:35.232 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 214 to node 192.168.1.200:9092 (id: 0 rack: null)
11:58:35.232 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)
11:58:35.738 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Fetch READ_UNCOMMITTED at offset 214 for partition event_log-0 returned fetch data (error=NONE, highWaterMark=214, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:58:35.738 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 214 to node 192.168.1.200:9092 (id: 0 rack: null)
11:58:35.738 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)
11:58:36.207 [kafka-coordinator-heartbeat-thread | testGroup] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Sending Heartbeat request to coordinator 192.168.1.200:9092 (id: 2147483647 rack: null)
11:58:36.220 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Received successful Heartbeat response
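
The lines above are routine DEBUG output from the Kafka client's fetch and heartbeat loop, emitted by loggers under org.apache.kafka. A minimal sketch of a logback.xml fragment that quiets them (assuming logback is the logging backend; the logger names are taken from the log output above, and the Spring package name is an assumption):

```xml
<!-- Raise the Kafka client loggers above DEBUG so the fetch loop stays quiet -->
<logger name="org.apache.kafka" level="INFO"/>
<!-- The Spring Kafka listener container logs under this package (assumption) -->
<logger name="org.springframework.kafka" level="INFO"/>
```

Note this only hides the output; the consumer keeps polling in a loop regardless, which is normal behavior.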





Posted: 1 month ago   Last updated: 1 month ago   Views: 314

Comments:


  • @胡Ba一.ˇ° Did you ever find a solution to this? I'm running into the same problem now. I've tried the approaches mentioned below, but none of them work.
    • Do you know why it prints these logs, and under what conditions? When I consume the data, if the processing throws an exception it keeps printing these logs; if the processing finishes without an exception, it doesn't.
        Raise the root logger level above DEBUG.
        Change the log level to ERROR.
        • 17:39:37.820 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Completed asynchronous auto-commit of offsets {event_log-0=OffsetAndMetadata{offset=268, metadata=''}}
          17:39:38.221 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Fetch READ_UNCOMMITTED at offset 268 for partition event_log-0 returned fetch data (error=NONE, highWaterMark=268, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
          17:39:38.221 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 268 to node 192.168.1.200:9092 (id: 0 rack: null)
          17:39:38.221 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)

          It's still spamming these logs, so annoying... feels hopeless... I've spent the whole day wrestling with this thing...
            • <logger name="com.kafka.ighack.Consumer"  additivity="false">
                <level value="DEBUG"></level>
                <appender-ref ref="FILE"/>
               </logger>

               <logger name="org.apache.kafka">
                <level value="OFF"></level>
               </logger>
               <logger name="io.netty">
                <level value="OFF"></level>
               </logger>
               <logger name="com.lambdaworks.redis">
                <level value="OFF"></level>
               </logger>


              This configuration should work, right?
                Just turn DEBUG off...
                • 2018-10-18 15:12:56 -14680 [messageListenerContainer-kafka-consumer-1] DEBUG   - Received: 0 records
                  15:12:56.820 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Fetch READ_UNCOMMITTED at offset 214 for partition event_log-0 returned fetch data (error=NONE, highWaterMark=214, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
                  15:12:56.821 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 214 to node 192.168.1.200:9092 (id: 0 rack: null)
                  15:12:56.821 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)
                  15:12:57.328 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Fetch READ_UNCOMMITTED at offset 214 for partition event_log-0 returned fetch data (error=NONE, highWaterMark=214, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
                  15:12:57.329 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Added READ_UNCOMMITTED fetch request for partition event_log-0 at offset 214 to node 192.168.1.200:9092 (id: 0 rack: null)
                  15:12:57.329 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=testGroup] Sending READ_UNCOMMITTED fetch for partitions [event_log-0] to broker 192.168.1.200:9092 (id: 0 rack: null)
                  15:12:57.517 [kafka-coordinator-heartbeat-thread | testGroup] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Sending Heartbeat request to coordinator 192.168.1.200:9092 (id: 2147483647 rack: null)
                  15:12:57.527 [messageListenerContainer-kafka-consumer-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=testGroup] Received successful Heartbeat response


                  It keeps printing these logs over and over. Is this normal? It seems like there's no way to stop it from printing.....
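
For what it's worth, the polling itself is normal: a Kafka consumer fetches in a loop even when no records arrive, and DEBUG level simply makes that visible. If the project runs on Spring Boot, the levels can also be set in application.properties instead of the logback file (a sketch, using Spring Boot's standard logging.level.* properties):

```properties
# Silence the Kafka client's per-fetch DEBUG chatter
logging.level.org.apache.kafka=WARN
# Keep everything else at a sane default
logging.level.root=INFO
```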