Question
After running for about a day, Kafka hits the error below and then the service stops.
Kafka version 2.3.1, running on Windows.
ERROR Failed to clean up log for __consumer_offsets-22 in dir E:\kafka\kafka_2.11-2.3.1\bin\windows\kafkalogsdata-333 due to IOException (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: E:\kafka\kafka_2.11-2.3.1\bin\windows\kafkalogsdata-333\__consumer_offsets-22\00000000000000000000.index.cleaned -> E:\kafka\kafka_2.11-2.3.1\bin\windows\kafkalogsdata-333\__consumer_offsets-22\00000000000000000000.index.swap: The process cannot access the file because it is being used by another process.
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
at java.nio.file.Files.move(Files.java:1395)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:815)
at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:209)
at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:509)
at kafka.log.Log$$anonfun$replaceSegments$1.apply(Log.scala:2036)
at kafka.log.Log$$anonfun$replaceSegments$1.apply(Log.scala:2036)
at scala.collection.immutable.List.foreach(List.scala:392)
at kafka.log.Log.replaceSegments(Log.scala:2036)
at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:602)
at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:528)
at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:527)
at scala.collection.immutable.List.foreach(List.scala:392)
at kafka.log.Cleaner.doClean(LogCleaner.scala:527)
at kafka.log.Cleaner.clean(LogCleaner.scala:501)
at kafka.log.LogCleaner$CleanerThread.cleanLog(LogCleaner.scala:359)
at kafka.log.LogCleaner$CleanerThread.cleanFilthiestLog(LogCleaner.scala:328)
at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:307)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
Suppressed: java.nio.file.FileSystemException: E:\kafka\kafka_2.11-2.3.1\bin\windows\kafkalogsdata-333\__consumer_offsets-22\00000000000000000000.index.cleaned -> E:\kafka\kafka_2.11-2.3.1\bin\windows\kafkalogsdata-333\__consumer_offsets-22\00000000000000000000.index.swap: The process cannot access the file because it is being used by another process.
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
at java.nio.file.Files.move(Files.java:1395)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:812)
... 16 more
It sounds like the previous Kafka process didn't die cleanly. I'd suggest rebooting the PC, starting Kafka again, and observing for a while.
Switching to a new log directory does let it start. This is a production error, and I can reproduce it locally too. Is there a way to stop it from cleaning the logs? I have plenty of disk space.
Windows is not recommended for production.
As I said above, you need to find out what is holding that file. Check the PID of the process that has it open; if it's a stale Kafka process that never died, force-kill it.
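To find and kill the holder from the command line (faster than Explorer), Sysinternals handle.exe can list which process has a handle open under the log directory. A sketch, assuming handle.exe is on the PATH and `<pid>` is a placeholder for the PID it reports:

```
:: List open handles under the Kafka log directory (requires Sysinternals handle.exe)
handle.exe E:\kafka\kafka_2.11-2.3.1\bin\windows\kafkalogsdata-333

:: A stale Kafka broker runs as java.exe; list Java processes to cross-check
tasklist | findstr java

:: Force-kill the stale process by its PID
taskkill /PID <pid> /F
```

If the only process holding the file turns out to be the currently running broker itself, killing it won't help; see the notes below about the cleaner thread.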
Switching directories effectively abandons all of that node's Kafka data.
That error just means Kafka's log-cleaner thread is doing its work. (On Windows this rename step often fails because the broker's own JVM still holds a memory-mapped handle to the index file, so there may be no external process to blame at all.)
Checking the handle owner through Windows Explorer is too slow; it just goes unresponsive. Could the consumers be causing this? There's also a Logstash instance running against it in production; could that be related? We can't move off Windows to Linux for now. Is there any way to stop the cleaning, or at least reduce its frequency? My config sets these three parameters to 90 days: log.retention.ms=777600000, log.retention.minutes=129600, log.retention.hours=2160
But it made no difference; it still dies about once a day on average.
log.cleaner.enable=false
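Two caveats on this, based on Kafka's standard broker configs (the snippet below is a sketch for server.properties, not a tested production recommendation). The thread that is crashing is the log cleaner, which compacts topics like __consumer_offsets (cleanup.policy=compact); the log.retention.* settings only govern time-based deletion, so they don't affect it at all. (As an aside, log.retention.ms takes precedence over the other two, and 777600000 ms is 9 days, not 90.) Disabling the cleaner stops the crash but also stops compaction, so __consumer_offsets will grow without bound. A milder alternative is to make the cleaner run less often:

```properties
# Option 1: disable log compaction entirely. Compacted topics such as
# __consumer_offsets will then grow unbounded; only acceptable short-term.
log.cleaner.enable=false

# Option 2: keep compaction but run it far less often.
# Both are standard broker configs; the values here are illustrative.
log.cleaner.backoff.ms=600000
log.cleaner.min.cleanable.ratio=0.9
```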
Source: https://www.orchome.com/472
OK, I'll try that change.
Hey, did you ever get this problem solved?