1. FETCH_SESSION_ID_NOT_FOUND

2020-06-12 01:11:17.894 [Kafka Fetcher for Source: Custom Source -> Map -> Filter (1/4)] INFO  org.apache.kafka.clients.FetchSessionHandler  - [Consumer clientId=consumer-6, groupId=igg-user-chat-monitor] Node 3 was unable to process the fetch request with (sessionId=306724715, epoch=12245): FETCH_SESSION_ID_NOT_FOUND.
2020-06-12 01:11:17.894 [Kafka Fetcher for Source: Custom Source -> Map -> Filter (4/4)] INFO  org.apache.kafka.clients.FetchSessionHandler  - [Consumer clientId=consumer-5, groupId=igg-user-chat-monitor] Node 2 was unable to process the fetch request with (sessionId=660800203, epoch=12293): FETCH_SESSION_ID_NOT_FOUND.
2020-06-12 01:11:17.894 [Kafka Fetcher for Source: Custom Source -> Map -> Filter (2/4)] INFO  org.apache.kafka.clients.FetchSessionHandler  - [Consumer clientId=consumer-8, groupId=igg-user-chat-monitor] Node 4 was unable to process the fetch request with (sessionId=1302608464, epoch=12256): FETCH_SESSION_ID_NOT_FOUND.

Cause analysis:

Kafka log segments roll over: every time Kafka rolls to a new segment or deletes a log segment file, this message is printed. It is a Kafka bug that has been fixed in newer versions (> 2.0.0); newer versions may still print the message, but it has no impact.

For example, server.properties is configured with:

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
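
With the values above, a partition rolls to a new segment roughly every 1 GiB of data (log.segment.bytes=1073741824 bytes), segments older than 168 hours (7 days) become eligible for deletion, and the broker checks for deletable segments every 300000 ms (5 minutes). Each such roll or deletion is an occasion for the message above to be logged.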

Solution:

Upgrade Kafka to version 2.0.0 or later.
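
If an upgrade is not immediately possible, the message can at least be silenced on the consumer side, since it is logged at INFO level by org.apache.kafka.clients.FetchSessionHandler. A minimal sketch, assuming the job's logging goes through a log4j 1.x style log4j.properties (newer Flink releases ship log4j2, where the equivalent is a logger entry for the same class):

# Raise this logger above INFO so the FETCH_SESSION_ID_NOT_FOUND notice is no longer printed
log4j.logger.org.apache.kafka.clients.FetchSessionHandler=WARN

This only hides the log line; the client still falls back to a full fetch request and opens a new fetch session on the next poll, which is why the message is harmless in the first place.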