Kafka: Excessive Rebalance Frequency
Environment:
Kafka Server 2.6.x
Kafka Client Java 2.8.2
Background:
Recently I noticed that the number of Kafka rebalances was unusually high, reaching sixty-plus per day, which seemed abnormal. Checking the logs turned up the following:
Offset commit cannot be completed since
the consumer is not part of an active group for auto partition assignment;
it is likely that the consumer was kicked out of the group.
Roughly, a Kafka client failed to commit its offset because it had already been removed from the consumer group.
Why does a consumer get kicked out?
Let's look at the common situations in which a consumer is dropped from the group:
1. Missed heartbeats:
After n missed heartbeats, the broker considers the client offline and kicks it out of the group. The value of n follows from two parameters: heartbeat.interval.ms, the heartbeat interval (default 3000 ms, i.e. 3 seconds), and session.timeout.ms, the session timeout (default 10000 ms, i.e. 10 seconds). Roughly, n = session.timeout.ms / heartbeat.interval.ms, so with the defaults the client is kicked out after three missed heartbeats, before a fourth would be sent. The reason the ratio is not exactly 3 is that the documentation recommends setting heartbeat.interval.ms to no more than 1/3 of session.timeout.ms. The official descriptions of the two parameters:
session.timeout.ms
The timeout used to detect client failures when using Kafka's group management facility. The client sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this client from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.
Type: int
Default: 10000 (10 seconds)
heartbeat.interval.ms
The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.
Type: int
Default: 3000 (3 seconds)
From the Kafka documentation: https://kafka.apache.org/28/documentation.html#consumerconfigs
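For reference, here is a minimal sketch of how these two settings can be applied on the Java client. The broker address, group id, and topic name are placeholders, not values from the original setup; the timeouts shown are simply the defaults made explicit.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class HeartbeatConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Defaults shown explicitly: heartbeat every 3 s, session expires after 10 s
        // without heartbeats. heartbeat.interval.ms should stay below 1/3 of
        // session.timeout.ms, and session.timeout.ms must lie within the broker's
        // group.min.session.timeout.ms / group.max.session.timeout.ms range.
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "3000");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));      // placeholder topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            System.out.println("fetched " + records.count() + " records");
        }
    }
}
```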
2. Poll interval exceeded
The relevant parameter here is max.poll.interval.ms: if the gap between two consecutive poll() calls exceeds this value, the consumer is also kicked out by the server. The default is 300000 ms, i.e. 300 seconds, or 5 minutes.
max.poll.interval.ms
The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. For consumers using a non-null group.instance.id which reach this timeout, partitions will not be immediately reassigned. Instead, the consumer will stop sending heartbeats and partitions will be reassigned after expiration of session.timeout.ms. This mirrors the behavior of a static consumer which has shutdown.
Type: int
Default: 300000 (5 minutes)
From the Kafka documentation: https://kafka.apache.org/28/documentation.html#consumerconfigs
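To make the mechanics concrete, below is a minimal poll-loop sketch (again with placeholder broker, group, and topic values, not the project's actual receiver code). Everything done between two poll() calls counts against max.poll.interval.ms, and a commit attempted after the consumer has been evicted fails with exactly the kind of error quoted at the top of this post.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PollIntervalSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));      // placeholder topic
            while (true) {
                // The gap between two successive poll() calls includes all of the
                // processing below; if it exceeds max.poll.interval.ms (300 s by
                // default), the consumer is considered failed and a rebalance starts.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    handle(record);
                }
                // Fails with an "offset commit cannot be completed" error if the
                // consumer was already kicked out of the group.
                consumer.commitSync();
            }
        }
    }

    private static void handle(ConsumerRecord<String, String> record) {
        // Business processing goes here; if this is slow, one batch can easily
        // take longer than max.poll.interval.ms.
    }
}
```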
Root cause
The Kafka processing logic had been instrumented with timing during an earlier round of performance tuning, so the following log line was easy to find:
Fetched 10 records, took 411260ms thread: KafkaXxxReceiver-pool-3
Processing 10 records took 411260 ms. This was just one entry; a fuzzy search of the logs turned up many more batches exceeding 300 seconds, which confirmed this was the problem.
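The original receiver code is not shown in this post, but the timing it logs could look roughly like the hypothetical helper below, which measures how long one batch takes to process and prints the record count, elapsed time, and thread name.

```java
import java.util.List;
import java.util.function.Consumer;

public class BatchTimingSketch {
    // Hypothetical helper: times the processing of one batch and logs it in the
    // same spirit as the log line quoted above.
    static <T> void processWithTiming(List<T> batch, Consumer<T> handler) {
        long start = System.currentTimeMillis();
        batch.forEach(handler);
        long elapsedMs = System.currentTimeMillis() - start;
        System.out.printf("fetched %d records, took %dms thread: %s%n",
                batch.size(), elapsedMs, Thread.currentThread().getName());
    }
}
```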
Optimization ideas
- Increase max.poll.interval.ms appropriately, or reduce the number of records fetched per poll via max.poll.records (see the sketch after this list).
- Since this issue did not show up in earlier load testing, further investigation is needed to pinpoint which part of the processing is slow and optimize the business logic accordingly.
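As a sketch of the first item, the two knobs map to the following consumer properties. The concrete values here are illustrative assumptions, not recommendations from the post.

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class PollTuningSketch {
    // Returns the two tuning knobs from the list above as consumer properties.
    static Properties pollTuning() {
        Properties props = new Properties();
        // Allow more time between polls (default 300000 ms, i.e. 5 minutes).
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000");
        // Fetch fewer records per poll so each batch finishes sooner (default 500).
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "50");
        return props;
    }
}
```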