I believe that I have found the reason for the deadlock that has been alluded to in a few other issues on the board:
#2373
#2099
#2042
#1989
The offset commit appears to be blocked by design, with the assumption that the operation will resume without issue once the underlying network problem has been resolved. The real issue is that the consumer does not hold an exclusive client lock while it waits. This leads to a race condition between the main thread and the heartbeat thread, because the two threads fail to maintain a consistent lock ordering.
The order of operations is as follows:
1. The consumer handles a message.
2. A network error occurs.
3. The consumer tries to commit the offset. The commit blocks in an infinite loop and releases the KafkaClient lock on each attempt (kafka/coordinator/consumer.py line 512 and kafka/coordinator/base.py line 245, at 0864817).
4. The HeartbeatThread detects the consumer timeout and tries to shut down the coordinator, taking the client lock only after it has already taken the coordinator lock (kafka/coordinator/base.py lines 958, 993, and 766, at 0864817). This is the inverse of the order in which the main thread takes the locks during the blocking commit in step 3; a minimal sketch of the inverted ordering follows this list.
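To make the shape of the problem concrete, here is a minimal, self-contained sketch of the lock-ordering inversion using plain `threading.Lock` objects. The lock and function names are stand-ins for the KafkaClient lock, the coordinator lock, and the two code paths above; this is a toy reproduction, not the actual kafka-python code.

```python
import threading
import time

# Stand-ins for the KafkaClient lock and the coordinator lock.
client_lock = threading.Lock()
coordinator_lock = threading.Lock()

def blocking_commit():
    # Main thread during the blocked commit: client lock first, then the
    # coordinator lock.
    with client_lock:
        time.sleep(0.1)  # widen the race window so the hang is reproducible
        with coordinator_lock:
            print("commit path acquired both locks")

def heartbeat_shutdown():
    # HeartbeatThread shutting down the coordinator: coordinator lock first,
    # then the client lock, i.e. the inverse ordering.
    with coordinator_lock:
        time.sleep(0.1)
        with client_lock:
            print("heartbeat path acquired both locks")

t1 = threading.Thread(target=blocking_commit)
t2 = threading.Thread(target=heartbeat_shutdown)
t1.start()
t2.start()
t1.join()  # never returns: each thread holds one lock and waits on the other
t2.join()
```

With the sleeps widening the window, each thread ends up holding one lock and waiting forever on the other, which is exactly the hang reported in the linked issues.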
It is admittedly a very tight window for a race condition, but it does exist, based on my own experience as well as that of others in the community. The problem can be avoided either by giving the consumer exclusive access to the KafkaClient while it is trying to commit the offset, or by ensuring that the heartbeat thread holds the client exclusively while it does its checks.
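In the same toy terms as the sketch above, either fix amounts to making both paths agree on a single lock order (or, equivalently, holding the client lock for the whole critical section). Again with stand-in names rather than the real kafka-python internals:

```python
import threading
import time

client_lock = threading.Lock()
coordinator_lock = threading.Lock()

def blocking_commit():
    # Commit path, unchanged: client lock first, then the coordinator lock.
    with client_lock:
        time.sleep(0.1)
        with coordinator_lock:
            print("commit path done")

def heartbeat_shutdown():
    # Heartbeat path, reordered: it now also takes the client lock before the
    # coordinator lock, so both threads follow the same lock hierarchy.
    with client_lock:
        with coordinator_lock:
            time.sleep(0.1)
            print("heartbeat path done")

t1 = threading.Thread(target=blocking_commit)
t2 = threading.Thread(target=heartbeat_shutdown)
t1.start()
t2.start()
t1.join()  # both joins return; with a single lock order no AB-BA cycle exists
t2.join()
```

Because both threads now take client_lock before coordinator_lock, the deadlock cycle cannot form regardless of how the scheduler interleaves them.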
It should also be noted that, while I have only spelled out the race condition as it exists between the commit and heartbeat operations, I wouldn't be surprised if the heartbeat was also interfering with other operations because of this issue.
After further investigation, I found another avenue where this same deadlock can occur, this time during a consumer.poll(). If the timing is just right, there is a chance that the HeartbeatThread deadlocks with a subsequent invocation of ensure_coordinator_ready() inside the poll() that happens immediately before the consumer tries to read data from the buffer.
Is there any particular reason for the finer-grained locking scheme within the HeartbeatThread after line 958? It feels more like code bumming than anything else. The HeartbeatThread isn't doing much, so it is probably fine to play it safe and hold onto the client lock, rather than trying to let other threads have the client in case the HeartbeatThread doesn't actually need it. Clients would be better off looking to asyncio if they needed the degree of concurrency that such an optimization seems to cater to.
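For clarity, this is roughly the contrast I have in mind, sketched with stand-in names rather than the real HeartbeatThread loop:

```python
import threading

# Illustrative only; not the actual HeartbeatThread internals.
client_lock = threading.Lock()

def heartbeat_tick_fine_grained(checks):
    # Current style, roughly: take and drop the client lock around each small
    # check, so other threads can interleave between checks.
    for check in checks:
        with client_lock:
            check()

def heartbeat_tick_coarse(checks):
    # Suggested style: take the client lock once, run every check, release it.
    # The tick is short, so holding the lock costs little, and nothing can
    # interleave with the heartbeat mid-tick.
    with client_lock:
        for check in checks:
            check()

# Trivial checks standing in for the real heartbeat work.
heartbeat_tick_coarse([lambda: None, lambda: None])
```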