When using error_handle in the async producer, duplicate messages end up in the queue across different callbacks.
In the example below, a new UUID is generated on every call.
When bringing down Kafka, the error_handle function is correctly called. But the log shows that, for subsequent error_handle calls, the queue object contains previous items that were already part of an earlier error_handle call; in some cases this happens more than once.
The end result is that individual items within the rescued queue end up many times, sometimes 4x, downstream in another recovery process.
Is this expected behavior? And if so, is there any way to find out which items in the queue have already been sent to the error handler, so the duplicate ones can be removed?
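One possible workaround (a minimal sketch, not the library's own API): if each produced message carries a unique id, the error handler can remember which ids it has already forwarded to recovery and skip them on later callbacks. The make_message envelope, error_handle signature, and seen_ids set below are all hypothetical names used for illustration.

```python
import uuid

# Hypothetical: ids already handed to the error handler in earlier callbacks.
seen_ids = set()

def make_message(payload):
    # Assumption: we control the message envelope, so we can attach a
    # unique id to every produced message.
    return {"id": str(uuid.uuid4()), "payload": payload}

def error_handle(queue):
    """Hypothetical error callback: drop items already seen in a
    previous invocation, forward only the new ones to recovery."""
    fresh = []
    for msg in queue:
        if msg["id"] in seen_ids:
            continue  # already rescued in an earlier callback
        seen_ids.add(msg["id"])
        fresh.append(msg)
    return fresh

# Simulate the reported behavior: the second callback receives a queue
# snapshot that still contains the items from the first callback.
q1 = [make_message("a"), make_message("b")]
first = error_handle(q1)
q2 = q1 + [make_message("c")]   # previous items reappear
second = error_handle(q2)       # only the new item survives dedup
```

This does not answer whether the overlapping snapshots are intended, but it keeps the downstream recovery process from processing the same item 4x.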