
Trying to Reuse Kafka Instance Fails #2425

Closed
JaimeOnaindia opened this issue Feb 9, 2024 · 1 comment

Comments


JaimeOnaindia commented Feb 9, 2024

Hello, I'm trying to reuse a Kafka producer instance, but it fails when it is used to send data to a topic other than the first one.
Traceback:

```
Traceback (most recent call last):
  File "/snap/pycharm-community/364/plugins/python-ce/helpers/pydev/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec
    exec(exp, global_vars, local_vars)
  File "", line 1, in
  File "/home/jaime/.virtualenvs/portico/lib/python3.10/site-packages/kafka/producer/kafka.py", line 576, in send
    self._wait_on_metadata(topic, self.config['max_block_ms'] / 1000.0)
  File "/home/jaime/.virtualenvs/portico/lib/python3.10/site-packages/kafka/producer/kafka.py", line 702, in _wait_on_metadata
    raise Errors.KafkaTimeoutError(
kafka.errors.KafkaTimeoutError: KafkaTimeoutError: Failed to update metadata after 5.0 secs.
```

Kafka Implementation:

```python
import threading

from kafka import KafkaProducer

# `settings` comes from the application's configuration module


class KafkaProducerSingleton:
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get_instance(cls, *args, **kwargs):
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = KafkaProducer(
                        bootstrap_servers=settings.KAFKA_TRACES_CLUSTER,
                        api_version=(0, 10),
                        retries=100,
                        max_block_ms=5000
                    )
        return cls._instance
```
dpkp (Owner) commented Feb 13, 2025

It looks like you're failing to update metadata before your max_block_ms timeout is triggered. Consider increasing your timeout and/or investigating why you are unable to update metadata. Also consider wrapping your calls to producer.send with try/except for KafkaTimeoutError (especially if you are tuning max_block_ms lower than the default).
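A minimal sketch of that suggestion: wrap producer.send in try/except for KafkaTimeoutError, with a bounded retry and backoff. The `send_with_retry` helper name, the retry count, and the backoff values are illustrative, not part of kafka-python; the ImportError fallback is only a stand-in so the sketch runs even where kafka-python is not installed.

```python
import time

try:
    from kafka.errors import KafkaTimeoutError
except ImportError:
    # Stand-in so this sketch runs without kafka-python installed;
    # the real exception class lives in kafka.errors.
    class KafkaTimeoutError(Exception):
        pass


def send_with_retry(send, topic, value, attempts=3, backoff_secs=1.0):
    """Call send(topic, value) (e.g. producer.send), retrying when
    metadata cannot be fetched within max_block_ms."""
    for attempt in range(1, attempts + 1):
        try:
            return send(topic, value)
        except KafkaTimeoutError:
            if attempt == attempts:
                raise  # give up after the last attempt
            # Metadata was not available in time; back off and retry.
            time.sleep(backoff_secs * attempt)


# Usage with a real producer would look like:
# producer = KafkaProducerSingleton.get_instance()
# future = send_with_retry(producer.send, "my-topic", b"payload")
```

This keeps the 5-second max_block_ms but tolerates transient metadata refresh failures; if the timeout fires on every attempt, the last KafkaTimeoutError propagates so the underlying connectivity problem is still visible.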

dpkp closed this as not planned Feb 13, 2025