What would you like to be added?
Once in a while, in our K8s cluster, we see periods of high resource churn, which pushes a large number of events to the cluster's clustersynchroManager and results in large CPU spikes. What is the recommended way to ensure that this leader-elected pod is able to handle the load?
Do I increase the worker count? What is too high / too low?
Can we shard clustersynchroManager by resource uid so multiple pods can handle the work?
Can we prune how/which updates the clustersynchroManager watches / listens for? I see there is a feature to prune fields, but can that be expanded / made more dynamic to suit specific needs? https://clusterpedia.io/docs/features/prune-fields/ (a rough sketch of what I have in mind follows below.)
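For illustration, here is roughly the kind of tuning I am asking about. This is only a sketch based on my reading of the docs: the `--worker-number` flag, the image reference, and the `syncResources` layout of the PediaCluster spec are my assumptions, so the exact flag and field names may differ.

```yaml
# Sketch only: what I imagine for absorbing event bursts in the clustersynchro-manager pod.
# The --worker-number flag name is an assumption on my part; the real flag may differ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clusterpedia-clustersynchro-manager
  namespace: clusterpedia-system
spec:
  replicas: 1                         # leader election: only one pod actively syncs
  selector:
    matchLabels:
      app: clustersynchro-manager
  template:
    metadata:
      labels:
        app: clustersynchro-manager
    spec:
      containers:
      - name: clustersynchro-manager
        image: clustersynchro-manager:<tag>   # placeholder image reference
        args:
        - --worker-number=10          # assumed flag: raise sync worker concurrency
        resources:
          requests:
            cpu: "1"
            memory: 1Gi
          limits:
            cpu: "4"                  # headroom for churn-driven CPU spikes
            memory: 2Gi
---
# Sketch: narrowing what is synced per cluster so fewer update events reach the manager.
# Field layout follows my understanding of the PediaCluster v1alpha2 spec.
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: cluster-example
spec:
  syncResources:
  - group: apps
    resources:
    - deployments
  - group: ""
    resources:
    - pods
```

If there is an officially recommended combination of worker count, pod resources, and per-cluster syncResources for high-churn clusters, documenting it would be very helpful.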
Why is this needed?
Given the single-pod nature of the clustersynchroManager, it is important to make sure the pod can handle the rate of updates needed to keep the cluster in sync.
Hi @TNonet,
Thanks for opening an issue!
We will look into it as soon as possible.
Instructions for interacting with me using comments are available here.
If you have questions or suggestions related to my behavior, please file an issue against the gh-ci-bot repository.