Expose GCS object storage HTTP configuration #9777

Closed
56quarters opened this issue Oct 30, 2024 · 0 comments · Fixed by #9778

Is your feature request related to a problem? Please describe.

The configuration for the S3 object storage backend includes an http block that allows tuning how the underlying HTTP client behaves (timeouts, connection pooling, and TLS settings). The GCS configuration does not expose an equivalent block.
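
For context, the S3 backend's http block looks roughly like the following sketch. Field names follow Mimir's S3 client options, but treat the exact names, values, and the example bucket name as illustrative rather than authoritative:

```yaml
# Sketch of the existing S3 HTTP tuning block in Mimir's blocks storage
# config. Values shown are illustrative, not recommended defaults.
blocks_storage:
  backend: s3
  s3:
    bucket_name: example-mimir-blocks  # hypothetical bucket name
    endpoint: s3.dualstack.us-east-1.amazonaws.com
    http:
      idle_conn_timeout: 1m30s
      response_header_timeout: 2m
      tls_handshake_timeout: 10s
      expect_continue_timeout: 1s
      max_idle_connections: 100
      max_idle_connections_per_host: 100
      max_connections_per_host: 0  # 0 means no limit
```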

Describe the solution you'd like

The object storage library we use exposes HTTP configuration for both the GCS and S3 backends, so the same HTTP tuning parameters should be settable for GCS; it's just a question of exposing them in Mimir's configuration.

Describe alternatives you've considered

N/A

Additional context

Specifically, not being able to tune max_idle_conns and max_idle_conns_per_host causes connection churn when thousands of requests are made at once.
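
Once exposed, the GCS backend could accept the same block. A hypothetical sketch of the desired shape, mirroring the S3 field names above (not a confirmed final configuration):

```yaml
blocks_storage:
  backend: gcs
  gcs:
    bucket_name: example-mimir-blocks  # hypothetical bucket name
    http:
      # Raising the idle connection limits keeps connections pooled
      # across bursts of thousands of concurrent requests, avoiding
      # the churn of repeatedly re-establishing TLS connections.
      max_idle_connections: 1024
      max_idle_connections_per_host: 1024
```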

56quarters added a commit that referenced this issue Oct 30, 2024
This change allows the HTTP client used by GCS and Azure object storage
clients to be configured in the same way that we allow the S3 object
storage client to be configured. This will allow us to increase the
number of idle connections kept around in busy cells and avoid connection
churn when performing many object storage requests at once.

Fixes #9777
56quarters added a commit that referenced this issue Oct 31, 2024
* Allow HTTP configuration for GCS and Azure storage clients

This change allows the HTTP client used by GCS and Azure object storage
clients to be configured in the same way that we allow the S3 object
storage client to be configured. This will allow us to increase the
number of idle connections kept around in busy cells and avoid connection
churn when performing many object storage requests at once.

Fixes #9777

* Code review changes

* Code review changes