Is your feature request related to a problem? Please describe.
The default span processor (a batch span processor) with its default maximum queue size of 2048 is very susceptible to span loss for a system under any non-trivial load.
Describe the solution you'd like
It should be possible, with the Java OpenTelemetry SDK, to configure a span processor that is batching but has an unbounded queue size, to ensure no span loss under arbitrary load, at the cost of unbounded memory usage.
Describe alternatives you've considered
With the current jctools-based implementation, one can configure `otel.bsp.max.queue.size` to a very large value, at most 1073741824. This probably achieves the same thing in practice, but it is a workaround rather than a real unbounded queue.
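As a concrete illustration of this workaround (assuming the SDK autoconfiguration module, which reads this property, is on the classpath; the jar name is a placeholder):

```shell
# Raise the batch span processor's queue bound via the autoconfigure property.
# 1073741824 (2^30) is the largest value the jctools-backed queue accepts.
java -Dotel.bsp.max.queue.size=1073741824 -jar my-instrumented-app.jar
```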
Additional context
I'm unsure whether adding the possibility of making the queue unbounded contradicts the specification at https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#batching-processor.
If so, perhaps a new batching processor type could be introduced instead, e.g. UnboundedBatchingSpanProcessor.
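The core of such a processor is just a batching worker backed by an unbounded queue, so that adding a span never drops anything. A minimal sketch of that idea, using plain `java.util.concurrent` types rather than the real SDK interfaces (all names here are illustrative, not the actual OpenTelemetry API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative only: a batcher whose add() never drops items, at the cost
// of unbounded memory if the consumer (e.g. an exporter) cannot keep up.
class UnboundedBatcher<T> {
    // No capacity argument: LinkedBlockingQueue is unbounded by default.
    private final LinkedBlockingQueue<T> queue = new LinkedBlockingQueue<>();
    private final int maxBatchSize;

    UnboundedBatcher(int maxBatchSize) {
        this.maxBatchSize = maxBatchSize;
    }

    // Always succeeds; nothing is ever rejected or dropped.
    void add(T item) {
        queue.add(item);
    }

    // Drain up to maxBatchSize queued items into a batch (non-blocking).
    List<T> nextBatch() {
        List<T> batch = new ArrayList<>(maxBatchSize);
        queue.drainTo(batch, maxBatchSize);
        return batch;
    }
}
```

A real implementation would run `nextBatch()` on a worker thread and hand each batch to the span exporter, mirroring what the bounded BatchSpanProcessor does today.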