Alternative approaches to "fan-out" style RepartitionExec #14287
Comments
Thanks @westonpace for filing this -- I agree there are likely some improvements in this area that would be beneficial. I believe @crepererum spent quite a bit of time on the current RepartitionExec, so maybe he has some comments to share. Also, I was just speaking with @ozankabak the other day and this exact topic came up (improvements in RepartitionExec). I can't remember if he said @jayzhan211 was thinking about it or not 🤔
BTW, my suggestion for a first step would be to get some example query / test case that shows where the current algorithm doesn't work very well. Then we can evaluate potential solutions in the context of how they affect that example.
We have designed a poll-based repartition mechanism that polls its input whenever any of the output partitions are polled. This approach deviates from the round-robin pattern and instead ensures a truly even workload distribution for consumer partitions: a batch is sent to the partition that has completed its computation and is ready to process the next data. This mechanism also exhibits prefetching behavior, similar to SortPreservingMerge, although the prefetching is limited to a single batch (or potentially up to the number of partitions; this will be finalized based on benchmark results). The implementation is currently underway, and the initial benchmark results are very promising. Theoretically, this approach should perform better especially in scenarios where the producer pace is higher than the consumer's, which is the case I believe @westonpace mentions in the issue description. @Weijun-H is working on the implementation, and I hope we will open the PR in the coming weeks once it is in a robust and optimized state.
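For illustration, here is a minimal, hypothetical sketch of the pull-based idea (this is not the actual OnDemandRepartition code, and the types below are stand-ins rather than DataFusion APIs): the single input is advanced only when some output partition asks for its next batch, so whichever partition finishes first receives the next batch instead of batches being pre-assigned round-robin.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for an Arrow RecordBatch.
type Batch = Vec<u64>;

fn main() {
    // The single "input partition": a lazy source of batches behind a mutex.
    let input = Arc::new(Mutex::new((0..16u64).map(|i| vec![i; 1024])));

    let handles: Vec<_> = (0..3)
        .map(|id| {
            let input = Arc::clone(&input);
            thread::spawn(move || {
                let mut rows = 0usize;
                loop {
                    // Pull exactly one batch only when this output partition is
                    // ready for more work; the lock is held just long enough to
                    // advance the shared input.
                    let batch: Option<Batch> = input.lock().unwrap().next();
                    match batch {
                        Some(b) => rows += b.len(), // real per-batch work goes here
                        None => break,              // input exhausted
                    }
                }
                println!("output partition {id} processed {rows} rows");
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}
```

In a real async operator the outputs would poll a shared stream rather than lock a mutex on a thread, but the load-balancing effect is the same: a slow partition simply pulls fewer batches.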
IIRC the
The work stealing approach sounds reasonable but is also somewhat of a hack. I think if you don't know the output polling rate, then distributing data to the different outputs at a fixed rate (that's what round-robin does) isn't a great idea. I think finding a fast MPMC channel w/ a fixed capacity (to implement limited buffering) might be good.
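As a rough sketch of the limited-buffering idea, here is a bounded MPMC channel from the crossbeam-channel crate (chosen only as an example of an existing Rust implementation; this is not what RepartitionExec does today). The fixed capacity means the producer blocks instead of buffering without bound when consumers fall behind:

```rust
use crossbeam_channel::bounded;
use std::thread;

const CAPACITY: usize = 4; // limited buffering => bounded memory use

fn main() {
    let (tx, rx) = bounded::<Vec<u64>>(CAPACITY);

    let producer = thread::spawn(move || {
        for i in 0..16u64 {
            // `send` blocks when the channel is full, giving back-pressure
            // instead of unbounded buffering behind a skewed consumer.
            tx.send(vec![i; 1024]).unwrap();
        }
        // dropping `tx` closes the channel and ends the consumers' loops
    });

    let consumers: Vec<_> = (0..3)
        .map(|id| {
            let rx = rx.clone(); // MPMC: every consumer shares one receiver side
            thread::spawn(move || {
                let mut rows = 0;
                for batch in rx.iter() {
                    rows += batch.len(); // stand-in for real per-batch work
                }
                println!("consumer {id} saw {rows} rows");
            })
        })
        .collect();

    producer.join().unwrap();
    for c in consumers {
        c.join().unwrap();
    }
}
```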
I'll make an attempt this week to create a reproducer that triggers the memory issues we were seeing. The filter was "string contains" (or possibly "not string contains") on a string column that contained "LLM prompts" (e.g. paragraph-sized English prose, generally less than 1KB).
I am working on OnDemandRepartition these weeks; these use cases would be useful for the benchmarks.
Is your feature request related to a problem or challenge?
RepartitionExec is often used to fan out batches from a single partition into multiple partitions. For example, if we are scanning a very big Parquet file, we use the RepartitionExec to take the batches we receive from the Parquet file and fan them out to multiple partitions so that the data can be processed in parallel. Note: these RepartitionExec nodes are often not set up by hand but rather inserted by the plan optimizer.

The current approach sets up a channel per partition and (I believe) emits batches in round-robin order. This works well when the consumer is faster than the producer (typical in simple queries) or when the workload is evenly balanced. However, when the workload is skewed, this leads to problems.
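For context, here is a deliberately simplified sketch of that distribution pattern (illustrative only, not DataFusion's actual code): one channel per output partition, batches assigned round-robin, and an artificially slow partition whose channel keeps growing because batches are assigned to it at the same fixed rate as everyone else.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let num_partitions = 2;

    // One unbounded channel per output partition.
    let (senders, receivers): (Vec<_>, Vec<_>) =
        (0..num_partitions).map(|_| mpsc::channel::<Vec<u64>>()).unzip();

    // Consumers: partition 0 is artificially slow to simulate a skewed workload.
    let handles: Vec<_> = receivers
        .into_iter()
        .enumerate()
        .map(|(id, rx)| {
            thread::spawn(move || {
                for batch in rx {
                    if id == 0 {
                        thread::sleep(Duration::from_millis(10)); // skewed work
                    }
                    let _ = batch.len();
                }
            })
        })
        .collect();

    // Producer: round-robin assignment ignores how busy each consumer is, so
    // the slow partition's channel is where the unbounded buffering piles up.
    for i in 0..100u64 {
        let target = (i as usize) % num_partitions;
        senders[target].send(vec![i; 1024]).unwrap();
    }
    drop(senders);

    for h in handles {
        h.join().unwrap();
    }
}
```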
Describe the solution you'd like
Work stealing queues come to mind. I think there's some literature on putting these to use in databases. They can be designed fairly efficiently. Maybe there are some solid Rust implementations (building one from scratch might be a bit annoying).
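One existing Rust implementation is the crossbeam-deque crate. A hedged sketch of how the idea could look (purely illustrative; nothing here mirrors RepartitionExec's current structure): batches go into a shared Injector, and each output partition drains its local queue first and then steals from the injector, so no partition sits idle while work remains.

```rust
use crossbeam_deque::{Injector, Steal, Worker};
use std::sync::Arc;
use std::thread;

fn main() {
    // Global queue that the producer pushes batches into.
    let injector: Arc<Injector<Vec<u64>>> = Arc::new(Injector::new());
    for i in 0..32u64 {
        injector.push(vec![i; 1024]);
    }

    // Each output partition owns a local queue and steals when it runs dry.
    let handles: Vec<_> = (0..4)
        .map(|id| {
            let injector = Arc::clone(&injector);
            thread::spawn(move || {
                let local: Worker<Vec<u64>> = Worker::new_fifo();
                let mut rows = 0usize;
                loop {
                    let task = local.pop().or_else(|| loop {
                        match injector.steal_batch_and_pop(&local) {
                            Steal::Success(batch) => break Some(batch),
                            Steal::Empty => break None,
                            Steal::Retry => continue, // lost a race, try again
                        }
                    });
                    match task {
                        Some(batch) => rows += batch.len(), // per-batch work
                        None => break,                      // nothing left to steal
                    }
                }
                println!("partition {id} processed {rows} rows");
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}
```

In a fuller version each partition would also expose a Stealer so idle partitions could take work queued behind a busy one, which is where the load-balancing guarantees in the work-stealing literature come from.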
Otherwise, a simple and slow mutex-bound MPMC queue might be a nice alternative to at least avoid the memory issues (if not fix the performance issues).
There could be plenty of other approaches as well.
Describe alternatives you've considered
I don't have a good workaround at the moment.
Additional context
A smattering of Discord conversation: https://discord.com/channels/885562378132000778/1331914577935597579