
Add cartesian_product to ParallelIterator #1182

Open
wants to merge 3 commits into base: main

Conversation

nascheinkman

I was trying to compute cartesian products (as itertools provides) and noticed that rayon doesn't implement them and has an open issue for it: #754

I wasn't trying to preallocate anything, just run a computation in the parallel iterator over a cartesian product generated from ranges. I ended up with this solution, which worked for me. It's implemented only for ParallelIterator, not for IndexedParallelIterator as suggested in the issue, and I don't know enough to say how that affects performance.

Feel free to close this if it's not done properly or out of scope. I'm also open to suggestions if there's a better way to do this.
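
For reference, the kind of computation described above can already be sketched with rayon's existing flat_map_iter over two ranges. This is only an illustration of the use case, not the PR's adapter; the ranges and the per-pair computation are placeholders:

```rust
use rayon::prelude::*;

fn main() {
    // Cartesian product of two ranges, expanded lazily per outer item.
    let sum: u32 = (0u32..100)
        .into_par_iter()
        .flat_map_iter(|i| (0u32..100).map(move |j| (i, j)))
        .map(|(i, j)| i * j) // placeholder per-pair computation
        .sum();
    assert_eq!(sum, 4950 * 4950);
}
```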

Comment on lines 43 to 44
.into_par_iter()
.map(|i_item| repeat(i_item.clone()).zip(self.j.clone()))
Collaborator

Are the calls to into_par_iter (I is already a ParallelIterator here) and i_item.clone() (repeat will clone the item repeatedly anyway, so that first clone seems unnecessary) really required here?

I also think we can use flat_map here instead of map followed by flatten, to possibly benefit from additional optimizations in its implementation.
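
For concreteness, here is a sketch of the shape this suggestion points to, with assumed concrete types (u32 items and a Vec for the second factor) standing in for the adapter's generic fields; it is not the PR's actual code:

```rust
use rayon::iter::repeat;
use rayon::prelude::*;

// flat_map fuses the map + flatten pair, and repeat takes ownership of
// the item, so no up-front clone of i_item is needed. rayon's
// repeat(..).zip(..) requires the other side to convert into an
// IndexedParallelIterator, which a Vec does.
fn pairs(
    i: impl ParallelIterator<Item = u32>,
    j: Vec<u32>,
) -> impl ParallelIterator<Item = (u32, u32)> {
    i.flat_map(move |i_item| repeat(i_item).zip(j.clone()))
}
```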

Author

Good catch!

@wagnerf42
Contributor

Hi, you have to be careful, because flat_map can end up being quite expensive compared to flat_map_iter.
I'm not sure what the right choice is here, because on small products you might want to be fully parallel.
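
A small side-by-side sketch of that trade-off (the ranges are placeholders, not the PR's code):

```rust
use rayon::prelude::*;

fn main() {
    // flat_map: the inner part is itself a parallel iterator, so both
    // dimensions can be split across threads (fully parallel, but each
    // inner split adds scheduling overhead).
    let a: Vec<(u32, u32)> = (0u32..4)
        .into_par_iter()
        .flat_map(|i| (0u32..4).into_par_iter().map(move |j| (i, j)))
        .collect();

    // flat_map_iter: each inner range runs sequentially on whichever
    // thread holds the outer item (cheaper per item, parallel only
    // across i).
    let b: Vec<(u32, u32)> = (0u32..4)
        .into_par_iter()
        .flat_map_iter(|i| (0u32..4).map(move |j| (i, j)))
        .collect();

    assert_eq!(a.len(), b.len());
}
```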
