
[SPARK-51016][SQL] Fix for incorrect results on retry for Left Outer Join with indeterministic join keys #50029

Open · wants to merge 4 commits into master

Conversation

ahshahid

What changes were proposed in this pull request?

Added a new lazy boolean, exprValHasIndeterministicCharacter, to the Expression class to indicate whether the expression's evaluation result was obtained via some indeterministic computation.

The main difference from the existing deterministic boolean is that the former captures whether evaluating the same expression more than once within a given iteration would change its value. On that basis, any leaf expression is always deterministic.

The latter, by contrast, indicates whether an expression's evaluation uses any non-deterministic component. So even a leaf expression (such as an Attribute) can have this flag set to true, if it points to a result obtained via an indeterministic computation.
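A minimal sketch of how such a flag could propagate through an expression tree (class and member names other than exprValHasIndeterministicCharacter and deterministic are illustrative, not the PR's actual code):

```scala
// Illustrative model of the two flags; not Spark's actual Expression class.
abstract class Expr {
  def children: Seq[Expr]

  // Existing notion: true if evaluating this node repeatedly within one
  // iteration always yields the same value; leaves are trivially true.
  protected def selfDeterministic: Boolean = true
  lazy val deterministic: Boolean =
    selfDeterministic && children.forall(_.deterministic)

  // Proposed notion (per the PR description): true if the value was
  // derived, anywhere upstream, from a non-deterministic computation.
  lazy val exprValHasIndeterministicCharacter: Boolean =
    !deterministic || children.exists(_.exprValHasIndeterministicCharacter)
}

// A leaf attribute is deterministic in the classic sense (re-reading it
// never changes the value), yet it can still carry indeterministic
// character when it is bound to the output of an indeterministic expression.
class AttributeRef(boundToIndeterministicOutput: Boolean) extends Expr {
  def children: Seq[Expr] = Nil
  override lazy val exprValHasIndeterministicCharacter: Boolean =
    boundToIndeterministicOutput
}
```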

In addition to the Expression change, ShuffleDependency is augmented to indicate whether its hashing (partitioning) expression uses any non-deterministic component.
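A hypothetical sketch of that augmentation (class and field names here are illustrative; the PR's actual ShuffleDependency change is not reproduced):

```scala
// Illustrative only: a ShuffleDependency-like carrier recording whether
// its partitioner hashes on keys with indeterministic character.
case class ShuffleDependencyInfo(
    numPartitions: Int,
    partitionKeyIsIndeterminate: Boolean)

// With such a flag, a shuffle map stage can be classified as indeterminate
// even when the base RDD's own output is deterministic.
def stageIsIndeterminate(
    rddOutputIsIndeterminate: Boolean,
    dep: ShuffleDependencyInfo): Boolean =
  rddOutputIsIndeterminate || dep.partitionKeyIsIndeterminate
```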

Why are the changes needed?

The method Stage.isIndeterminate does not return true when the ShuffleDependency uses a Partitioner based on an indeterministic Attribute. The bug exists at both the Stage and the RDD level: an expression that uses an Attribute derived from an indeterministic expression loses the information that its output depends on an indeterministic component.
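An illustrative query of the affected shape (assuming an active SparkSession named spark; this is not a test from the PR):

```scala
import org.apache.spark.sql.functions.rand

// The join key is derived from rand(), so it is indeterministic. If a
// shuffle map task is retried, recomputed rows can hash to different
// partitions than in the first attempt; replaying only a subset of
// partitions can then produce incorrect left outer join results.
val left = spark.range(0, 1000)
  .withColumn("key", (rand() * 100).cast("int"))
val right = spark.range(0, 100).withColumnRenamed("id", "key")
val joined = left.join(right, Seq("key"), "left_outer")
```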

Does this PR introduce any user-facing change?

No

How was this patch tested?

Added unit tests / a bug test to validate the behaviour of indeterminate stages.

Was this patch authored or co-authored using generative AI tooling?

No

mridulm (Contributor) commented Feb 22, 2025

It is unclear to me why the changes to spark core are required - marking the RDD with the appropriate DeterministicLevel should be sufficient.

ahshahid (Author) replied:

> It is unclear to me why the changes to spark core are required - marking the RDD with the appropriate DeterministicLevel should be sufficient.
In the case of a ShuffleMap stage, the base RDD itself might be deterministic, but the Partitioner may not be.
If any of the changes in this PR are removed, the tests will fail.
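For context, the approach suggested above looks roughly like the sketch below; as the reply notes, it marks only the RDD's own output, not the determinism of the shuffle's partitioning key (a sketch, not code from the PR):

```scala
import org.apache.spark.{Partition, TaskContext}
import org.apache.spark.rdd.{DeterministicLevel, RDD}

// An RDD can declare its own output determinism by overriding
// getOutputDeterministicLevel. This classifies the RDD's *output*, but a
// ShuffleMapStage built on it can still be mis-classified when the RDD is
// DETERMINATE and only the Partitioner's hashing key is indeterministic.
class IndeterminateRDD(prev: RDD[Int]) extends RDD[Int](prev) {
  override def compute(split: Partition, context: TaskContext): Iterator[Int] =
    prev.iterator(split, context)
  override protected def getPartitions: Array[Partition] = prev.partitions
  override protected def getOutputDeterministicLevel: DeterministicLevel.Value =
    DeterministicLevel.INDETERMINATE
}
```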
