JIT: Unblock Vector###<long> intrinsics on x86 #112728
Tagging subscribers to this area: @JulieLeeMSFT, @jakobbotsch
This is ready for review.
cc @tannergooding
```cpp
// Keep casts with operands usable from memory.
if (castOp->isContained() || castOp->IsRegOptional())
{
    return op;
}
```
This condition, added in #72719, made this method effectively useless. Removing it was a zero-diff change. I can look in the future at containing the casts rather than removing them.
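For illustration, here is a minimal sketch (my own example, not code from the PR) of the kind of redundant cast this method can remove: when the intrinsic only consumes the low bits of its operand, an explicit truncating cast feeding it is a no-op.

```csharp
using System.Runtime.Intrinsics;

static class CastExample
{
    // Hypothetical example: CreateScalarUnsafe<byte> only cares about the low
    // 8 bits of the element, so the (byte) truncation feeding it is redundant
    // and lowering can drop it instead of emitting a separate zero-extend.
    static Vector128<byte> LowByte(int x) => Vector128.CreateScalarUnsafe((byte)x);
}
```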
```diff
@@ -4677,19 +4539,16 @@ GenTree* Lowering::LowerHWIntrinsicCreate(GenTreeHWIntrinsic* node)
         return LowerNode(node);
     }

     GenTree* op2 = node->Op(2);

     // TODO-XArch-AVX512 : Merge the NI_Vector512_Create and NI_Vector256_Create paths below.
```
The churn in this section is just taking care of this TODO.
```cpp
assert(comp->compIsaSupportedDebugOnly(InstructionSet_SSE2));

tmp2 = InsertNewSimdCreateScalarUnsafeNode(TYP_SIMD16, op2, simdBaseJitType, 16);
LowerNode(tmp2);

node->ResetHWIntrinsicId(NI_SSE_MoveLowToHigh, tmp1, tmp2);
```
Changing this to `UnpackLow` shows up as a regression in a few places, because it's one byte larger, but it enables other optimizations since `unpcklpd` takes a memory operand and `movlhps` doesn't:

```diff
- movups   xmm0, xmmword ptr [reloc @RWD00]
- movlhps  xmm1, xmm0
+ unpcklpd xmm1, xmmword ptr [reloc @RWD00]
```
```cpp
if (varDsc->lvIsParam)
{
    // Promotion blocks combined read optimizations for SIMD loads of long params
    return;
}
```
In isolation, this change produced a small number of diffs and was mostly an improvement. A few regressions show up in the SPMI reports, but the overall impact is good, especially considering the places where we can now load a long to a vector with `movq`.
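As a rough sketch of what this enables (an assumption for illustration; the wrapper name and the exact codegen are mine, not from the PR), an unpromoted long in memory can go straight into an XMM register with one `movq`:

```csharp
using System.Runtime.Intrinsics;

static class LongLoadExample
{
    // When the long param isn't split into two 32-bit halves by promotion,
    // the JIT can read all 8 bytes in one instruction, roughly:
    //   movq xmm0, qword ptr [esp+4]   ; assumed x86 codegen, not verified
    static Vector128<long> FromLong(long value) => Vector128.CreateScalar(value);
}
```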
The primary motivation here is resolving the many TODOs around HWIntrinsics involving scalar longs on x86. We're currently blocking a lot of intrinsics from expansion because of the inability to handle long values.
The most significant change here is promoting `CreateScalar` and `ToScalar` to code-generating intrinsics instead of converting them to other intrinsics at lowering. This unblocks several optimizations, since we can now allow `CreateScalar` and `ToScalar` to be contained and can specialize codegen depending on whether they end up loading/storing from/to memory or not. Some example improvements on x64:

- `Vector128.CreateScalar(ref float)`
- `Vector128.CreateScalar(ref double)`
- `Vector128.CreateScalarUnsafe(ref short)`
- `ref byte b = Vector128<byte>.ToScalar()`

And the less realistic, but still interesting: `Sse.AddScalar(Vector128.CreateScalar(ref float), Vector128.CreateScalar(ref float)).ToScalar()`
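A minimal sketch of those patterns (the wrapper names are hypothetical, and the codegen expectations are assumptions rather than output from this PR):

```csharp
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class ScalarExamples
{
    // CreateScalar from a byref: with CreateScalar generating code directly,
    // the memory load can be contained and folded into a single instruction.
    static Vector128<float> FromRef(ref float f) => Vector128.CreateScalar(f);

    // ToScalar stored through a byref: the store side can likewise be
    // specialized depending on whether the destination is in memory.
    static void ToRef(Vector128<byte> v, ref byte b) => b = v.ToScalar();

    // The "less realistic, but still interesting" round trip.
    static float AddScalars(ref float x, ref float y) =>
        Sse.AddScalar(Vector128.CreateScalar(x), Vector128.CreateScalar(y)).ToScalar();
}
```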
x86 diffs are much more significant, because of the newly enabled intrinsic expansion: