Clang crashes during host-only CUDA compilation w/ precompiled headers. #106394
@llvm/issue-subscribers-clang-frontend Author: kadir çetinkaya (kadircet)
```
$ cat clang/test/Frontend/a.cpp
// RUN: %clang -xcuda -nocudainc --cuda-host-only -fsyntax-only -Xclang -emit-pch -std=gnu++20 %s > %t.pch
// RUN: %clang -xcuda -nocudainc --cuda-host-only -fsyntax-only -include-pch %t.pch -std=gnu++20 %s
#ifndef PREAMBLE
#define PREAMBLE
void *operator new(decltype(sizeof(int)));
#else
void foo() { delete (int *)0; };
#endif
```
I don't have any insight here; I don't have experience with either CUDA or PCH. There are a lot of …
Unfortunately, not much insight from my side either. I tried to chase this a little bit, but the amount of …
How did we end up with this crash to start with? What do we want to have in the end? PCH is not useful for CUDA, as it requires all compilations to be done with the same flags, and CUDA compilation uses different flags for the host and device sub-compilations. We'd need a set of PCH files, one per sub-compilation, and the driver currently does not support this. Is there a sensible way to disable/ignore/error out on PCH options during CUDA compilation?
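As a rough illustration (commands sketched here, not taken from the issue; test.cu is a placeholder file), the driver's planned job list makes the mismatch visible:
```
# -### prints the planned jobs without running them; -nocudainc/-nocudalib
# just avoid needing a CUDA installation for the illustration.
# A plain CUDA compile plans two cc1 sub-compilations with different flags:
# a device-side one (-fcuda-is-device, NVPTX target) and a host-side one,
# so a single -include-pch file cannot match both sub-compilations.
clang -### -fsyntax-only -x cuda -nocudainc -nocudalib test.cu

# With --cuda-host-only only the host-side job remains, which is the single
# compilation-job case that matters for the clangd workflow described below.
clang -### -fsyntax-only -x cuda --cuda-host-only -nocudainc -nocudalib test.cu
```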
The real-life workflow involving this crash is clangd. It builds a PCH for all the headers in the preamble of the file being edited. So erroring out when we're about to perform multiple compilation jobs sounds sensible to me, but making it not crash when we have a single compilation job in host-only mode, reusing a PCH built with the exact same configuration, would be preferred for clangd users. Right now, this is rendering clangd useless (it keeps crash-looping) for all such files.
I suspect it will be nearly useless even without the crash. Even if it manages to produce some results, I would not trust them.
The problem is -- CUDA compilation inherently depends on the CUDA SDK and its headers. Compilation without them will give you all sorts of weird results, as it will completely break function overloading between host/device/global functions. Yes, it may work OK on toy examples and a subset of the host-only code, but I suspect most real CUDA code would produce way too much noise: overloads will look like conflicting redeclarations/redefinitions, some GPU-side functions will be missing, a lot of commonly used types will be unavailable, C++ templated code will be instantiated in the wrong way, etc.

That said, I don't mind fixing the crash; there's clearly something odd happening there. I'm just not sure it will be of much help with your end use case.
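To make the overloading point concrete, here is a minimal hypothetical sketch (not code from the issue; fast_exp is an invented name):
```
// With the CUDA SDK headers included, __host__ and __device__ are macros
// that expand to target attributes, so these form a legal pair of overloads
// that host code and device code select between:
__host__   float fast_exp(float x) { return x + 1.0f; }
__device__ float fast_exp(float x) { return x + 2.0f; }

// With -nocudainc those macros are never defined, so the code above does not
// even parse ("unknown type name '__host__'"), and if the macros were stubbed
// out as empty the two bodies would be an ordinary C++ redefinition error
// rather than a valid host/device overload pair.
```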
Well, I am not a CUDA developer myself, so I can't really say much about this. All I know is that some people were actually putting in effort back in the day to make sure it was "useful". That being said, I am not sure to what extent they succeeded (or whether we did a good job of not regressing it).
Just to be clear, this is a minimized repro from an actual invocation we have internally, one that does set up the CUDA libraries properly. The repro here triggers the crash through the same stack trace, without any external dependencies.
Got it. I'll see what I can do to fix it.
This is odd. Even though PCH generation and the crashing test both compile only host code, the overloads for the delete all set implicit …
Looks like the AST in the compiler run that created the .pch file contained both …
Now the question is -- where did we lose the …