Now that a KA.jl back-end is part of CUDA.jl and being tested on CI, I encountered a couple of issues:
The tests generate output, while IMO the KA.jl test suite should be silent:
```
From worker 13:    CUDA runtime 12.1, artifact installation
From worker 13:    CUDA driver 12.1
From worker 13:    NVIDIA driver 530.41.3
From worker 13:
From worker 13:    Libraries:
From worker 13:    - CUBLAS: 12.1.0
From worker 13:    - CURAND: 10.3.2
From worker 13:    - CUFFT: 11.0.2
From worker 13:    - CUSOLVER: 11.4.4
From worker 13:    - CUSPARSE: 12.0.2
From worker 13:    - CUPTI: 18.0.0
From worker 13:    - NVML: 12.0.0+530.41.3
From worker 13:
From worker 13:    Toolchain:
From worker 13:    - Julia: 1.8.5
From worker 13:    - LLVM: 13.0.1
From worker 13:    - PTX ISA support: 3.2, 4.0, 4.1, 4.2, 4.3, 5.0, 6.0, 6.1, 6.3, 6.4, 6.5, 7.0, 7.1, 7.2
From worker 13:    - Device capability support: sm_37, sm_50, sm_52, sm_53, sm_60, sm_61, sm_62, sm_70, sm_72, sm_75, sm_80, sm_86
From worker 13:
From worker 13:    1 device:
From worker 13:      0: Quadro RTX 5000 (sm_75, 9.102 GiB / 15.000 GiB available)
From worker 13:    Hello from thread 1!
From worker 13:    Hello from thread 2!
From worker 13:    Hello from thread 3!
From worker 13:    Hello from thread 4!
```
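One way the test suite could stay quiet is by capturing stdout around the noisy examples. A minimal sketch (not the actual KA.jl test harness; `noisy_example` is a hypothetical stand-in for the chatty example above), using Base's `redirect_stdout`:

```julia
using Test

# Hypothetical stand-in for a test that prints, like the
# "Hello from thread N!" example output above.
noisy_example() = println("Hello from thread 1!")

@testset "silent example" begin
    # Redirect stdout into a pipe so nothing reaches the CI log.
    output = Pipe()
    redirect_stdout(output) do
        noisy_example()
    end
    close(output.in)
    captured = read(output, String)
    # Optionally still assert on what was printed.
    @test occursin("Hello from thread 1!", captured)
end
```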
From worker 13: Why this should work
LLVM.jl 5.0 removes the atomic macros (so using them now triggers an abort), but they were apparently still being used through KA.jl:
```
From worker 13:    histogram tests: Error During Test at /home/tim/Julia/depot/packages/KernelAbstractions/XhtMv/examples/histogram.jl:74
From worker 13:      Got exception outside of a @test
From worker 13:      InvalidIRError: compiling gpu_histogram_kernel!(KernelAbstractions.CompilerMetadata{KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.DynamicCheck, Nothing, CartesianIndices{1, Tuple{Base.OneTo{Int64}}}, KernelAbstractions.NDIteration.NDRange{1, KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.StaticSize{(256,)}, CartesianIndices{1, Tuple{Base.OneTo{Int64}}}, Nothing}}, CuDeviceVector{Int64, 1}, CuDeviceVector{Int64, 1}) in world 32481 resulted in invalid LLVM IR
From worker 13:      Reason: unsupported use of an undefined name (use of 'atomic_pointermodify')
From worker 13:      Stacktrace:
From worker 13:        [1] getproperty
From worker 13:          @ ./Base.jl:31
From worker 13:        [2] modify!
From worker 13:          @ ~/Julia/depot/packages/UnsafeAtomicsLLVM/i4GMj/src/internal.jl:18
From worker 13:        [3] modify!
From worker 13:          @ ~/Julia/depot/packages/Atomix/F9VIX/src/core.jl:33
From worker 13:        [4] macro expansion
From worker 13:          @ ~/Julia/depot/packages/KernelAbstractions/XhtMv/examples/histogram.jl:48
From worker 13:        [5] gpu_histogram_kernel!
From worker 13:          @ ~/Julia/depot/packages/KernelAbstractions/XhtMv/src/macros.jl:81
From worker 13:        [6] gpu_histogram_kernel!
From worker 13:          @ ./none:0
```
I'm not sure why we're already using Atomix.jl here, ref. #1790?
KernelAbstractions uses Atomix.jl since it is otherwise impossible to use atomic operations across backends.
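For context, the backend-agnostic pattern in question looks roughly like the histogram example from the stack trace. This is a sketch assuming the KA.jl 0.9-style launch API, not the exact code from `examples/histogram.jl`:

```julia
using KernelAbstractions
using Atomix

# Backend-agnostic histogram kernel: Atomix.@atomic lowers to the
# appropriate atomic operation for each backend (e.g. via
# UnsafeAtomicsLLVM on the GPU), which is why KA depends on Atomix.jl
# instead of backend-specific atomic macros.
@kernel function histogram_kernel!(histogram, input)
    i = @index(Global, Linear)
    @inbounds Atomix.@atomic histogram[input[i]] += 1
end

# Run on the CPU backend; with CUDA.jl the same kernel would be
# instantiated with the CUDA backend instead.
input = rand(1:16, 1024)
histogram = zeros(Int, 16)
histogram_kernel!(CPU(), 64)(histogram, input; ndrange = length(input))
KernelAbstractions.synchronize(CPU())
```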
I will update UnsafeAtomicsLLVM for LLVM 5.0; #1790 is about reducing that dependency by actually implementing the atomics in CUDA.jl (I think I can do it in two steps).
Regarding the IO output, I don't remember why we didn't capture that. It should just be a quick PR to KA's test suite.
cc. @vchuravy