Replies: 5 comments
-
No, the library is currently not wrapped or used in CUDA.jl. Neither is cuBLASLt. I don't have immediate plans to implement support, so feel free to take a stab at it.
-
Just curious, what does Lt stand for? They both end with Lt.
-
Can you give me some direction on wrapping it? I don't mind giving it a try.
-
The first step would be to make sure that the library and headers are part of CUDA_full_jll, as built on Yggdrasil. If that's the case, you can generate low-level wrappers from the headers using the scripts in https://github.com/JuliaGPU/CUDA.jl/tree/master/res/wrap. Once those are generated, you'll want to make sure that the library (libcusparseLt.so, or similar) is a LibraryProduct exported by CUDA_Runtime_jll, although you could probably skip that step at first and use a locally-discoverable library instead. Finally, you'll want to integrate calls to that library into the CUSPARSE.jl code so that its functionality is usable. At the least, that means creating a handle, but for maximal user-friendliness you'd also want to write some high-level wrappers. All of the above has basically been done already for libcusparse, so it's often a game of copy-pasting and adapting things.
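To make the last step more concrete, here is a minimal sketch of what the hand-written part could look like. The `ccall` target `cusparseLtInit` and the opaque-struct handle follow the cuSPARSELt C API; everything else (the library path, the storage size, the Julia-side names) is an assumption for illustration, not actual CUDA.jl API, and running it requires a GPU plus a local cuSPARSELt install:

```julia
# Hypothetical sketch, not CUDA.jl code. Assumes libcusparseLt.so is
# discoverable on the library search path; in CUDA.jl this would instead
# come from a LibraryProduct in CUDA_Runtime_jll.
const libcusparseLt = "libcusparseLt.so"

# cuSPARSELt handles are opaque structs passed by pointer, so the Julia
# mirror is just a fixed-size blob of bytes (size taken from the C header;
# the value here is an assumption).
struct cusparseLtHandle_t
    data::NTuple{1024,UInt8}
end

# Low-level wrapper of the kind the res/wrap scripts would generate:
function cusparseLtInit(handle)
    ccall((:cusparseLtInit, libcusparseLt), Cint,
          (Ptr{cusparseLtHandle_t},), handle)
end

# Creating a handle -- the minimal integration step mentioned above:
handle = Ref{cusparseLtHandle_t}()
status = cusparseLtInit(handle)
status == 0 || error("cusparseLtInit failed with status $status")
```

The high-level wrappers would then hide the handle and status-checking behind multiple-dispatch methods, mirroring how CUDA.jl already wraps libcusparse.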
-
Low-level wrappers are provided now.
-
When searching around sparse matrix - dense matrix multiplication, I came across these slides, which mention a new library dedicated to this type of operation. I know CUDA.jl can handle this SpMM with decent performance, but I'm wondering whether CUSPARSELt is already used underneath. If not, do you have plans to support it, or is it worth supporting? Thanks!