Replies: 3 comments 3 replies
-
Yeah you could do this, and just "putting a sparse matrix into `DenseLinearOperator`" would probably be the easiest thing to try. I'm not super familiar with the current limitations / pitfalls of the torch sparse tensors though, so there may be more to this. In an ideal world, since …
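For concreteness, a minimal sketch of that "easiest thing to try" (hypothetical; whether the downstream MVM-based routines then cope with a sparse layout is exactly the open question here):

```python
import torch
from linear_operator.operators import DenseLinearOperator

# Small sparse covariance matrix in COO format.
idx = torch.tensor([[0, 1, 2, 2], [0, 1, 0, 2]])
val = torch.tensor([1.0, 2.0, 0.5, 3.0])
K_sparse = torch.sparse_coo_tensor(idx, val, size=(3, 3)).coalesce()

# "Just put the sparse matrix into DenseLinearOperator" and see how far the
# MVM-based machinery gets; this may fail wherever dense-only ops are assumed.
K_op = DenseLinearOperator(K_sparse)
rhs = torch.randn(3, 2)
print(K_op.matmul(rhs))
```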
-
Thank you for your answers. I tried some initial things, e.g. passing a sparse CSR tensor into `DenseLinearOperator`. I already had to circumvent some limitations of CSR tensors.
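As a rough illustration of the kind of workaround this involves (a hypothetical example, not the code from this thread, and exact support varies with the PyTorch version): some operations are not implemented for the CSR layout, so one ends up switching layouts on the fly:

```python
import torch

# Sparse covariance matrix in CSR format.
crow = torch.tensor([0, 2, 3, 4])
col = torch.tensor([0, 2, 1, 2])
val = torch.tensor([1.0, 0.5, 2.0, 3.0])
K_csr = torch.sparse_csr_tensor(crow, col, val, size=(3, 3))

# Products with a dense right-hand side are supported ...
rhs = torch.randn(3, 2)
y = K_csr @ rhs

# ... but other operations may not be, so a common workaround is to convert
# to the COO layout (or even to dense) for the unsupported calls.
K_coo = K_csr.to_sparse_coo()
K_t = K_coo.t()
```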
-
OK, so I ended up concluding the following: in the end, I coded a CUDA kernel that follows the KeOps approach. This is quite fast, but also highly specialized, so I think it is of limited use to other people. However, the general approach of writing a custom CUDA matmul kernel is quite appealing if performance is a primary concern and the covariance matrix becomes large.
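For readers who want the flavour of the KeOps-style approach without writing CUDA, here is a pure-PyTorch sketch of the underlying idea (compute covariance blocks on the fly and never materialize the full matrix); the actual kernel mentioned above is specialized and not reproduced here:

```python
import torch

def rbf_matvec_chunked(x, v, lengthscale=1.0, chunk_size=1024):
    """Compute y = K @ v with K_ij = exp(-||x_i - x_j||^2 / (2 * l^2)),
    materializing only one block of rows of K at a time."""
    n = x.shape[0]
    y = torch.empty(n, dtype=v.dtype, device=x.device)
    for start in range(0, n, chunk_size):
        end = min(start + chunk_size, n)
        # Pairwise squared distances for this block of rows only.
        d2 = torch.cdist(x[start:end], x).pow(2)
        k_block = torch.exp(-d2 / (2 * lengthscale ** 2))
        y[start:end] = k_block @ v
    return y

x = torch.randn(5000, 3)
v = torch.randn(5000)
y = rbf_matvec_chunked(x, v)
```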
-
In the BBMM paper and in the docs of the `linear-operators` package you already point out that your method is extensible to sparse matrices, as it only requires MVM operations. I would like to investigate extending `linear-operators` with a new `SparseLinearOperator` which just contains a sparse torch matrix and delegates all features to the existing torch implementations. What would be the complexity of implementing this, compared to just putting a sparse matrix into a `DenseLinearOperator`?
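For what it's worth, a hedged sketch of what such a `SparseLinearOperator` could look like, assuming the usual contract of implementing `_matmul`, `_size`, and `_transpose_nonbatch` described in the `linear_operator` docs (whether the rest of the package's machinery then works with sparse tensors is exactly what would need investigating):

```python
import torch
from linear_operator import LinearOperator

class SparseLinearOperator(LinearOperator):
    """Hypothetical wrapper around a (non-batched) sparse COO tensor that
    delegates the core operations to torch's sparse routines."""

    def __init__(self, sparse_tensor):
        super().__init__(sparse_tensor)
        self.sparse_tensor = sparse_tensor

    def _matmul(self, rhs):
        # torch supports products of a sparse matrix with a dense rhs.
        return torch.sparse.mm(self.sparse_tensor, rhs)

    def _size(self):
        return self.sparse_tensor.size()

    def _transpose_nonbatch(self):
        return SparseLinearOperator(self.sparse_tensor.t().coalesce())

# Usage sketch:
idx = torch.tensor([[0, 1, 2], [0, 1, 2]])
val = torch.tensor([1.0, 2.0, 3.0])
K = SparseLinearOperator(torch.sparse_coo_tensor(idx, val, (3, 3)).coalesce())
print(K.matmul(torch.randn(3, 2)))
```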