Symmetric matrices and convolution operators #19
Comments
Do you mean …? I don't think we currently have a low-level representation of a symmetric matrix that just saves …
So you want to run this on cuDNN directly? I don't know about this, but I don't think cuDNN would be able to exploit this representation on the …
This would be the full covariance, across channels included :) I thought LinearOperator relied to some extent on cuDNN or its sparse counterpart; since the convolution still needs to be performed, I think it is quite amenable to GPU implementation in general.
So the operator you care about is …
Your LinearOperator can use PyTorch commands that utilize cuDNN. For example, I would imagine that the …
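To illustrate the point above, here is a minimal matrix-free sketch (not using the library's actual base class; the helper name `cov_out_matvec` is hypothetical). It implements a matvec with the symmetric output covariance `C = A Σ Aᵀ`, where `A` is the linear map of a stride-1, "same"-padded `conv2d` — both conv calls are dispatched to cuDNN when a GPU backend is available:

```python
import torch
import torch.nn.functional as F

def cov_out_matvec(u, weight, sigma_in, in_shape):
    """Matrix-free matvec u -> (A @ sigma_in @ A.T) @ u, where A is the
    linear map of a stride-1, 'same'-padded conv2d.
    Hypothetical helper, for illustration only."""
    c_in, H, W = in_shape
    c_out = weight.shape[0]
    pad = weight.shape[-1] // 2
    # A^T u: the adjoint of conv2d (same stride/padding) is conv_transpose2d
    at_u = F.conv_transpose2d(u.view(1, c_out, H, W), weight, padding=pad)
    v = sigma_in @ at_u.reshape(-1)       # sigma_in @ (A^T u)
    out = F.conv2d(v.view(1, c_in, H, W), weight, padding=pad)
    return out.reshape(-1)                # A @ sigma_in @ (A^T u)
```

Because the operator is only ever touched through matvecs, the full `(c_out·H·W)²` covariance is never materialized; symmetry is inherited from `sigma_in` by construction.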
Hi!
I'm interested in using the library to reduce the memory and computational cost of computing covariance matrices of a convolution operation. The covariance is symmetric by construction, which would roughly halve the burden. The result may be consumed by further convolutional layers or other operations, or in some cases only the diagonal may be kept (which should in turn reduce the burden to the square root of the full covariance's size).
Edit: it would actually be symmetric tensors of shape `channel*width*height*channel*width*height`.
Is this possible with the library at the moment? Should there be performance concerns with respect to the cuDNN implementations of convolutions?
A possible approach I thought of involves unfolding the input, which I'm not sure is fully supported by the library, and which is somewhat inefficient in plain PyTorch.
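For a concreteness check on the symmetry/storage argument, a dense baseline can be sketched by pushing a standard basis through `F.conv2d` to materialize the conv's matrix, forming `C = A Σ Aᵀ`, and then keeping only the upper triangle (or the diagonal). This is deliberately naive, O(n) conv calls, and the helper name is made up; it is only meant to show the expected savings, not a proposed implementation:

```python
import torch
import torch.nn.functional as F

def conv_as_matrix(weight, in_shape, pad=1):
    """Dense matrix of the conv2d linear map (illustrative only)."""
    c_in, H, W = in_shape
    n = c_in * H * W
    basis = torch.eye(n).view(n, c_in, H, W)     # standard basis "images"
    cols = F.conv2d(basis, weight, padding=pad)  # A applied to each basis vector
    return cols.reshape(n, -1).T                 # shape (c_out*H*W, n)

torch.manual_seed(0)
c_in, c_out, H, W = 2, 3, 4, 4
weight = torch.randn(c_out, c_in, 3, 3)
n = c_in * H * W
L = torch.randn(n, n)
sigma_in = L @ L.T                               # SPD input covariance
A = conv_as_matrix(weight, (c_in, H, W))
C = A @ sigma_in @ A.T                           # full output covariance
iu = torch.triu_indices(C.shape[0], C.shape[0])
C_half = C[iu[0], iu[1]]                         # upper triangle: ~half the storage
diag = torch.diagonal(C)                         # per-unit variances only
```

Storing `C_half` needs `m(m+1)/2` entries instead of `m²` (with `m = c_out*H*W`), and `diag` drops that to `m`, which matches the square-root reduction mentioned above.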