[Feature Request] Block matrix support #54
Comments
Thanks for the suggestion, @hughsalimbeni! @gpleiss, @jacobrgardner and I have talked in the past about expanding the library to a more general setup where operators are not assumed to be square. That would of course be a major redesign of the whole library, and so is presumably out of scope for what you're trying to achieve here. But adding your suggestion could be a step on the way to a more general setup, and could inform / be absorbed into a larger rewrite down the road. So I'm happy to help review a PR for this.
Looks like a great addition. The key question is what functions need to be implemented to make this a reality. From the library description, we must implement the minimal set of LinearOperator methods; I'm not sure what else makes sense beyond that.
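For concreteness, here is a minimal sketch of that subclassing pattern (the `DenseWrapperOperator` class is hypothetical and not from this thread), assuming the three methods the linear_operator documentation lists as the required minimum: `_matmul`, `_size`, and `_transpose_nonbatch`.

```python
import torch
from linear_operator.operators import LinearOperator


class DenseWrapperOperator(LinearOperator):
    """Toy operator wrapping a dense tensor, just to show the required hooks."""

    def __init__(self, tensor):
        super().__init__(tensor)
        self.tensor = tensor

    def _matmul(self, rhs):
        # How the operator acts on a right-hand side.
        return self.tensor @ rhs

    def _size(self):
        # Shape of the operator, as a torch.Size.
        return self.tensor.shape

    def _transpose_nonbatch(self):
        # Transpose over the last two (non-batch) dimensions.
        return DenseWrapperOperator(self.tensor.transpose(-1, -2))


op = DenseWrapperOperator(torch.randn(4, 4))
rhs = torch.randn(4, 2)
print((op @ rhs).shape)  # torch.Size([4, 2])
```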
🚀 Feature Request
Represent [TN, TM] tensors as TxT blocks of NxM lazy tensors. Block matrices are already supported, but the efficient representation only covers the case where there is block-diagonal structure over the T dimension.
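As a small illustration of what is supported today (a sketch assuming the documented behaviour of `BlockDiagLinearOperator` and `to_linear_operator`; the shapes are made up), a batch of T diagonal blocks can already be treated as one [TN, TN] operator, but there is no analogous structure for a full TxT grid of blocks:

```python
import torch
from linear_operator import to_linear_operator
from linear_operator.operators import BlockDiagLinearOperator

T, N = 3, 4
base = to_linear_operator(torch.randn(T, N, N))  # T diagonal blocks of size N x N
block_diag = BlockDiagLinearOperator(base)       # acts as a [T*N, T*N] operator
print(block_diag.shape)                          # torch.Size([12, 12])
```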
Motivation
Here is an example of a calculation that linear_operator currently cannot deal with. It turns up in some multi-output GP models, and it has a straightforward efficient implementation that exploits the block structure.
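The following is a hedged illustration of the kind of blockwise computation meant here (a sketch, not the exact calculation from this issue; `T`, `N`, `M` and the matrix–vector product are assumptions): with the operator stored as a TxT grid of NxM blocks, `K @ x` can be computed block by block without ever forming the dense [TN, TM] matrix.

```python
import torch

T, N, M = 3, 5, 4
blocks = [[torch.randn(N, M) for _ in range(T)] for _ in range(T)]  # block (i, j)
x = torch.randn(T * M)

x_blocks = x.split(M)  # T chunks of size M
y_blocks = [
    sum(blocks[i][j] @ x_blocks[j] for j in range(T))  # block i of the result
    for i in range(T)
]
y = torch.cat(y_blocks)

# Same result as materializing the dense [T*N, T*M] matrix first.
K_dense = torch.cat([torch.cat(row, dim=-1) for row in blocks], dim=-2)
assert torch.allclose(y, K_dense @ x, atol=1e-5)
```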
Currently, this calculation could be implemented inside linear_operator by calling `to_dense()` on the relevant operators. Removing the `to_dense()` gives an error, however.

Pitch
Add block linear operator class that can keep track of the [T, T] block structure, represented as T^2 lazy tensors of the same shape. Implement matrix multiplication between block matrices as the appropriate linear operators on the blocks.
As a workaround, I have written manual implementations of specific cases, such as the one above.
I'm willing to work on a PR for this.
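A rough sketch of what the pitched class could look like (the `BlockGridOperator` class and its methods are hypothetical, not part of linear_operator): it stores the TxT grid of lazy operators and implements block-matrix multiplication by combining blocks with the lazy `@` and `+` of LinearOperators, so nothing is densified until `to_dense()` is called.

```python
import torch
from functools import reduce
from operator import add

from linear_operator import to_linear_operator


class BlockGridOperator:
    """A T x T grid of linear operators; blocks[i][j] is block (i, j)."""

    def __init__(self, blocks):
        self.blocks = blocks
        self.num_blocks = len(blocks)

    def matmul(self, other):
        # (A @ B)[i][j] = sum_k A[i][k] @ B[k][j].  Both @ and + on
        # LinearOperators return lazy operators, so no block is densified here.
        new_blocks = [
            [
                reduce(add, (self.blocks[i][k] @ other.blocks[k][j]
                             for k in range(self.num_blocks)))
                for j in range(other.num_blocks)
            ]
            for i in range(self.num_blocks)
        ]
        return BlockGridOperator(new_blocks)

    def to_dense(self):
        rows = [torch.cat([blk.to_dense() for blk in row], dim=-1)
                for row in self.blocks]
        return torch.cat(rows, dim=-2)


# Usage: block-block matmul agrees with the dense computation.
T, N = 2, 3
A = BlockGridOperator(
    [[to_linear_operator(torch.randn(N, N)) for _ in range(T)] for _ in range(T)]
)
B = BlockGridOperator(
    [[to_linear_operator(torch.randn(N, N)) for _ in range(T)] for _ in range(T)]
)
C = A.matmul(B)
assert torch.allclose(C.to_dense(), A.to_dense() @ B.to_dense(), atol=1e-5)
```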
Additional context
None