🐛 Bug
When using a KeOps kernel in GPyTorch and making predictions (with data larger than settings.min_preconditioning_size), the pivoted Cholesky decomposition fails. This seems to be because covar_func returns a pykeops LazyTensor rather than a LinearOperator, which makes to_dense() fail.
To reproduce
model is a SingleTaskGP with train_inputs of size [100, 2000, 6].
We use a KeOps kernel as laid out in the GPyTorch tutorials.
# this fails
preds = model(X)  # X has shape [100, 2000, 6] in my case
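Since the full model definition is not shown, here is a minimal sketch of the setup (untested; the random stand-in data, the KeOps Matern kernel passed through SingleTaskGP's covar_module argument, and the CUDA placement are all assumptions):

# Minimal sketch of the setup described above (assumptions noted in the lead-in)
import torch
from botorch.models import SingleTaskGP
from gpytorch.kernels import ScaleKernel
from gpytorch.kernels.keops import MaternKernel

train_X = torch.rand(100, 2000, 6, dtype=torch.float64).cuda()  # stand-in data, assuming a GPU
train_Y = torch.rand(100, 2000, 1, dtype=torch.float64).cuda()

# KeOps kernel as in the GPyTorch KeOps tutorial, wrapped in a ScaleKernel
covar_module = ScaleKernel(MaternKernel(nu=2.5, ard_num_dims=6))
model = SingleTaskGP(train_X, train_Y, covar_module=covar_module).cuda()

model.eval()
with torch.no_grad():
    X = torch.rand(100, 2000, 6, dtype=torch.float64).cuda()
    preds = model(X)  # raises the TypeError in the stack trace below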
Stack trace/error message
File "/fsx/home_dirs/fegt/BOSS/boss/acquisition.py", line 137, in forward
preds = model(X)
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/gpytorch/models/exact_gp.py", line 333, in __call__
) = self.prediction_strategy.exact_prediction(full_mean, full_covar)
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/gpytorch/models/exact_prediction_strategies.py", line 289, in exact_prediction
self.exact_predictive_mean(test_mean, test_train_covar),
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/gpytorch/models/exact_prediction_strategies.py", line 306, in exact_predictive_mean
if len(self.mean_cache.shape) == 4:
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/gpytorch/utils/memoize.py", line 59, in g
return _add_to_cache(self, cache_name, method(self, *args, **kwargs), *args, kwargs_pkl=kwargs_pkl)
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/gpytorch/models/exact_prediction_strategies.py", line 256, in mean_cache
mean_cache = train_train_covar.evaluate_kernel().solve(train_labels_offset).squeeze(-1)
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/linear_operator/operators/_linear_operator.py", line 2334, in solve
return func.apply(self.representation_tree(), False, right_tensor, *self.representation())
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/linear_operator/functions/_solve.py", line 53, in forward
solves = _solve(linear_op, right_tensor)
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/linear_operator/functions/_solve.py", line 20, in _solve
preconditioner = linear_op.detach()._solve_preconditioner()
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/linear_operator/operators/_linear_operator.py", line 806, in _solve_preconditioner
base_precond, _, _ = self._preconditioner()
# Starting from here, we fail because we exceed settings.min_preconditioning_size
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/linear_operator/operators/added_diag_linear_operator.py", line 126, in _preconditioner
self._piv_chol_self = self._linear_op.pivoted_cholesky(rank=max_iter)
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/linear_operator/operators/_linear_operator.py", line 1965, in pivoted_cholesky
res, pivots = func(self.representation_tree(), rank, error_tol, *self.representation())
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/linear_operator/functions/_pivoted_cholesky.py", line 24, in forward
matrix_diag = matrix._approx_diagonal()
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/linear_operator/operators/constant_mul_linear_operator.py", line 74, in _approx_diagonal
res = self.base_linear_op._approx_diagonal()
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/linear_operator/operators/_linear_operator.py", line 492, in _approx_diagonal
return self._diagonal()
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/linear_operator/utils/memoize.py", line 59, in g
return _add_to_cache(self, cache_name, method(self, *args, **kwargs), *args, kwargs_pkl=kwargs_pkl)
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/linear_operator/operators/kernel_linear_operator.py", line 233, in _diagonal
diag_mat = to_dense(self.covar_func(x1, x2, **tensor_params, **self.nontensor_params))
File "/nfs_home/users/fegt/.conda/envs/botorch/lib/python3.10/site-packages/linear_operator/operators/_linear_operator.py", line 2987, in to_dense
raise TypeError("object of class {} cannot be made into a Tensor".format(obj.__class__.__name__))
TypeError: object of class LazyTensor cannot be made into a Tensor
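For reference, the final TypeError can be triggered directly by handing a pykeops LazyTensor to linear_operator's to_dense, which only accepts torch.Tensors and LinearOperators (a standalone sketch, independent of the GP model above):

# Standalone sketch: to_dense only handles torch.Tensor / LinearOperator inputs,
# so a symbolic pykeops LazyTensor falls through to the TypeError above.
import torch
from linear_operator import to_dense
from pykeops.torch import LazyTensor

x = torch.randn(1000, 6)
x_i = LazyTensor(x[:, None, :])  # symbolic "i" variable, shape (1000, 1, 6)
x_j = LazyTensor(x[None, :, :])  # symbolic "j" variable, shape (1, 1000, 6)
K = (-((x_i - x_j) ** 2).sum(-1)).exp()  # symbolic 1000 x 1000 RBF kernel

to_dense(K)  # TypeError: object of class LazyTensor cannot be made into a Tensor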
Expected Behavior
The pykeops LazyTensor should probably be cast to (or wrapped in) a KernelLinearOperator so that to_dense() can handle it.
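Until that happens, one possible workaround (an assumption on my part, not a confirmed fix) is to disable the preconditioner entirely so the pivoted Cholesky / _approx_diagonal path is never reached:

# Workaround sketch (assumption): with max_preconditioner_size set to 0,
# AddedDiagLinearOperator._preconditioner returns early and the failing
# pivoted Cholesky code path is skipped.
import gpytorch

with gpytorch.settings.max_preconditioner_size(0):
    preds = model(X)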
System information
Please complete the following information:
linear-operator==0.5.1
pykeops==2.1.2
gpytorch==1.11
torch==2.0.1