Different results during training and testing for an approximate GP model #2393
Unanswered · pengzhi1998 asked this question in Q&A
Hi, I'm running into an issue where I get different evaluation results during training and testing.
Here is the defined GPModel:
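(A minimal sketch of the model structure: it follows the standard ApproximateGP setup with a Cholesky variational distribution, but the mean, kernel, and inducing-point choices below are representative placeholders rather than the exact original code.)

```python
import gpytorch


class GPModel(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points):
        # Variational distribution and strategy for the approximate (SVGP) posterior
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution,
            learn_inducing_locations=True,
        )
        super().__init__(variational_strategy)
        # Mean and kernel are placeholders; the original model may use different modules
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
```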
I used:
model.load_state_dict(checkpoint['model_state_dict'])
to load the model and used preds = model(x_batch) for prediction, and found that all of the model's parameters are the same during training and testing (which means the model has been saved and loaded successfully).
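Roughly, the testing code looks like this (a simplified sketch; the checkpoint path, inducing_points, and x_batch are placeholders):

```python
import torch

# Rebuild the model and restore the trained parameters
model = GPModel(inducing_points)
checkpoint = torch.load('checkpoint.pth')
model.load_state_dict(checkpoint['model_state_dict'])

# Predict in evaluation mode
model.eval()
with torch.no_grad():
    preds = model(x_batch)  # same test batch as the one evaluated during training
```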
I also checked the input x in the forward function and the values of mean_x and covar_x.evaluate(), and they are all the same between the training and testing scenarios. But strangely, the outputs of forward are different. What might be the problem that leads to the different results? Would
gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
lead to such issues? But during training, I found this warning:
NumericalWarning: Negative variance values detected. This is likely due to numerical instabilities. Rounding negative variances up to 1e-10.
Would this be the reason for the problem? (Update: I checked this; when there is no negative-variance warning during training, the distribution's values still differ between training and testing, which is a really weird problem.) Or is it a normal phenomenon that the performance during testing is worse than the evaluations during training for a variational model (when using the same test data and both are in evaluation mode)?
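For reference, the evaluation during training is done roughly like this (again a sketch; x_batch is the same test batch as above):

```python
# Periodic evaluation inside the training loop (sketch)
model.eval()
with torch.no_grad():
    train_time_preds = model(x_batch)
model.train()  # switch back before continuing to train
```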
Looking forward to your reply, and thank you for your time and help!