Applying GPLVM to dynamic latent vectors with multivariate normal priors on latent variables but facing extremely negative log_prior in ELBO #2421
Unanswered
yahoochen97 asked this question in Q&A
Replies: 0
Hi, I saw an amazing tutorial on training Gaussian Process Latent Variable Models (GPLVM) with SVI and I would like to adapt it to the dynamic latent variable setting. For the purpose of demonstration, let's say there are $n$ respondents answering the same set of $m$ questions over $T$ time periods, whose responses are collected into a 3-d response matrix $y$ with shape $n \times m \times T$. Using the setup of the tutorial, this is equivalent to estimating a low-dimensional latent representation $X$ of shape $n \times T$ from high-dimensional observations of shape $n \times (m \cdot T)$. As the rows of $X$ are dependent over time (via a GP or something similar), I made an attempt at building a GPLVM with an i.i.d. multivariate normal prior on the latent vectors by modifying the `BGPLVM`, `MAPLatentVariable` and `VariationalLatentVariable` classes.
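Roughly, the modification I have in mind looks like the sketch below (the class name `MVNVariationalLatentVariable`, the temporal RBF prior, the lengthscale/jitter values and all shapes are just for illustration; I am also assuming `KLGaussianAddedLossTerm` accepts batched `MultivariateNormal`s, since it delegates to `torch.distributions.kl_divergence`):

```python
import torch
from gpytorch.models.gplvm.latent_variable import LatentVariable
from gpytorch.mlls import KLGaussianAddedLossTerm
from gpytorch.priors import MultivariateNormalPrior


def temporal_mvn_prior(n, T, lengthscale=0.5, jitter=1e-4):
    # One T-dimensional MVN prior per respondent (i.i.d. across the n rows),
    # with an RBF covariance over time so each latent trajectory is smooth.
    times = torch.linspace(0, 1, T)
    sq_dists = (times.unsqueeze(-1) - times.unsqueeze(0)) ** 2
    K = torch.exp(-0.5 * sq_dists / lengthscale**2) + jitter * torch.eye(T)
    return MultivariateNormalPrior(
        torch.zeros(n, T), covariance_matrix=K.expand(n, T, T)
    )


class MVNVariationalLatentVariable(LatentVariable):
    # Variant of VariationalLatentVariable where q(x_i) is also a full MVN,
    # so KL(q || p) between batched MultivariateNormals is defined in torch.
    def __init__(self, n, data_dim, T, X_init, prior_x):
        super().__init__(n, T)  # latent_dim == T here
        self.data_dim = data_dim
        self.prior_x = prior_x
        self.q_mu = torch.nn.Parameter(X_init)  # shape (n, T)
        # Unconstrained parameter for the Cholesky factor of each q covariance;
        # zeros give an identity scale_tril at initialization.
        self.q_chol = torch.nn.Parameter(torch.zeros(n, T, T))
        self.register_added_loss_term("x_kl")

    def forward(self):
        # Valid Cholesky factor: strictly lower triangle + positive diagonal.
        L = torch.tril(self.q_chol, diagonal=-1) + torch.diag_embed(
            torch.diagonal(self.q_chol, dim1=-2, dim2=-1).exp()
        )
        q_x = torch.distributions.MultivariateNormal(self.q_mu, scale_tril=L)
        x_kl = KLGaussianAddedLossTerm(q_x, self.prior_x, self.n, self.data_dim)
        self.update_added_loss_term("x_kl", x_kl)
        return q_x.rsample()
```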
My code works fine when $T$ is small. However, the ELBO objective went crazy when $T$ became slightly larger (I tried $T=6$). `log_likelihood`, `kl_divergence` and `added_loss` in `_ApproximateMarginalLogLikelihood.forward()` are all fine, but `log_prior` was extremely negative, possibly because the multivariate Gaussian density becomes extremely concentrated. So I wonder whether there is any workaround if I persist on using a full multivariate prior on each $T \times 1$ latent vector, or is there any better approach for what I am doing besides modifying `BGPLVM`? Any comment is appreciated.
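To give a sense of the scale: the standalone snippet below (independent of my model code, with a made-up lengthscale and jitter) reproduces the effect I think I am seeing. A smooth temporal covariance is nearly low-rank, so the quadratic form $x^\top K^{-1} x$, and hence the MVN log-density, explodes for any draw that is not itself smooth, and it gets worse quickly as $T$ grows:

```python
import torch

for T in [2, 4, 6, 8]:
    times = torch.linspace(0, 1, T)
    sq_dists = (times.unsqueeze(-1) - times.unsqueeze(0)) ** 2
    # Smooth RBF covariance over time with a small jitter on the diagonal
    K = torch.exp(-0.5 * sq_dists / 0.5**2) + 1e-6 * torch.eye(T)
    prior = torch.distributions.MultivariateNormal(torch.zeros(T), K)
    x = torch.randn(T)  # a "rough" latent vector, off the prior's smooth slab
    print(f"T={T}: logdet(K)={K.logdet().item():.1f}, "
          f"log_prob(x)={prior.log_prob(x).item():.1f}")
```

Increasing the jitter keeps `log_prior` at a reasonable magnitude, but that seems to defeat the point of the temporal coupling, hence my question about a better approach.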