Abstract / Description of output
Conditional sampling of variational autoencoders (VAEs) is needed in various applications, such as missing data imputation, but is computationally intractable. A principled choice for asymptotically exact conditional sampling is Metropolis-within-Gibbs (MWG). However, we observe that the tendency of VAEs to learn a structured latent space, a commonly desired property, can cause the MWG sampler to get “stuck” far from the target distribution. This paper mitigates the limitations of MWG: we systematically outline the pitfalls in the context of VAEs, propose two original methods that address these pitfalls, and demonstrate improved performance of the proposed methods on a set of sampling tasks.
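To make the MWG scheme referenced above concrete, here is a minimal sketch of Metropolis-within-Gibbs conditional sampling for missing-data imputation. A toy linear-Gaussian model stands in for a trained VAE; all names (`encode`, `decode`, `sigma_q`, `mwg_impute`, and the parameter values) are illustrative assumptions, not the paper's implementation. The sampler alternates a Metropolis step on the latent `z`, using the encoder as the proposal, with an exact Gibbs step on the missing entries of `x`.

```python
# Sketch of MWG conditional sampling for a VAE (hypothetical toy model).
import numpy as np

rng = np.random.default_rng(0)
D, K = 4, 2                      # data and latent dimensionality
W = rng.normal(size=(D, K))      # decoder weights (stand-in for a trained net)
sigma_x, sigma_q = 0.5, 0.8      # decoder / encoder noise scales (assumed)

def decode(z):                   # mean of p(x | z) = N(W z, sigma_x^2 I)
    return W @ z

def encode(x):                   # mean of q(z | x); arbitrary fixed map for illustration
    return W.T @ x / (1.0 + K)

def log_gauss(v, mean, sigma):   # isotropic Gaussian log-density, up to constants
    return -0.5 * np.sum(((v - mean) / sigma) ** 2)

def log_joint(x, z):             # log p(z) + log p(x | z), up to constants
    return log_gauss(z, 0.0, 1.0) + log_gauss(x, decode(z), sigma_x)

def log_q(z, x):                 # log q(z | x), up to constants
    return log_gauss(z, encode(x), sigma_q)

def mwg_impute(x_obs, mis_mask, n_iter=500):
    """Sample x_mis | x_obs by alternating a Metropolis step on z
    (encoder as independence proposal) and a Gibbs step on x_mis."""
    x = x_obs.copy()
    x[mis_mask] = 0.0                            # crude initialisation
    z = encode(x) + sigma_q * rng.normal(size=K)
    for _ in range(n_iter):
        # Metropolis step on z, proposing from q(z | x)
        z_prop = encode(x) + sigma_q * rng.normal(size=K)
        log_alpha = (log_joint(x, z_prop) - log_joint(x, z)
                     + log_q(z, x) - log_q(z_prop, x))
        if np.log(rng.uniform()) < log_alpha:
            z = z_prop
        # Gibbs step on the missing entries, from p(x_mis | z)
        x[mis_mask] = decode(z)[mis_mask] + sigma_x * rng.normal(size=mis_mask.sum())
    return x

x_true = W @ rng.normal(size=K) + sigma_x * rng.normal(size=D)
mask = np.array([False, False, True, True])      # last two entries missing
print(mwg_impute(x_true, mask))
```

In a real VAE, the amortised encoder that serves as the proposal can match the true conditional posterior poorly when the chain starts far from the data manifold, which is the "stuck" failure mode the abstract describes.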
Field | Value
---|---
Original language | English
Pages (from-to) | 1-35
Number of pages | 35
Journal | Transactions on Machine Learning Research
Volume | 2023
Issue number | 11
Publication status | Published - 8 Nov 2023