On the Latent Space of Flow-based Models

Mingtian Zhang, Yitong Sun, Steven McDonagh, Chen Zhang

Research output: Contribution to conference › Paper › peer-review

Abstract / Description of output

Flow-based generative models typically define a latent space with dimensionality identical to the observational space. In many problems, however, the data do not populate the full ambient space in which they natively reside, but rather inhabit a lower-dimensional manifold. In such scenarios, flow-based models cannot represent the data structure exactly, as their density will always have support off the data manifold, potentially degrading model performance. In addition, the requirement of equal latent and data space dimensionality can unnecessarily increase model complexity for contemporary flow models. Towards addressing these problems, we propose to learn a manifold prior that benefits both sample generation and representation quality. An auxiliary product of our approach is that we are able to identify the intrinsic dimension of the data distribution.
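To make the manifold setting concrete, the sketch below builds a toy dataset whose points live (up to small noise) on a 1-D manifold embedded in 3-D ambient space, and estimates the intrinsic dimension from the PCA eigenvalue spectrum. This is an illustrative assumption-laden stand-in, not the manifold-prior method proposed in the paper; the threshold `1e-3` is an arbitrary choice for this example.

```python
import numpy as np

# Toy data: 3-D ambient space, but samples lie on a 1-D manifold
# (a line through the origin) plus a small amount of noise. This
# mimics the setting described in the abstract.
rng = np.random.default_rng(0)
t = rng.normal(size=(1000, 1))
data = t @ np.array([[2.0, -1.0, 0.5]]) + 1e-3 * rng.normal(size=(1000, 3))

# A crude intrinsic-dimension estimate via the PCA eigenvalue spectrum:
# count the directions that carry a non-negligible share of the variance.
# (Illustrative only -- NOT the learned manifold prior from the paper.)
cov = np.cov(data, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
explained = eigvals / eigvals.sum()
intrinsic_dim = int(np.sum(explained > 1e-3))
print(intrinsic_dim)  # the line's direction dominates the variance
```

A full-dimensional flow fit to such data would still place density along all three ambient directions, which is the mismatch the paper's manifold prior is designed to remove.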
Original language: Undefined/Unknown
Publication status: Published - 28 Sept 2020
Event: Ninth International Conference on Learning Representations 2021 - Virtual Conference
Duration: 4 May 2021 - 7 May 2021


Conference: Ninth International Conference on Learning Representations 2021
Abbreviated title: ICLR 2021
City: Virtual Conference