Abstract
The latent space of normalizing flows must have the same dimensionality as their output space. This constraint presents a problem if we want to learn low-dimensional, semantically meaningful representations. Recent work has provided compact representations by fitting flows constrained to manifolds, but has not defined a density off that manifold. In this work we consider flows with full support in data space, but with ordered latent variables. As in PCA, the leading latent dimensions define a sequence of manifolds that lie close to the data. We note a trade-off between the flow likelihood and the quality of the ordering, depending on the parameterization of the flow.
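As a rough illustration of the PCA analogy (not the paper's method), consider the linear special case: when the flow is an orthogonal linear map, the latent dimensions are ordered by explained variance, and zeroing the trailing latent dimensions before inverting projects each point onto the leading-k principal subspace, i.e. onto one of the nested manifolds close to the data. A minimal NumPy sketch, with all names hypothetical:

```python
import numpy as np

# Toy data: 500 points in 5-D with decaying variance per direction.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) * np.array([3.0, 2.0, 1.0, 0.5, 0.1])

# Linear "flow": rotate into the PCA basis. For this orthogonal map the
# latent dimensions come out ordered by explained variance, as in PCA.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)

def forward(x):
    """Map data to ordered latents z = V^T (x - mean)."""
    return (x - mean) @ Vt.T

def inverse(z):
    """Map latents back to data space."""
    return z @ Vt + mean

def project_onto_leading(x, k):
    """Keep the leading k latent dims, zero the rest, and invert.
    The result lies on a k-dimensional manifold near the data."""
    z = forward(x)
    z[..., k:] = 0.0
    return inverse(z)

# Reconstruction error shrinks as k grows, tracing the nested manifolds.
for k in range(1, 6):
    err = np.mean((X - project_onto_leading(X, k)) ** 2)
    print(f"k={k}: mean squared reconstruction error {err:.4f}")
```

With a nonlinear flow the same truncate-then-invert construction traces curved manifolds rather than linear subspaces, which is where the likelihood/ordering trade-off noted in the abstract arises.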
| Original language | English |
|---|---|
| Number of pages | 6 |
| Publication status | Published - 18 Jul 2020 |
| Event | ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models 2020 (virtual), 13 Jul 2020 → 13 Jul 2020, https://invertibleworkshop.github.io/index.html |
Workshop
| Workshop | ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models 2020 |
|---|---|
| Abbreviated title | INNF+ 2020 |
| City | Virtual workshop |
| Period | 13/07/20 → 13/07/20 |
| Internet address | https://invertibleworkshop.github.io/index.html |