We develop a generalised notion of disentanglement in variational auto-encoders (VAEs) by casting it as a decomposition of the latent representation, characterised by i) enforcing an appropriate level of overlap in the latent encodings of the data, and ii) regularising the average encoding towards a desired structure, represented through the prior. We motivate this by showing that a) the β-VAE disentangles purely through regularisation of the overlap in latent encodings, and b) disentanglement, viewed as independence between latents, can be cast as regularisation of the aggregate posterior towards a prior with specific characteristics. We validate this characterisation by showing that simple manipulations of these factors, such as using rotationally variant priors, can improve disentanglement, and we discuss how this characterisation provides a more general framework for incorporating notions of decomposition beyond independence between the latents.
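The β-VAE mentioned above scales the KL term of the standard VAE objective by a factor β; the following is a minimal sketch (not code from the paper) of that objective for a diagonal-Gaussian encoder, where the closed-form KL to a standard-normal prior is what controls the overlap of latent encodings. The function name and the default β are illustrative.

```python
import numpy as np

def beta_vae_loss(recon_log_lik, mu, log_var, beta=4.0):
    """Negative β-VAE objective: -(E_q[log p(x|z)] - β · KL(q(z|x) || N(0, I))).

    For a diagonal-Gaussian encoder q(z|x) = N(mu, diag(exp(log_var))),
    the KL to a standard-normal prior has the closed form below.
    Increasing beta > 1 strengthens the pull towards the prior, which
    increases the overlap between the latent encodings of the data.
    """
    # Closed-form KL between N(mu, diag(exp(log_var))) and N(0, I),
    # summed over the latent dimensions.
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)
    return -(recon_log_lik - beta * kl)
```

When the encoder matches the prior exactly (mu = 0, log_var = 0), the KL term vanishes and the loss reduces to the negative reconstruction log-likelihood.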
Number of pages: 11
Publication status: Published - 7 Dec 2018
Event: Third workshop on Bayesian Deep Learning 2018 (NIPS 2018 Workshop), Montréal, Canada, 7 Dec 2018