Disentangling Disentanglement

Emile Mathieu, Tom Rainforth, N. Siddharth, Yee Whye Teh

Research output: Contribution to conference › Paper › peer-review

Abstract / Description of output

We develop a generalised notion of disentanglement in variational auto-encoders (VAEs) by casting it as a decomposition of the latent representation, characterised by i) enforcing an appropriate level of overlap in the latent encodings of the data, and ii) regularisation of the average encoding to a desired structure, represented through the prior. We motivate this by showing that a) the β-VAE disentangles purely through regularisation of the overlap in latent encodings, and b) disentanglement, as independence between latents, can be cast as a regularisation of the aggregate posterior to a prior with specific characteristics. We validate this characterisation by showing that simple manipulations of these factors, such as using rotationally variant priors, can help improve disentanglement, and discuss how this characterisation provides a more general framework to incorporate notions of decomposition beyond just independence between the latents.
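As context for the β-VAE baseline the abstract builds on, here is a minimal numpy sketch of a β-VAE objective: a reconstruction term plus a β-weighted KL divergence between the Gaussian encoding and a standard-normal prior. Function names and the squared-error reconstruction term are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ),
    # summed over the latent dimensions.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Negative ELBO with the KL term scaled by beta; beta > 1 strengthens
    # the regularisation of overlap in the latent encodings.
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    return np.mean(recon + beta * gaussian_kl(mu, logvar))
```

With a perfect reconstruction and encodings matching the prior (zero mean, unit variance), both terms vanish and the loss is zero; increasing β penalises encodings that deviate from the prior more heavily.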
Original language: English
Number of pages: 11
Publication status: Published - 7 Dec 2018
Event: Third workshop on Bayesian Deep Learning 2018 - Montréal, Canada
Duration: 7 Dec 2018 → 7 Dec 2018


Conference: Third workshop on Bayesian Deep Learning 2018
Abbreviated title: NIPS 2018 Workshop


