Inducing Interpretable Representations with Variational Autoencoders

N. Siddharth, Brooks Paige, Alban Desmaison, Jan-Willem van de Meent, Frank Wood, Noah D. Goodman, Pushmeet Kohli, Philip H.S. Torr

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

We develop a framework for incorporating structured graphical models into the encoders of variational autoencoders (VAEs), allowing us to induce interpretable representations through approximate variational inference. This lets us both perform reasoning (e.g. classification) under the structural constraints of a given graphical model, and use deep generative models to handle messy, high-dimensional domains where it is often difficult to model all the variation explicitly. Learning in this framework is carried out end-to-end with a variational objective, and applies to both unsupervised and semi-supervised schemes.
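To make the variational objective concrete, the following is a minimal NumPy sketch of the kind of semi-supervised ELBO such a framework optimises, in the style of Kingma et al.'s M2 model: a continuous Gaussian latent with an analytic KL term, a discrete label that is either observed (labelled case) or marginalised out under the encoder's label posterior (unlabelled case). All function names and the Bernoulli likelihood choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # Analytic KL( N(mu, diag(exp(logvar))) || N(0, I) ) for a diagonal Gaussian
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def bernoulli_loglik(x, probs, eps=1e-7):
    # Reconstruction term: log p(x | z, y) under a factorised Bernoulli decoder
    p = np.clip(probs, eps, 1.0 - eps)
    return np.sum(x * np.log(p) + (1.0 - x) * np.log(1.0 - p))

def labelled_elbo(x, recon_probs, mu, logvar, log_prior_y):
    # ELBO when the discrete label y is observed:
    # E_q[log p(x|z,y)] + log p(y) - KL(q(z|x,y) || p(z))
    # (single-sample estimate; mu/logvar parameterise q(z|x,y))
    return bernoulli_loglik(x, recon_probs) + log_prior_y - gaussian_kl(mu, logvar)

def unlabelled_elbo(x, recon_per_y, mus, logvars, q_y, log_prior_y, eps=1e-7):
    # ELBO when y is unobserved: marginalise over the label posterior q(y|x),
    # sum_y q(y|x) * labelled_elbo(x, y) + H[q(y|x)]
    q = np.clip(q_y, eps, 1.0)
    per_y = np.array([
        labelled_elbo(x, recon_per_y[k], mus[k], logvars[k], log_prior_y[k])
        for k in range(len(q))
    ])
    return float(np.dot(q, per_y) - np.dot(q, np.log(q)))
```

In a full model the decoder probabilities and encoder parameters would come from neural networks trained end-to-end; here they are plain arrays so the objective itself is easy to inspect.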
Original language: English
Title of host publication: Interpretable Machine Learning for Complex Systems
Subtitle of host publication: NIPS 2016 workshop proceedings
Number of pages: 6
Publication status: Published - 8 Dec 2016
Event: Interpretable Machine Learning for Complex Systems Workshop @ NeurIPS 2016 - Barcelona, Spain
Duration: 8 Dec 2016 - 8 Dec 2016



