Inducing Interpretable Representations with Variational Autoencoders

N. Siddharth, Brooks Paige, Alban Desmaison, Jan-Willem van de Meent, Frank Wood, Noah D. Goodman, Pushmeet Kohli, Philip H.S. Torr

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We develop a framework for incorporating structured graphical models in the encoders of variational autoencoders (VAEs) that allows us to induce interpretable representations through approximate variational inference. This enables us both to perform reasoning (e.g. classification) under the structural constraints of a given graphical model and to use deep generative models to handle messy, high-dimensional domains where it is often difficult to model all the variation. Learning in this framework is carried out end-to-end with a variational objective and applies to both unsupervised and semi-supervised schemes.
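To make the idea concrete, the following is a minimal PyTorch sketch of a VAE whose encoder factorises the latent code into an interpretable discrete part y (e.g. a class label) and a continuous style part z, trained end-to-end with a variational objective and optionally supervised when labels are observed. The layer sizes, the Gumbel-softmax relaxation of y, and all names here are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch only: a VAE with a structured (discrete + continuous) latent,
# trained with a variational objective; assumed MNIST-like sizes, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

X_DIM, Y_DIM, Z_DIM, H_DIM = 784, 10, 20, 256  # assumed data/latent dimensions


class StructuredVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(X_DIM, H_DIM), nn.ReLU())
        self.enc_y = nn.Linear(H_DIM, Y_DIM)                 # q(y | x): categorical logits
        self.enc_z = nn.Linear(H_DIM + Y_DIM, 2 * Z_DIM)     # q(z | x, y): Gaussian params
        self.dec = nn.Sequential(nn.Linear(Y_DIM + Z_DIM, H_DIM), nn.ReLU(),
                                 nn.Linear(H_DIM, X_DIM))    # p(x | y, z): Bernoulli logits

    def forward(self, x, y=None, tau=0.67):
        h = self.enc(x)
        logits_y = self.enc_y(h)
        if y is None:  # unsupervised case: relax the discrete y with Gumbel-softmax
            y = F.gumbel_softmax(logits_y, tau=tau)
        mu, logvar = self.enc_z(torch.cat([h, y], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterised sample of z
        x_logits = self.dec(torch.cat([y, z], dim=-1))
        return x_logits, logits_y, mu, logvar


def neg_elbo(model, x, y=None):
    """Negative ELBO; adds a supervised term for y when a (one-hot) label is observed."""
    x_logits, logits_y, mu, logvar = model(x, y)
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction='sum')
    kl_z = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    log_qy = F.log_softmax(logits_y, dim=-1)
    if y is None:
        # KL(q(y|x) || uniform prior) for the relaxed discrete latent
        kl_y = torch.sum(log_qy.exp() * (log_qy + torch.log(torch.tensor(float(Y_DIM)))))
    else:
        kl_y = -torch.sum(y * log_qy)  # classification term for observed labels
    return recon + kl_z + kl_y


# usage sketch on random data
model = StructuredVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, X_DIM)
loss = neg_elbo(model, x)      # unsupervised batch; pass one-hot y for a labelled batch
loss.backward()
opt.step()
```

The design choice this sketch illustrates is the one the abstract describes: the structured part of the latent (y) can be reasoned over directly, e.g. used as a classifier output, while the unstructured part (z) absorbs the remaining variation in the data.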
Original language: English
Title of host publication: Interpretable Machine Learning for Complex Systems
Subtitle of host publication: NIPS 2016 workshop proceedings
Number of pages: 6
Publication status: Published - 8 Dec 2016
Event: Interpretable Machine Learning for Complex Systems Workshop @ NeurIPS 2016 - Barcelona, Spain
Duration: 8 Dec 2016 - 8 Dec 2016
https://nips.cc/Conferences/2016/Schedule?showEvent=6238

Workshop

Workshop: Interpretable Machine Learning for Complex Systems Workshop
Country: Spain
City: Barcelona
Period: 8/12/16 - 8/12/16
Internet address: https://nips.cc/Conferences/2016/Schedule?showEvent=6238
