Edinburgh Research Explorer

Inverting Supervised Representations with Autoregressive Neural Density Models

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Open Access permissions: Open

Documents

http://proceedings.mlr.press/v89/nash19a.html
Original language: English
Title of host publication: Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
Editors: Neil Lawrence, Mark Reid
Publisher: PMLR
Number of pages: 10
Volume: 89
Publication status: E-pub ahead of print - 18 Apr 2019
Event: 22nd International Conference on Artificial Intelligence and Statistics - Naha, Japan
Duration: 16 Apr 2019 – 18 Apr 2019
https://www.aistats.org/

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR

Conference

Conference: 22nd International Conference on Artificial Intelligence and Statistics
Abbreviated title: AISTATS 2019
Country: Japan
City: Naha
Period: 16/04/19 – 18/04/19
Internet address: https://www.aistats.org/

Abstract

We present a method for feature interpretation that makes use of recent advances in autoregressive density estimation models to invert model representations. We train generative inversion models to express a distribution over input features conditioned on intermediate model representations. Insights into the invariances learned by supervised models can be gained by viewing samples from these inversion models. In addition, we can use these inversion models to estimate the mutual information between a model's inputs and its intermediate representations, thus quantifying the amount of information preserved by the network at different stages. Using this method, we examine the types of information preserved at different layers of convolutional neural networks, and explore the invariances induced by different architectural choices. Finally, we show that the mutual information between inputs and network layers initially increases and then decreases over the course of training, supporting recent work by Shwartz-Ziv and Tishby (2017) on the information bottleneck theory of deep learning.
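As a rough illustration of the method described above, the following sketch shows how a conditional autoregressive inversion model q(x | h) can be trained by maximum likelihood, and how its average negative log-likelihood feeds a mutual-information estimate via I(X; H) = H(X) - H(X | H). This is a minimal stand-in, not the paper's implementation: the paper works with convolutional autoregressive density models over images, whereas this sketch uses a one-hidden-layer MADE-style masked network over binarized, flattened inputs, and all names and dimensions here (ConditionalMADE, d_in, d_rep, the toy data) are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' code): a conditional
# autoregressive inversion model q(x | h) over binarized inputs, plus a
# Monte Carlo estimate of E[-log q(x | h)], an upper bound on H(X | H).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Linear):
    """Linear layer with a fixed binary mask on its weights, enforcing
    the autoregressive ordering over input dimensions."""
    def __init__(self, in_features, out_features, mask):
        super().__init__(in_features, out_features)
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.linear(x, self.weight * self.mask, self.bias)

class ConditionalMADE(nn.Module):
    """q(x | h): one-hidden-layer MADE over binary x, conditioned on a
    representation h (the conditioning path is unmasked)."""
    def __init__(self, d_in=784, d_rep=128, d_hidden=512):
        super().__init__()
        deg_in = torch.arange(1, d_in + 1)              # input degrees 1..D
        deg_hid = torch.randint(1, d_in, (d_hidden,))   # hidden degrees 1..D-1
        m1 = (deg_in.unsqueeze(0) <= deg_hid.unsqueeze(1)).float()  # (hidden, D)
        m2 = (deg_hid.unsqueeze(0) < deg_in.unsqueeze(1)).float()   # (D, hidden)
        self.inp = MaskedLinear(d_in, d_hidden, m1)
        self.cond = nn.Linear(d_rep, d_hidden)          # h may reach every unit
        self.out = MaskedLinear(d_hidden, d_in, m2)

    def log_prob(self, x, h):
        """Returns log q(x | h) in nats, summed over input dimensions."""
        logits = self.out(torch.relu(self.inp(x) + self.cond(h)))
        return -F.binary_cross_entropy_with_logits(
            logits, x, reduction="none").sum(dim=-1)

def avg_nll(inverter, pairs):
    """E[-log q(x | h)] over (input, representation) batches: an upper
    bound on H(X | H); with H(X) fixed, lower values mean higher I(X; H)."""
    total, n = 0.0, 0
    with torch.no_grad():
        for x, h in pairs:
            total += -inverter.log_prob(x, h).sum().item()
            n += x.size(0)
    return total / n

# Toy usage with random stand-ins for images and layer activations.
model = ConditionalMADE()
x = torch.rand(32, 784).bernoulli()   # binarized, flattened "images"
h = torch.randn(32, 128)              # hypothetical intermediate features
loss = -model.log_prob(x, h).mean()   # maximum-likelihood training objective
```

Because E[-log q(x | h)] upper-bounds the true conditional entropy H(X | H), substituting it into I(X; H) = H(X) - H(X | H) yields a lower bound on the mutual information whenever H(X) is known or fixed, so the same inversion models that generate samples also quantify how much input information each layer preserves.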

ID: 80669661