A Framework for the Quantitative Evaluation of Disentangled Representations

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Recent AI research has emphasised the importance of learning disentangled representations of the explanatory factors behind data. Despite the growing interest in models which can learn such representations, visual inspection remains the standard evaluation method. While various desiderata have been implied in recent definitions, it is currently unclear what exactly makes one disentangled representation better than another. In this work we propose a framework for the quantitative evaluation of disentangled representations when the ground-truth latent structure is available. Three criteria are explicitly defined and quantified to elucidate the quality of learnt representations and thus compare models on an equal basis. To illustrate the appropriateness of the framework, we use it to quantitatively compare the representations learnt by recent state-of-the-art models.
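The abstract does not spell out how the three criteria are computed, so the following Python sketch only illustrates one plausible way to quantify a "one code dimension per factor" notion of disentanglement when ground-truth factors are available: fit a regressor from the learned codes to each factor, collect the resulting feature-importance matrix, and score each code dimension by how concentrated its importance is on a single factor. The function names, the choice of GradientBoostingRegressor, and the entropy-based score are illustrative assumptions, not the paper's definitions.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def importance_matrix(codes, factors):
    # codes:   (n_samples, n_codes) array of learned representations
    # factors: (n_samples, n_factors) array of ground-truth factors (n_factors >= 2 assumed)
    # Fit one regressor per factor and stack its per-code feature importances into R.
    n_codes, n_factors = codes.shape[1], factors.shape[1]
    R = np.zeros((n_codes, n_factors))
    for j in range(n_factors):
        reg = GradientBoostingRegressor().fit(codes, factors[:, j])
        R[:, j] = reg.feature_importances_
    return R

def disentanglement_score(R, eps=1e-11):
    # Normalise each code dimension's importances into a distribution over factors,
    # score it by 1 minus its entropy (log base = number of factors), and weight
    # code dimensions by their share of the total importance.
    P = R / (R.sum(axis=1, keepdims=True) + eps)
    H = -(P * np.log(P + eps)).sum(axis=1) / np.log(R.shape[1])
    rho = R.sum(axis=1) / (R.sum() + eps)
    return float(((1.0 - H) * rho).sum())

# Example usage (hypothetical): codes from an encoder, factors from the data generator.
# R = importance_matrix(codes, factors); print(disentanglement_score(R))

With such an importance matrix in hand, analogous summaries over its columns (how many code dimensions each factor depends on) and held-out prediction error would capture complementary aspects of representation quality; these are only suggestive counterparts, not the criteria as defined in the paper.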
Original language: English
Title of host publication: Sixth International Conference on Learning Representations (ICLR 2018)
Number of pages: 15
Publication status: E-pub ahead of print - 3 May 2018
Event: 6th International Conference on Learning Representations - Vancouver, Canada
Duration: 30 Apr 2018 to 3 May 2018
https://iclr.cc/Conferences/2018

Conference

Conference: 6th International Conference on Learning Representations
Abbreviated title: ICLR 2018
Country/Territory: Canada
City: Vancouver
Period: 30/04/18 to 3/05/18
Internet address: https://iclr.cc/Conferences/2018
