Abstract
Recent AI research has emphasised the importance of learning disentangled representations of the explanatory factors behind data. Despite the growing interest in models which can learn such representations, visual inspection remains the standard evaluation metric. While various desiderata have been implied in recent definitions, it is currently unclear what exactly makes one disentangled representation better than another. In this work we propose a framework for the quantitative evaluation of disentangled representations when the ground-truth latent structure is available. Three criteria are explicitly defined and quantified to elucidate the quality of learnt representations and thus compare models on an equal basis. To illustrate the appropriateness of the framework, we employ it to quantitatively compare the representations learnt by recent state-of-the-art models.
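The abstract does not spell out the three criteria, but the kind of per-dimension score such a framework quantifies can be sketched as follows. This is an illustrative assumption, not the authors' exact definition: given an importance matrix `R` (codes × factors, e.g. from feature importances of a regressor predicting each factor from the code), a code dimension is scored as disentangled when its importance concentrates on a single factor.

```python
import numpy as np

def disentanglement_scores(R):
    """Per-code disentanglement from a non-negative importance matrix R
    of shape (num_codes, num_factors).

    Each row holds the relative importance of one code dimension for
    predicting each ground-truth factor. A code that predicts exactly
    one factor scores 1; one spread evenly over all K factors scores ~0.
    (Illustrative sketch only; the paper defines its own criteria.)
    """
    P = R / R.sum(axis=1, keepdims=True)   # normalise each row to a distribution
    K = R.shape[1]
    # Entropy of each row, normalised to [0, 1] by log(K)
    entropy = -(P * np.log(P + 1e-12)).sum(axis=1) / np.log(K)
    return 1.0 - entropy                   # 1 = fully disentangled

# A perfectly disentangled code: each dimension matters for one factor only.
R_perfect = np.eye(3)
# A fully entangled code: importance spread evenly across factors.
R_entangled = np.ones((3, 3))
```

On `R_perfect` every score is 1, while on `R_entangled` every score is near 0, matching the intuition that a good representation aligns each code dimension with a single explanatory factor.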
| Original language | English |
| --- | --- |
| Title of host publication | Sixth International Conference on Learning Representations (ICLR 2018) |
| Number of pages | 15 |
| Publication status | E-pub ahead of print - 3 May 2018 |
| Event | 6th International Conference on Learning Representations, Vancouver, Canada, 30 Apr 2018 → 3 May 2018, https://iclr.cc/Conferences/2018 |
Conference

| Conference | 6th International Conference on Learning Representations |
| --- | --- |
| Abbreviated title | ICLR 2018 |
| Country/Territory | Canada |
| City | Vancouver |
| Period | 30/04/18 → 3/05/18 |
| Internet address | https://iclr.cc/Conferences/2018 |