Learning Generative Models from Classifier Uncertainties

N. Siddharth, Brooks Paige

Research output: Contribution to conference › Paper › peer-review

Abstract / Description of output

Bayesian models for classification provide a measure of epistemic uncertainty over model parameters. Using the mutual information (MI) between the predictive distribution and the model parameters, one can construct a heuristic approximation to the density of the data distribution, with practical utility ranging from active learning to the detection of adversarial examples. Here we ask to what extent classifier uncertainties can be used as a signal for learning or refining generative models. Our approach is simple: given training data and a generative model, we construct a density-ratio estimator using 1D empirical distributions of the MI from a pre-trained classifier. For a given generative model, this can be used to define Monte-Carlo sampling algorithms targeting the true data density. In a more challenging image domain, we use the estimator to define a novel data-augmentation scheme for fine-tuning variational autoencoders (VAEs), improving the quality of generations.
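The MI quantity the abstract builds on can be estimated from Monte-Carlo samples of a Bayesian classifier's predictive distribution (e.g. dropout or posterior samples): it is the entropy of the mean prediction minus the mean per-sample entropy. A minimal sketch of that estimator, assuming such samples are available (the function name and array shapes below are illustrative, not taken from the paper):

```python
import numpy as np

def predictive_mutual_information(probs):
    """MI between predictions and model parameters (BALD-style).

    probs: array of shape (S, N, C) holding S Monte-Carlo samples of
    class probabilities for N inputs and C classes.
    Returns an array of shape (N,): the MI estimate per input.
    """
    eps = 1e-12  # guard against log(0)
    mean_p = probs.mean(axis=0)  # (N, C) averaged predictive distribution
    # Entropy of the mean prediction: H[E[p]]
    entropy_of_mean = -(mean_p * np.log(mean_p + eps)).sum(axis=-1)
    # Mean entropy of each sample's prediction: E[H[p]]
    mean_entropy = -(probs * np.log(probs + eps)).sum(axis=-1).mean(axis=0)
    return entropy_of_mean - mean_entropy

# All samples agree -> MI near 0 (uncertainty is aleatoric, not epistemic)
confident = np.tile([[0.9, 0.1]], (5, 1, 1))
# Samples disagree maximally -> MI near log 2 (epistemic uncertainty)
disagree = np.stack([np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])])
```

High-MI points are ones where parameter samples disagree, which is the signal the paper's density-ratio estimator builds its 1D empirical distributions from.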
Original language: English
Number of pages: 9
Publication status: Published - 17 Jul 2020
Event: ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning - Virtual workshop
Duration: 17 Jul 2020 → 17 Jul 2020
https://sites.google.com/view/udlworkshop2020/home?authuser=0

Workshop

Workshop: ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning
Abbreviated title: ICML UDL 2020
City: Virtual workshop
Period: 17/07/20 → 17/07/20

