Bayesian models for classification provide a measure of epistemic uncertainty over model parameters. Using the mutual information (MI) between the predictive distribution and the model parameters, one can construct a heuristic approximation to the density of the data distribution, with practical utility ranging from active learning to the detection of adversarial examples. Here we ask to what extent classifier uncertainties can be used as a signal for learning or refining generative models. Our approach is simple: given training data and a generative model, we construct a density-ratio estimator using 1D empirical distributions of the MI from a pre-trained classifier. For a given generative model, this can be used to define Monte Carlo sampling algorithms targeting the true data density. In a more challenging image domain, we use the estimator to define a novel data augmentation scheme for fine-tuning variational autoencoders (VAEs), improving the quality of generated samples.
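The MI quantity referenced above is the standard epistemic-uncertainty decomposition (total predictive entropy minus expected entropy). A minimal NumPy sketch, assuming stochastic forward passes (e.g. MC dropout or an ensemble) yield samples of class probabilities; the function name is our own illustration, not from the paper:

```python
import numpy as np

def predictive_mutual_information(probs):
    """MI between predictions and model parameters (BALD-style).

    probs: array of shape (S, N, C) -- S stochastic forward passes,
           N inputs, C classes.
    Returns: array of shape (N,), the per-input epistemic uncertainty.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)  # (N, C) averaged predictive distribution
    # Entropy of the mean prediction: total uncertainty
    entropy_of_mean = -(mean_p * np.log(mean_p + eps)).sum(-1)
    # Mean entropy of each sampled prediction: aleatoric component
    mean_of_entropy = -(probs * np.log(probs + eps)).sum(-1).mean(axis=0)
    # Their difference is the mutual information (epistemic component)
    return entropy_of_mean - mean_of_entropy
```

Inputs where the sampled predictions agree yield MI near zero, while disagreement between samples yields high MI; the paper's density-ratio estimator is built from 1D empirical distributions of this statistic.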
Number of pages: 9
Publication status: Published - 17 Jul 2020
Event: ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning (ICML UDL 2020), virtual workshop, 17 Jul 2020