Abstract
Implicit stochastic models, where the data-generation distribution is intractable but sampling is possible, are ubiquitous in the natural sciences. The models typically have free parameters that need to be inferred from data collected in scientific experiments. A fundamental question is how to design the experiments so that the collected data are most useful. The field of Bayesian experimental design advocates that, ideally, we should choose designs that maximise the mutual information (MI) between the data and the parameters. For implicit models, however, this approach is severely hampered by the high computational cost of computing posteriors and maximising MI, in particular when we have more than a handful of design variables to optimise. In this paper, we propose a new approach to Bayesian experimental design for implicit models that leverages recent advances in neural MI estimation to deal with these issues. We show that training a neural network to maximise a lower bound on MI allows us to jointly determine the optimal design and the posterior. Simulation studies illustrate that this gracefully extends Bayesian experimental design for implicit models to higher design dimensions.
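The core idea in the abstract, training a neural network to maximise a lower bound on MI while simultaneously optimising the design, can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's implementation: the toy linear simulator `simulate`, the NWJ (MINE-f) lower bound, the bounded design domain, and all hyperparameters are choices made here for concreteness, and it assumes gradients can flow from the simulated data back to the design (a reparametrisable simulator).

```python
# Minimal sketch: jointly optimise a neural MI lower bound over the critic
# network T(theta, y) and a design variable d (hypothetical toy setup).
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate(theta, d):
    """Hypothetical differentiable simulator: noisy linear response at design d."""
    return theta * d + 0.1 * torch.randn_like(theta)

class Critic(nn.Module):
    """Neural network T(theta, y) appearing in the MI lower bound."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, theta, y):
        return self.net(torch.cat([theta, y], dim=-1)).squeeze(-1)

critic = Critic()
d = torch.tensor([0.5], requires_grad=True)            # scalar design variable
opt = torch.optim.Adam(list(critic.parameters()) + [d], lr=1e-3)

for step in range(2000):
    theta = torch.randn(256, 1)                        # prior samples theta ~ N(0, 1)
    y = simulate(theta, d)                             # joint samples (theta, y)
    theta_shuf = theta[torch.randperm(theta.size(0))]  # break pairing -> marginals

    # NWJ (MINE-f) bound: E_joint[T] - E_marg[exp(T - 1)] <= I(theta; y | d)
    mi_lb = critic(theta, y).mean() - torch.exp(critic(theta_shuf, y) - 1.0).mean()

    opt.zero_grad()
    (-mi_lb).backward()                                # ascend the lower bound
    opt.step()
    with torch.no_grad():
        d.clamp_(0.0, 10.0)                            # keep d in a feasible design set

print(f"design d* ~ {d.item():.3f}, MI lower bound ~ {mi_lb.item():.3f}")
```

For the NWJ bound used in this sketch, the optimal critic satisfies T*(θ, y) = 1 + log[p(θ | y, d) / p(θ)], so the trained network also provides a density-ratio estimate from which an approximate posterior can be read off; this is one way to see the joint determination of optimal design and posterior described in the abstract.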
Original language | English |
---|---|
Title of host publication | Proceedings of the 37th International Conference on Machine Learning (ICML) 2020 |
Publisher | PMLR |
Pages | 5316-5326 |
Number of pages | 11 |
Publication status | Published - 18 Jul 2020 |
Event | Thirty-seventh International Conference on Machine Learning 2020, Virtual conference, 13 Jul 2020 → 18 Jul 2020, https://icml.cc/ |
Publication series
Name | Proceedings of Machine Learning Research |
---|---|
Volume | 119 |
ISSN (Electronic) | 2640-3498 |
Conference
Conference | Thirty-seventh International Conference on Machine Learning 2020 |
---|---|
Abbreviated title | ICML 2020 |
City | Virtual conference |
Period | 13/07/20 → 18/07/20 |
Internet address | https://icml.cc/ |