On Contrastive Learning for Likelihood-free Inference

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Likelihood-free methods perform parameter inference in stochastic simulator models where evaluating the likelihood is intractable but sampling synthetic data is possible. One class of methods for this likelihood-free problem uses a classifier to distinguish between pairs of parameter-observation samples generated using the simulator and pairs sampled from some reference distribution, which implicitly learns a density ratio proportional to the likelihood. Another popular class of methods fits a conditional distribution to the parameter posterior directly, and a particular recent variant allows for the use of flexible neural density estimators for this task. In this work, we show that both of these approaches can be unified under a general contrastive learning scheme, and clarify how they should be run and compared.
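The classifier-based approach described above can be illustrated on a toy problem. The sketch below is not the paper's method, only a minimal, hypothetical illustration of the underlying idea: a logistic-regression classifier is trained to distinguish simulator pairs (theta, x) from reference pairs in which theta is shuffled, and its log-odds then estimate log p(x | theta) - log p(x), a ratio proportional to the likelihood. The Gaussian simulator and the hand-picked quadratic features are assumptions chosen so this simple classifier can represent the true ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulator: theta ~ N(0, 1), x | theta ~ N(theta, 1).
n = 5000
theta = rng.normal(size=n)
x = theta + rng.normal(size=n)

# Positive class: dependent (theta, x) pairs drawn from the simulator.
# Negative class: reference pairs with theta shuffled, which breaks the
# dependence so they follow the product of marginals p(theta) p(x).
theta_ref = rng.permutation(theta)

def features(t, x_):
    # Quadratic features suffice for this Gaussian toy example; a real
    # method would use a flexible classifier such as a neural network.
    return np.column_stack([t, x_, t * x_, t**2, x_**2, np.ones_like(t)])

X = np.vstack([features(theta, x), features(theta_ref, x)])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Fit logistic regression by plain gradient descent on the cross-entropy.
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.05 * X.T @ (p - y) / len(y)

def log_ratio(t, x_):
    # The classifier's log-odds estimate log p(x | theta) - log p(x),
    # i.e. the log likelihood-to-evidence ratio, up to estimation error.
    return features(np.atleast_1d(t), np.atleast_1d(x_)) @ w

# The estimated ratio should favor observations near theta.
print(log_ratio(1.0, 1.0)[0], log_ratio(1.0, -3.0)[0])
```

Up to a constant in theta, this log-ratio can be added to the log prior and used as an unnormalized log posterior, which is what makes the density-ratio trick useful for likelihood-free inference.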
Original language: English
Title of host publication: Proceedings of the 37th International Conference on Machine Learning
Number of pages: 15
Publication status: Published - 18 Jul 2020
Event: Thirty-seventh International Conference on Machine Learning 2020 - Virtual conference
Duration: 13 Jul 2020 - 18 Jul 2020

Publication series

Name: Proceedings of Machine Learning Research
ISSN (Electronic): 2640-3498


Conference: Thirty-seventh International Conference on Machine Learning 2020
Abbreviated title: ICML 2020
City: Virtual conference


