Scalable Extreme Deconvolution

Research output: Contribution to conference › Poster › peer-review

Abstract

The Extreme Deconvolution method, originally developed for astronomical datasets, fits a probability density to data in which each observation has had Gaussian noise added with a known sample-specific covariance. The existing fitting method is batch EM, which would not normally be applied to large datasets such as the Gaia catalog of noisy observations of a billion stars. We propose two minibatch variants of extreme deconvolution, one based on an online variation of the EM algorithm and the other on direct gradient-based optimisation of the log-likelihood; both can run on GPUs. We demonstrate that these methods provide faster fitting, whilst scaling to much larger models for use with larger datasets.
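
The gradient-based variant is straightforward to sketch. Below is a minimal, illustrative JAX implementation (an assumption for this page, not the authors' code) that fits a K-component Gaussian mixture to noisy observations by maximising the extreme-deconvolution marginal log-likelihood, sum_i log sum_k alpha_k N(x_i; mu_k, V_k + S_i), with minibatch gradient steps; the function names and the plain SGD update are choices made for the example only.

```python
# Minimal sketch (assumptions, not the authors' implementation) of the
# gradient-based minibatch variant of extreme deconvolution in JAX.
import jax
import jax.numpy as jnp

def xd_log_likelihood(params, x, S):
    """Mean marginal log-likelihood of a minibatch under the XD model
    p(x_i) = sum_k alpha_k N(x_i; mu_k, V_k + S_i), where x: (B, D) are
    noisy observations and S: (B, D, D) the known noise covariances."""
    logits, mu, L = params                  # L: (K, D, D) Cholesky factors of V_k
    dim = x.shape[-1]
    log_alpha = jax.nn.log_softmax(logits)  # mixture weights via softmax
    V = L @ jnp.swapaxes(L, -1, -2)         # (K, D, D) positive semi-definite V_k

    C = V[None] + S[:, None]                # (B, K, D, D) total covariance V_k + S_i
    diff = x[:, None, :] - mu[None, :, :]   # (B, K, D)
    sol = jnp.linalg.solve(C, diff[..., None])[..., 0]
    maha = jnp.sum(diff * sol, axis=-1)     # Mahalanobis terms, (B, K)
    _, logdet = jnp.linalg.slogdet(C)
    log_comp = log_alpha - 0.5 * (dim * jnp.log(2.0 * jnp.pi) + logdet + maha)
    return jnp.mean(jax.scipy.special.logsumexp(log_comp, axis=-1))

@jax.jit
def sgd_step(params, x, S, lr=1e-2):
    """One minibatch gradient-ascent step on the log-likelihood
    (plain SGD here; the poster may use a different optimiser)."""
    ll, grads = jax.value_and_grad(xd_log_likelihood)(params, x, S)
    params = jax.tree_util.tree_map(lambda p, g: p + lr * g, params, grads)
    return params, ll

# Usage on synthetic data: K components, D dimensions, minibatch of B points.
K, D, B = 4, 3, 256
key = jax.random.PRNGKey(0)
params = (jnp.zeros(K),                     # mixture logits (uniform weights)
          jax.random.normal(key, (K, D)),   # component means
          jnp.tile(jnp.eye(D), (K, 1, 1)))  # identity Cholesky factors
x = jax.random.normal(jax.random.PRNGKey(1), (B, D))
S = jnp.tile(0.1 * jnp.eye(D), (B, 1, 1))   # known per-sample noise covariances
for _ in range(200):
    params, ll = sgd_step(params, x, S)
```

Parametrising the weights through a softmax and each component covariance through its Cholesky factor keeps the mixture valid without explicit constraints, which is what makes direct gradient optimisation (and hence minibatch, GPU-friendly fitting) practical.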
Original language: English
Number of pages: 7
Publication status: Published - 14 Dec 2019
Event: Second Workshop on Machine Learning and the Physical Sciences (NeurIPS 2019) - Vancouver, Canada
Duration: 14 Dec 2019 - 14 Dec 2019
https://ml4physicalsciences.github.io/

Conference

Conference: Second Workshop on Machine Learning and the Physical Sciences (NeurIPS 2019)
Country/Territory: Canada
City: Vancouver
Period: 14/12/19 - 14/12/19
Internet address: https://ml4physicalsciences.github.io/
