Abstract / Description of output
The Indian buffet process (IBP) provides a principled prior distribution for inferring the number of latent features in a dataset. Traditional inference for these models is slow on large datasets, which motivates amortized neural inference methods. However, previous work on variational inference for these models requires a truncated approximation, in which the maximum number of features is fixed in advance. To address this problem, we present a new dynamic variational posterior by introducing auxiliary variables into the stick-breaking construction of the IBP. We describe how to estimate the evidence lower bound, which contains infinite sums, using Russian roulette sampling.
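The abstract's Russian roulette sampling refers to a standard trick for building an unbiased estimate of an infinite sum from finitely many terms: truncate the sum at a random point and reweight each surviving term by its survival probability. The sketch below is a minimal generic illustration of that estimator (with a geometric stopping rule), not the paper's actual ELBO computation; the function name and the continuation probability `q` are illustrative choices.

```python
import random

def russian_roulette_sum(term, q=0.5, max_terms=10_000):
    """Single-sample unbiased estimate of sum_{k=1}^inf term(k).

    After each term we continue with probability q, so term k is
    evaluated with probability q**(k-1); dividing by that survival
    probability makes the truncated estimate unbiased in expectation.
    """
    estimate = 0.0
    survival = 1.0  # probability of having reached term k, i.e. q**(k-1)
    for k in range(1, max_terms + 1):
        estimate += term(k) / survival
        if random.random() >= q:  # stop with probability 1 - q
            break
        survival *= q
    return estimate
```

Averaging many such samples for, say, `term(k) = 0.5**k` converges to the true sum 1.0, while each individual sample only ever evaluates a finite (random) number of terms.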
Original language | English |
---|---|
Number of pages | 11 |
Publication status | Published - 2018 |
Event | Bayesian Nonparametrics workshop at the thirty-second Conference on Neural Information Processing Systems, Montreal, Canada. Duration: 7 Dec 2018 → 7 Dec 2018. https://sites.google.com/view/nipsbnp2018/ |
Workshop
Workshop | Bayesian Nonparametrics workshop at the thirty-second Conference on Neural Information Processing Systems |
---|---|
Abbreviated title | BNP@NIPS 2018 |
Country/Territory | Canada |
City | Montreal |
Period | 7/12/18 → 7/12/18 |
Internet address | https://sites.google.com/view/nipsbnp2018/ |