Abstract
We describe a simple and efficient approach to
learning structures of sparse high-dimensional
latent variable models. Standard algorithms either
learn structures of specific predefined forms,
or estimate sparse graphs in the data space, ignoring
the possibility of latent variables. In contrast,
our method learns rich dependencies and
allows for latent variables that may confound the
relations between the observations. We extend
the model to conditional mixtures with side information
and non-Gaussian marginal distributions
of the observations. We then show that
our model may be used for learning sparse latent
variable structures corresponding to multiple
unknown states, and for uncovering features
useful for explaining and predicting structural
changes. We apply the model to real-world financial
data with heavy-tailed marginals covering
the low- and high-volatility market periods
of 2005-2011. We show that our method tends to
yield significantly higher test-data likelihoods
than standard network learning methods that exploit
the sparsity assumption. We also demonstrate
that our approach may be practical for financial
stress-testing and visualization of dependencies
between financial instruments.
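The abstract does not specify the estimator, so the sketches below are illustrative assumptions rather than the paper's algorithm. The first shows why latent variables "confound the relations between the observations" in a Gaussian graphical model: if the joint precision over observed and latent variables is sparse, the marginal precision of the observed block is the Schur complement, i.e. a sparse part minus a low-rank correction, and is generally dense. All dimensions, edge weights, and variable names here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
p, h = 8, 2                       # observed and latent dimensions (illustrative)
d = p + h

# Sparse, symmetric joint precision over (observed x, latent z).
K = np.zeros((d, d))
for i, j in [(0, 1), (2, 3), (4, 5), (6, 7)]:     # a few observed-observed edges
    K[i, j] = K[j, i] = 0.4
for j in range(h):                                # each latent ties to a few observed
    for i in rng.choice(p, size=4, replace=False):
        K[i, p + j] = K[p + j, i] = 0.3
K += np.eye(d) * (np.abs(K).sum(axis=1).max() + 0.1)   # diagonal dominance -> PD

K_xx, K_xz, K_zz = K[:p, :p], K[:p, p:], K[p:, p:]

# Marginal precision of the observed block is the Schur complement:
# the sparse block minus a low-rank term induced by the latent variables.
K_marg = K_xx - K_xz @ np.linalg.solve(K_zz, K_xz.T)

def offdiag_density(M, tol=1e-8):
    off = M - np.diag(np.diag(M))
    return float(np.mean(np.abs(off) > tol))

print("density of observed-observed block:", offdiag_density(K_xx))
print("density after marginalizing latents:", offdiag_density(K_marg))
```

The second sketch illustrates the kind of sparsity-based baseline the abstract compares against, together with a rank-based Gaussian-copula (nonparanormal) transform as one common way to handle heavy-tailed marginals, scored by held-out log-likelihood. It uses scikit-learn's `GraphicalLassoCV` on synthetic Student-t data; neither the library nor the data reflects the paper's actual experiments.

```python
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.covariance import GraphicalLassoCV

def to_gaussian_margins(X):
    """Map each column to approximately standard-normal scores via empirical ranks."""
    n = X.shape[0]
    U = np.apply_along_axis(rankdata, 0, X) / (n + 1.0)
    return norm.ppf(U)

rng = np.random.default_rng(1)
n, p = 400, 10
X = rng.standard_t(df=3, size=(n, p))      # heavy-tailed synthetic "returns"
X[:, 1] += 0.5 * X[:, 0]                   # inject a simple dependency

# For brevity the transform is fit on the full sample; a careful pipeline
# would estimate it on the training split only.
Z = to_gaussian_margins(X)
Z_train, Z_test = Z[:300], Z[300:]

model = GraphicalLassoCV().fit(Z_train)
P = model.precision_
n_edges = int((np.abs(P - np.diag(np.diag(P))) > 1e-4).sum() // 2)
print("estimated edges in the sparse graph:", n_edges)
print("held-out average log-likelihood:", model.score(Z_test))
```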
| Field | Value |
| --- | --- |
| Original language | English |
| Title of host publication | Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS-12) |
| Editors | Neil D. Lawrence, Mark A. Girolami |
| Pages | 10-18 |
| Number of pages | 9 |
| Volume | 22 |
| Publication status | Published - 2012 |