Probabilistic Tractable Models in Mixed Discrete-Continuous Domains

Andreas Bueff, Stefanie Speichert, Vaishak Belle

Research output: Contribution to journal › Article › peer-review


We study the problem of unsupervised learning of graphical models in mixed discrete-continuous domains. Unsupervised learning of such models in discrete domains alone is notoriously challenging, compounded by the fact that inference is computationally demanding. The situation is generally believed to be significantly worse in discrete-continuous domains: estimating the unknown probability distribution of given samples is often limited in practice to a handful of parametric forms, and moreover, computing conditional queries must carefully handle low-probability regions in safety-critical applications. In recent years, the regime of tractable learning has emerged, which attempts to learn a graphical model that permits efficient inference. Most of the results in this regime are based on arithmetic circuits, for which inference is linear in the size of the obtained circuit. In this work, we show how, with minimal modifications, such regimes can be generalized by leveraging efficient density estimation schemes based on piecewise polynomial approximations. Our framework is realized on a recent computational abstraction that permits efficient inference for a range of queries in the underlying language. Our empirical results show that our approach is effective, and allows for a study of the trade-off between the granularity of the learned model and its predictive power.
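To make the density-estimation idea concrete, here is a minimal sketch (an illustration, not the authors' implementation) of piecewise polynomial density estimation for one continuous variable: the support is split into pieces, and a low-degree polynomial is fit to a histogram density within each piece. The piece count, polynomial degree, and function names are assumptions for illustration.

```python
import numpy as np

def fit_piecewise_density(samples, n_pieces=4, degree=2):
    """Fit one low-degree polynomial per piece to a histogram density.

    Hypothetical helper: splits [min, max] into n_pieces intervals and
    least-squares fits a degree-`degree` polynomial to the normalized
    histogram within each interval.
    """
    lo, hi = samples.min(), samples.max()
    edges = np.linspace(lo, hi, n_pieces + 1)
    # Fine histogram used as the regression target inside each piece.
    counts, bin_edges = np.histogram(samples, bins=50, range=(lo, hi),
                                     density=True)
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    pieces = []
    for a, b in zip(edges[:-1], edges[1:]):
        mask = (centers >= a) & (centers <= b)
        coeffs = np.polyfit(centers[mask], counts[mask], degree)
        pieces.append((a, b, coeffs))
    return pieces

def eval_density(pieces, x):
    """Evaluate the fitted piecewise polynomial at x (clamped at 0)."""
    for a, b, coeffs in pieces:
        if a <= x <= b:
            return max(float(np.polyval(coeffs, x)), 0.0)
    return 0.0

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 5000)
pieces = fit_piecewise_density(samples)
```

In the paper's setting such polynomial pieces become leaf distributions inside a tractable circuit, so that queries over the continuous variables reduce to integrating polynomials, which can be done in closed form.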
Original language: English
Pages (from-to): 228–260
Number of pages: 33
Journal: Data Intelligence
Issue number: 2
Early online date: 16 Dec 2020
Publication status: Published - 2 Jun 2021


Keywords

  • Graphical models
  • Tractable inference
  • Hybrid domains
  • Weighted model integration
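The abstract's claim that "inference is linear in the size of the obtained circuit" can be illustrated with a toy arithmetic circuit evaluated bottom-up in a single pass, so each node is visited once. The circuit below (a sketch, not the paper's code; the node encoding is an assumption) represents two independent binary variables A and B with P(A=1)=0.3 and P(B=1)=0.6.

```python
import math

def evaluate(node, evidence):
    """One bottom-up pass over the circuit: each node is visited once,
    so the cost is linear in the number of circuit nodes."""
    kind = node[0]
    if kind == "leaf":  # ("leaf", var, value, weight)
        _, var, value, weight = node
        # Indicator is 1 unless the evidence contradicts this leaf.
        indicator = 1.0 if evidence.get(var, value) == value else 0.0
        return weight * indicator
    children = [evaluate(child, evidence) for child in node[1]]
    return sum(children) if kind == "sum" else math.prod(children)

# Toy circuit for two independent binary variables A and B.
circuit = ("prod", [
    ("sum", [("leaf", "A", 1, 0.3), ("leaf", "A", 0, 0.7)]),
    ("sum", [("leaf", "B", 1, 0.6), ("leaf", "B", 0, 0.4)]),
])

evaluate(circuit, {})        # -> 1.0, all states marginalized out
evaluate(circuit, {"A": 1})  # -> 0.3, i.e. P(A = 1)
```

Conditional queries then follow by two such passes, e.g. P(B=1 | A=1) = evaluate with both observations divided by evaluate with A alone; the paper's contribution is extending this machinery to continuous leaves via piecewise polynomials.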

