Weighted model counting (WMC) has recently emerged as an effective and general approach to probabilistic inference, offering a computational framework for encoding a variety of formalisms, such as factor graphs and Bayesian networks. The advent of large-scale probabilistic knowledge bases has generated further interest in relational probabilistic representations, obtained by attaching weights to first-order formulas, whose semantics is given in terms of the ground theory, and solved by WMC. A fundamental limitation is that the domain of quantification, by construction and design, is assumed to be finite, which is at odds with areas such as vision and language understanding, where the existence of objects must be inferred from raw data. Dropping the finite-domain assumption is known to improve the expressiveness of a first-order language for open-universe purposes, but such languages have so far eluded WMC approaches. In this paper, we revisit relational probabilistic models over an infinite domain, and establish a number of results that permit effective algorithms. We demonstrate this language on a number of examples, including a parameterized version of Pearl’s Burglary-Earthquake-Alarm Bayesian network.
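To give a concrete feel for how WMC encodes a Bayesian network, the sketch below enumerates the models of Pearl’s Burglary-Earthquake-Alarm network and computes a conditional query as a ratio of two weighted model counts. The CPT values are the standard textbook numbers, an illustrative assumption rather than figures taken from this paper, and the brute-force enumeration stands in for the compiled solvers a real WMC system would use.

```python
from itertools import product

# Textbook CPTs for the Burglary-Earthquake-Alarm network (assumed values).
P_B = 0.001                      # P(Burglary)
P_E = 0.002                      # P(Earthquake)
P_A = {(True, True): 0.95,       # P(Alarm | Burglary, Earthquake)
       (True, False): 0.94,
       (False, True): 0.29,
       (False, False): 0.001}

def weight(b, e, a):
    """Weight of one model: the product of its per-variable CPT entries."""
    w = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
    pa = P_A[(b, e)]
    return w * (pa if a else 1 - pa)

def wmc(query):
    """Sum the weights of all models (truth assignments) satisfying `query`."""
    return sum(weight(b, e, a)
               for b, e, a in product([True, False], repeat=3)
               if query(b, e, a))

# Conditional probability P(Burglary | Alarm) as a ratio of two counts.
p = wmc(lambda b, e, a: b and a) / wmc(lambda b, e, a: a)
print(round(p, 4))  # ≈ 0.3736
```

The finiteness the abstract highlights is visible here: `product([True, False], repeat=3)` ranges over a fixed, finite set of models, which is exactly the assumption the paper drops for open-universe domains.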
|Title of host publication||Proceedings of The Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)|
|Number of pages||8|
|Publication status||Published - 12 Feb 2017|
|Event||Thirty-First AAAI Conference on Artificial Intelligence - San Francisco, United States|
|Duration||4 Feb 2017 → 9 Feb 2017|