Weighted Model Counting With Function Symbols

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Probabilistic relational languages lift the syntax of relational logic for the specification of large-scale probabilistic graphical models, often admitting concise descriptions for interacting random variables over classes, hierarchies and constraints. The emergence of weighted model counting as an effective and general approach to probabilistic inference has further allowed practitioners to reason about heterogeneous representations, such as Markov logic networks and ProbLog programs, by encoding them as a logical theory. However, much of this work has been limited to an essentially propositional setting: the logical model is understood in terms of ground formulas over a fixed and finite domain; no infinite domains, and certainly no function symbols (other than constants). On the one hand, this is not surprising, because such features are very problematic from a decidability viewpoint, but on the other, they turn out to be very attractive from the point of view of machine learning applications when there is uncertainty about the existence and identity of objects. In this paper, we reconsider the problem of probabilistic reasoning in a logical language with function symbols, and establish some key results that permit effective algorithms.
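To make the inference setting concrete, weighted model counting sums, over all satisfying assignments of a propositional theory, the product of per-literal weights. The following is a minimal naive-enumeration sketch; the variables, clauses, and weights are illustrative examples, not taken from the paper, which concerns lifting this computation beyond the propositional, finite-domain case.

```python
from itertools import product

def wmc(variables, clauses, weight):
    """Weighted model count of a CNF by naive enumeration.

    `clauses` is a list of clauses; each clause is a list of
    (variable, polarity) literals. `weight(v, b)` gives the weight
    of variable `v` taking truth value `b`.
    """
    total = 0.0
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        # A model satisfies a CNF iff every clause has a true literal.
        if all(any(model[v] == pol for v, pol in clause)
               for clause in clauses):
            w = 1.0
            for v in variables:
                w *= weight(v, model[v])
            total += w
    return total

# Hypothetical example: the theory (rain -> wet), encoded as the
# single clause (!rain OR wet), with probability-like weights.
variables = ["rain", "wet"]
clauses = [[("rain", False), ("wet", True)]]
weights = {("rain", True): 0.2, ("rain", False): 0.8,
           ("wet", True): 0.7, ("wet", False): 0.3}
print(wmc(variables, clauses, lambda v, b: weights[(v, b)]))
```

Enumeration is exponential in the number of variables; practical WMC solvers instead compile the theory into tractable circuit representations, and the finite grounding step is precisely what fails once function symbols induce infinite domains.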
Original language: English
Title of host publication: The Conference on Uncertainty in Artificial Intelligence (UAI 2017)
Number of pages: 10
Publication status: Published - 15 Aug 2017
Event: Conference on Uncertainty in Artificial Intelligence - Sydney, Australia
Duration: 11 Aug 2017 - 15 Aug 2017
http://www.auai.org/uai2017/index.php

Conference

Conference: Conference on Uncertainty in Artificial Intelligence
Abbreviated title: UAI 2017
Country: Australia
City: Sydney
Period: 11/08/17 - 15/08/17
Internet address: http://www.auai.org/uai2017/index.php
