Planning in Discrete and Continuous Markov Decision Processes by Probabilistic Programming

Davide Nitti, Vaishak Belle, Luc De Raedt

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Real-world planning problems frequently involve mixtures of continuous and discrete state variables and actions, and are formulated in environments with an unknown number of objects. In recent years, probabilistic programming has emerged as a natural approach to capture and characterize such complex probability distributions with general-purpose inference methods. While it is known that a probabilistic programming language can easily be extended to represent Markov Decision Processes (MDPs) for planning tasks, solving such tasks is challenging. Building on related efforts in reinforcement learning, we introduce a conceptually simple but powerful planning algorithm for MDPs realized as probabilistic programs. This planner constructs approximations to the optimal policy by importance sampling, while exploiting knowledge of the MDP model. In our empirical evaluations, we show that this approach applies to domains ranging from strictly discrete to strictly continuous to hybrid ones, handles intricacies such as an unknown number of objects, and is competitive given its generality.
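The abstract only sketches the core mechanism. As a rough, hedged illustration (not the authors' algorithm, whose details are in the paper), the snippet below shows the generic idea it builds on: evaluating a candidate policy for a hybrid MDP with continuous states and discrete actions by importance sampling over rollouts drawn from a proposal policy, using a known transition model. The toy MDP, the two policies, and all parameter values are illustrative assumptions.

# A minimal sketch of off-policy evaluation by importance sampling in a
# hybrid MDP (continuous state, discrete actions). Everything here is an
# illustrative assumption, not code from the paper.

import random

GAMMA = 0.95                 # discount factor (assumed)
HORIZON = 20                 # rollout length (assumed)
ACTIONS = [-1.0, 0.0, 1.0]   # discrete actions over a continuous state

def step(x, a):
    """Known MDP model: noisy 1-D motion; reward favors states near x = 0."""
    x_next = x + a + random.gauss(0.0, 0.1)
    return x_next, -abs(x_next)

def proposal(x):
    """Behavior policy q(a | x): uniform over the actions."""
    return random.choice(ACTIONS), 1.0 / len(ACTIONS)

def target_prob(x, a):
    """Candidate policy pi(a | x): mostly greedy toward the goal at x = 0."""
    greedy = -1.0 if x > 0 else 1.0
    return 0.8 if a == greedy else 0.1

def is_estimate(x0, n_rollouts=2000):
    """Importance-sampling estimate of the candidate policy's return from x0.

    Rollouts are sampled from the proposal q; each trajectory's return is
    weighted by the product of per-step ratios pi(a|x) / q(a|x), which makes
    the average an unbiased estimate of the return under pi.
    """
    total = 0.0
    for _ in range(n_rollouts):
        x, weight, ret = x0, 1.0, 0.0
        for t in range(HORIZON):
            a, q_prob = proposal(x)
            weight *= target_prob(x, a) / q_prob
            x, r = step(x, a)
            ret += (GAMMA ** t) * r
        total += weight * ret
    return total / n_rollouts

if __name__ == "__main__":
    print("estimated return of candidate policy:", is_estimate(x0=3.0))

A planner along these lines would score many candidate policies this way, reusing the same proposal rollouts, and keep the best; the paper's contribution is doing this inside a probabilistic programming language while exploiting the MDP model.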
Original language: English
Title of host publication: Machine Learning and Knowledge Discovery in Databases
Subtitle of host publication: European Conference, ECML PKDD 2015, Porto, Portugal, September 7-11, 2015, Proceedings, Part II
Publisher: Springer International Publishing
Pages: 327-342
Number of pages: 16
ISBN (Electronic): 978-3-319-23524-0
ISBN (Print): 978-3-319-23525-7
DOIs
Publication status: Published - Aug 2015

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer International Publishing
Volume: 9285
ISSN (Print): 0302-9743

