Edinburgh Research Explorer

Policy learning for time-bounded reachability in Continuous-Time Markov Decision Processes via doubly-stochastic gradient ascent

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Open Access permissions: Open

Documents

http://link.springer.com/chapter/10.1007/978-3-319-43425-4_17
Original language: English
Title of host publication: Quantitative Evaluation of Systems
Subtitle of host publication: 13th International Conference, QEST 2016, Quebec City, QC, Canada, August 23-25, 2016, Proceedings
Publisher: Springer International Publishing
Pages: 244-259
Number of pages: 16
ISBN (Electronic): 978-3-319-43425-4
ISBN (Print): 978-3-319-43424-7
DOI: 10.1007/978-3-319-43425-4_17
State: Published - 3 Aug 2016
Event: 13th International Conference on Quantitative Evaluation of SysTems - Quebec City, Canada
Duration: 23 Aug 2016 - 25 Aug 2016
http://www.qest.org/qest2016/

Publication series

Name: Lecture Notes in Computer Science (LNCS)
Publisher: Springer International Publishing
Volume: 9826
ISSN (Print): 0302-9743

Conference

Conference: 13th International Conference on Quantitative Evaluation of SysTems
Abbreviated title: QEST 2016
Country: Canada
City: Quebec City
Period: 23/08/16 - 25/08/16
Internet address: http://www.qest.org/qest2016/

Abstract

Continuous-time Markov decision processes are an important class of models with applications ranging from cyber-physical systems to synthetic biology. A central problem is how to devise a policy to control the system in order to maximise the probability of satisfying a set of temporal logic specifications. Here we present a novel approach based on statistical model checking and an unbiased estimation of a functional gradient in the space of possible policies. The statistical approach has several advantages over conventional approaches based on uniformisation: it can also be applied when the model is replaced by a black box, and it does not suffer from state-space explosion. The use of a stochastic gradient to guide our search considerably improves the efficiency of learning policies. We demonstrate the method on a proof-of-principle non-linear population model, showing strong performance on a non-trivial task.
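To make the idea concrete, the following is a minimal sketch (not the authors' algorithm) of simulation-based policy-gradient learning for time-bounded reachability in a toy CTMDP. The model, rates, time bound, and the use of a score-function (REINFORCE-style) estimator are all illustrative assumptions: trajectories are sampled Gillespie-style, the reachability probability is estimated statistically, and the gradient of the expected reward with respect to a softmax policy's parameters is itself estimated from samples, hence "doubly stochastic".

```python
import math
import random

# Toy 3-state CTMDP (assumed for illustration): goal = state 2.
# RATES[s][a][s'] = transition rate from s to s' under action a.
RATES = {
    0: {0: [0.0, 1.0, 0.1], 1: [0.0, 0.2, 0.5]},
    1: {0: [0.5, 0.0, 0.3], 1: [0.1, 0.0, 1.0]},
}
GOAL, T_BOUND = 2, 2.0
ACTIONS = [0, 1]


def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]


def simulate(theta, rng):
    """Sample one time-bounded trajectory from state 0.

    Returns (reward, score): reward is 1 if the goal is reached before
    T_BOUND, and score accumulates grad_theta log pi(a|s) over the
    visited states (the score-function part of the gradient estimate).
    """
    s, t = 0, 0.0
    score = {k: 0.0 for k in theta}
    while s != GOAL:
        probs = softmax([theta[(s, a)] for a in ACTIONS])
        a = rng.choices(ACTIONS, weights=probs)[0]
        for b in ACTIONS:  # gradient of log-softmax at the sampled action
            score[(s, b)] += (1.0 if b == a else 0.0) - probs[b]
        total = sum(RATES[s][a])
        t += rng.expovariate(total)  # exponential holding time
        if t >= T_BOUND:
            break
        s = rng.choices(range(3), weights=RATES[s][a])[0]
    return (1.0 if s == GOAL else 0.0), score


def estimate_reach(theta, n, rng):
    """Statistical estimate of the time-bounded reachability probability."""
    return sum(simulate(theta, rng)[0] for _ in range(n)) / n


def train(iters=200, batch=50, lr=0.5, seed=0):
    """Doubly-stochastic gradient ascent on the policy parameters."""
    rng = random.Random(seed)
    theta = {(s, a): 0.0 for s in RATES for a in ACTIONS}
    for _ in range(iters):
        grad = {k: 0.0 for k in theta}
        for _ in range(batch):
            r, score = simulate(theta, rng)
            for k in theta:
                grad[k] += r * score[k]  # unbiased gradient sample
        for k in theta:
            theta[k] += lr * grad[k] / batch
    return theta
```

Because both the objective and its gradient are estimated purely from sampled trajectories, this style of search needs only a simulator of the model (a "black box") and never builds the full state space, which is the advantage over uniformisation-based methods noted above.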



ID: 25478338