Transience in Countable MDPs

Stefan Kiefer, Richard Mayr, Mahsa Shirmohammadi, Patrick Totzke

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

The Transience objective is not to visit any state infinitely often. While this is not possible in a finite Markov Decision Process (MDP), it can be satisfied in countably infinite ones, e.g., if the transition graph is acyclic. We prove the following fundamental properties of Transience in countably infinite MDPs.

1. There exist uniformly ε-optimal MD strategies (memoryless deterministic) for Transience, even in infinitely branching MDPs.
2. Optimal strategies for Transience need not exist, even if the MDP is finitely branching. However, if an optimal strategy exists then there is also an optimal MD strategy.
3. If an MDP is universally transient (i.e., almost surely transient under all strategies) then many other objectives have a lower strategy complexity than in general MDPs. E.g., ε-optimal strategies for Safety and co-Büchi and optimal strategies for {0,1,2}-Parity (where they exist) can be chosen MD, even if the MDP is infinitely branching.
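As a toy illustration of the notion (a hypothetical sketch, not taken from the paper): consider a countably infinite MDP whose states are the natural numbers, with one action that deterministically moves right and one that flips a coin between moving right and resetting to 0. The memoryless deterministic (MD) strategy "always move right" induces an acyclic run, so every state is visited at most once and Transience holds surely; the coin strategy revisits state 0 again and again.

```python
import random

# Hypothetical sketch MDP on the naturals (illustration only, not from the paper).
# Action "right": deterministically move from n to n+1 (induces an acyclic graph).
# Action "coin":  move to n+1 or reset to 0, each with probability 1/2.

def step(state, action, rng):
    """One transition of the sketch MDP."""
    if action == "right":
        return state + 1                          # deterministic move right
    return state + 1 if rng.random() < 0.5 else 0  # coin: right or reset to 0

def run(strategy, steps, seed=0):
    """Run a memoryless (state-to-action) strategy; count visits per state."""
    rng = random.Random(seed)
    state, visits = 0, {}
    for _ in range(steps):
        visits[state] = visits.get(state, 0) + 1
        state = step(state, strategy(state), rng)
    return visits

# MD strategy "always right": each state is visited exactly once,
# so no state is visited infinitely often -- Transience holds surely.
right_visits = run(lambda s: "right", steps=1000)
assert max(right_visits.values()) == 1

# The coin strategy keeps returning to state 0, so it is not transient.
coin_visits = run(lambda s: "coin", steps=1000)
assert coin_visits[0] > 1
```

The "always right" strategy here is an example of the kind of MD witness the paper's first result guarantees (up to ε) in general; the finite-run simulation is only suggestive, since Transience itself is a property of infinite runs.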
Original language: English
Title of host publication: 32nd International Conference on Concurrency Theory (CONCUR 2021)
Editors: Serge Haddad, Daniele Varacca
Publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik
Number of pages: 15
ISBN (Electronic): 978-3-95977-203-7
Publication status: Published - 13 Aug 2021
Event: 32nd International Conference on Concurrency Theory - Online, Paris, France
Duration: 23 Aug 2021 - 27 Aug 2021

Publication series

Name: LIPIcs - Leibniz International Proceedings in Informatics
ISSN (Electronic): 1868-8969


Conference: 32nd International Conference on Concurrency Theory
Abbreviated title: CONCUR 2021

Keywords / Materials (for Non-textual outputs)

  • Markov decision processes
  • Parity
  • Transience

