Approximating Markov Processes by Averaging

Philippe Chaput, Vincent Danos, Prakash Panangaden, Gordon Plotkin

Research output: Contribution to journal › Article › peer-review

Abstract

Normally, one thinks of probabilistic transition systems as taking an initial probability distribution over the state space into a new probability distribution representing the system after a transition. We, however, take a dual view of Markov processes as transformers of bounded measurable functions. This is very much in the same spirit as a “predicate-transformer” view, which is dual to the state-transformer view of transition systems. We redevelop the theory of labelled Markov processes from this viewpoint; in particular, we explore approximation theory. We obtain three main results.
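To make the dual view concrete, here is a small finite-state sketch (an illustrative toy with an assumed transition matrix P, distribution mu and observable f, not taken from the paper, which works over general measure spaces): the same row-stochastic matrix acts forwards on distributions and backwards on bounded functions, and the two actions agree under integration.

    import numpy as np

    # Toy finite-state illustration (assumed example, not the paper's
    # measure-theoretic construction): the same row-stochastic matrix P acts
    # forwards on distributions and backwards on bounded functions.
    P = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.5, 0.3],
                  [0.0, 0.4, 0.6]])

    mu = np.array([1.0, 0.0, 0.0])   # initial distribution (state-transformer view)
    f  = np.array([0.0, 1.0, 2.0])   # a bounded "predicate"/observable

    mu_next = mu @ P                 # forward: distribution after one transition
    Pf      = P @ f                  # backward: pointwise one-step expectation of f

    # Duality: integrating f against the pushed-forward distribution equals
    # integrating the pulled-back function against the original distribution.
    assert np.isclose(mu_next @ f, mu @ Pf)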

(i) It is possible to define bisimulation on general measure spaces and show that it is an equivalence relation. The logical characterization of bisimulation can be done straightforwardly and generally.

(ii) A new and flexible approach to approximation based on averaging can be given. This vastly generalizes and streamlines the idea of using conditional expectations to compute approximations (see the sketch below the list of results).

(iii) We show that there is a minimal process bisimulation-equivalent to a given process, and this minimal process is obtained as the limit of the finite approximants.
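As a hypothetical finite-state sketch of approximation by averaging (the partition, reference distribution and helper names here are assumptions chosen for illustration; the paper's construction uses conditional expectation on general measure spaces): functions are replaced by their blockwise averages with respect to a reference distribution, and the Markov operator is approximated by averaging before and after each step.

    import numpy as np

    # Hypothetical sketch: approximate a Markov operator by averaging over the
    # blocks of a finite partition (a finite stand-in for conditional expectation).
    P      = np.array([[0.9, 0.1, 0.0],
                       [0.2, 0.5, 0.3],
                       [0.0, 0.4, 0.6]])       # Markov operator on functions: f -> P @ f
    mu     = np.array([0.5, 0.3, 0.2])         # reference distribution used for averaging
    blocks = [np.array([0]), np.array([1, 2])] # a two-block partition of the states

    def average(f):
        # Conditional expectation of f given the partition: on each block,
        # replace f by its mu-weighted average.
        g = np.empty_like(f, dtype=float)
        for b in blocks:
            g[b] = np.dot(mu[b], f[b]) / mu[b].sum()
        return g

    def approx_step(f):
        # Approximate Markov operator: average, take one step, average again,
        # so the result is constant on the blocks of the partition.
        return average(P @ average(f))

    f = np.array([0.0, 1.0, 2.0])
    print(approx_step(f))                      # block-constant approximation of P @ f

In this sketch, refining the partition (and letting the number of blocks grow) corresponds to taking finer approximants; the paper shows that such averaged approximants converge to a minimal process bisimulation-equivalent to the original.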
Original language: English
Article number: 5
Number of pages: 45
Journal: Journal of the ACM
Volume: 61
Issue number: 1
DOIs:
Publication status: Published - 1 Jan 2014

Keywords

  • Markov operators
  • Markov processes
  • approximation
  • duality
  • bisimulation
  • modal logic
