Approximating Markov Processes by Averaging

Philippe Chaput, Vincent Danos, Prakash Panangaden, Gordon Plotkin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We take a dual view of Markov processes, advocated by Kozen, as transformers of bounded measurable functions. We redevelop the theory of labelled Markov processes from this viewpoint; in particular, we explore approximation theory. We obtain three main results:

(i) It is possible to define bisimulation on general measure spaces and show that it is an equivalence relation. The logical characterization of bisimulation can be carried out straightforwardly and in full generality.
(ii) A new and flexible approach to approximation based on averaging can be given. This vastly generalizes and streamlines the idea of using conditional expectations to compute approximations.
(iii) It is possible to show that there is a minimal bisimulation-equivalent process, obtained as the limit of the finite approximants.
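The following is a minimal numerical sketch, not the paper's construction, of two ideas mentioned above for a finite-state chain: the dual view of a kernel as an operator on bounded functions, and approximation by averaging (conditional expectation) over a partition of the state space. The transition matrix, partition, reference measure, and the helper name `average` are all hypothetical choices made for illustration.

```python
import numpy as np

# A hypothetical transition kernel on 4 states; each row sums to 1.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.2, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.3, 0.7],
    [0.0, 0.0, 0.6, 0.4],
])

# Dual view: the kernel acts on a bounded function f by
# (P f)(x) = sum_y P(x, y) f(y), i.e. the expected value of f one step ahead.
f = np.array([1.0, 0.0, 2.0, -1.0])
Pf = P @ f

# A partition of the state space into two blocks, with a reference
# probability measure mu used to weight the averages (both assumed here).
blocks = [[0, 1], [2, 3]]
mu = np.array([0.25, 0.25, 0.25, 0.25])

def average(g):
    """Conditional expectation of g given the partition: on each block,
    replace g by its mu-weighted average, yielding a block-constant function."""
    out = np.empty_like(g)
    for block in blocks:
        w = mu[block] / mu[block].sum()
        out[block] = np.dot(w, g[block])
    return out

# One step of the averaged (approximate) dynamics on coarse functions:
# average the input, apply the kernel, then average the result again.
approx_Pf = average(P @ average(f))

print("P f        =", Pf)
print("approx P f =", approx_Pf)
```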
Original language: English
Title of host publication: Automata, Languages and Programming
Subtitle of host publication: 36th International Colloquium, ICALP 2009, Rhodes, Greece, July 5-12, 2009, Proceedings, Part II
Editors: Susanne Albers, Alberto Marchetti-Spaccamela, Yossi Matias, Sotiris Nikoletseas, Wolfgang Thomas
Publisher: Springer
Pages: 127-138
Number of pages: 12
Volume: 5556
ISBN (Electronic): 978-3-642-02930-1
ISBN (Print): 978-3-642-02929-5
DOIs
Publication status: Published - 2009

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer Berlin Heidelberg
