TY - JOUR

T1 - Approximating the termination value of one-counter MDPs and stochastic games

AU - Brázdil, Tomáš

AU - Brožek, Václav

AU - Etessami, Kousha

AU - Kučera, Antonín

PY - 2013

Y1 - 2013

AB - One-counter MDPs (OC-MDPs) and one-counter simple stochastic games (OC-SSGs) are, respectively, 1-player and 2-player turn-based zero-sum stochastic games played on the transition graph of classic one-counter automata (equivalently, pushdown automata with a 1-letter stack alphabet). A key objective for the analysis and verification of these games is the termination objective, where the players aim to maximize (respectively, minimize) the probability of hitting counter value 0, starting at a given control state and a given counter value. Recently, we studied qualitative decision problems ("is the optimal termination value equal to 1?") for OC-MDPs (and OC-SSGs) and showed them to be decidable in polynomial time (in NP ∩ coNP, respectively). However, quantitative decision and approximation problems ("is the optimal termination value at least p?", or "approximate the termination value within ε") are far more challenging. This is so in part because optimal strategies may not exist, and because even when they do exist they can have a highly non-trivial structure. It thus remained open even whether any of these quantitative termination problems are computable. In this paper we show that all quantitative approximation problems for the termination value of OC-MDPs and OC-SSGs are computable. Specifically, given an OC-SSG and ε > 0, we can compute a value v that approximates the value of the OC-SSG termination game within additive error ε, and furthermore we can compute ε-optimal strategies for both players in the game. A key ingredient in our proofs is a subtle martingale, derived from solving certain linear programs that we can associate with a maximizing OC-MDP. An application of Azuma's inequality to these martingales yields a computable bound for the "wealth" at which a "rich person's strategy" becomes ε-optimal for OC-MDPs.

DO - 10.1016/j.ic.2012.01.008

M3 - Article

VL - 222

SP - 121

EP - 138

JO - Information and Computation

JF - Information and Computation

SN - 0890-5401

ER -