Importance Sampling Policy Evaluation with an Estimated Behavior Policy

Josiah Hanna, Scott Niekum, Peter Stone

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We consider the problem of off-policy evaluation in Markov decision processes. Off-policy evaluation is the task of evaluating the expected return of one policy with data generated by a different, behavior policy. Importance sampling is a technique for off-policy evaluation that re-weights off-policy returns to account for differences in the likelihood of the returns between the two policies. In this paper, we study importance sampling with an estimated behavior policy where the behavior policy estimate comes from the same set of data used to compute the importance sampling estimate. We find that this estimator often lowers the mean squared error of off-policy evaluation compared to importance sampling with the true behavior policy or using a behavior policy that is estimated from a separate data set. Intuitively, estimating the behavior policy in this way corrects for error due to sampling in the action-space. Our empirical results also extend to other popular variants of importance sampling and show that estimating a non-Markovian behavior policy can further lower large-sample mean squared error even when the true behavior policy is Markovian.
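The estimator studied in the abstract can be illustrated in a toy single-state (bandit) setting. The sketch below is purely illustrative and is not the paper's estimator or experimental setup: the policies, rewards, and sample size are assumed values. It compares ordinary importance sampling using the true behavior policy against importance sampling that re-weights by a behavior policy estimated (as empirical action frequencies) from the same data, which corrects for sampling error in the action space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-state MDP (bandit) with two actions.
true_behavior = np.array([0.7, 0.3])   # pi_b(a): true behavior policy
target = np.array([0.2, 0.8])          # pi_e(a): evaluation policy
rewards = np.array([1.0, 2.0])         # deterministic reward per action

n = 5000
a = rng.choice(2, size=n, p=true_behavior)  # actions sampled from pi_b
r = rewards[a]

# Ordinary importance sampling with the true behavior policy.
ois = np.mean(target[a] / true_behavior[a] * r)

# Importance sampling with a behavior policy estimated from the SAME data
# (here: empirical action frequencies).
pi_hat = np.bincount(a, minlength=2) / n
est_is = np.mean(target[a] / pi_hat[a] * r)

# True value of the evaluation policy: 0.2 * 1.0 + 0.8 * 2.0 = 1.8.
true_value = float(np.dot(target, rewards))
```

In this deterministic-reward bandit, re-weighting by the empirical frequencies cancels the action-sampling error exactly, so `est_is` recovers the true value while `ois` retains sampling variance; in general MDPs the correction is only partial, but the intuition is the same.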
Original language: English
Title of host publication: Proceedings of the 36th International Conference on Machine Learning
Editors: Kamalika Chaudhuri, Ruslan Salakhutdinov
Place of Publication: Long Beach, California, USA
Publisher: PMLR
Pages: 2605-2613
Number of pages: 9
Publication status: Published - 30 Sep 2019
Event: Thirty-sixth International Conference on Machine Learning - Long Beach Convention Center, Long Beach, United States
Duration: 9 Jun 2019 - 15 Jun 2019
Conference number: 36
https://icml.cc/Conferences/2019

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
Volume: 97
ISSN (Electronic): 2640-3498

Conference

Conference: Thirty-sixth International Conference on Machine Learning
Abbreviated title: ICML 2019
Country/Territory: United States
City: Long Beach
Period: 9/06/19 - 15/06/19
Internet address: https://icml.cc/Conferences/2019

