Data-Efficient Policy Evaluation Through Behavior Policy Search

Josiah P. Hanna, Philip S. Thomas, Peter Stone, Scott Niekum

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We consider the task of evaluating a policy for a Markov decision process (MDP). The standard unbiased technique for evaluating a policy is to deploy the policy and observe its performance. We show that the data collected from deploying a different policy, commonly called the behavior policy, can be used to produce unbiased estimates with lower mean squared error than this standard technique. We derive an analytic expression for the optimal behavior policy — the behavior policy that minimizes the mean squared error of the resulting estimates. Because this expression depends on terms that are unknown in practice, we propose a novel policy evaluation sub-problem, behavior policy search: searching for a behavior policy that reduces mean squared error. We present a behavior policy search algorithm and empirically demonstrate its effectiveness in lowering the mean squared error of policy performance estimates.
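
As a rough illustration of the idea in the abstract (not the authors' code), the sketch below applies it to a one-step MDP, i.e., a bandit with known rewards: an importance-sampling estimate of the evaluation policy's value stays unbiased under any full-support behavior policy, and a REINFORCE-style gradient step on the estimator's second moment searches for a behavior policy with lower variance, hence lower mean squared error. The softmax parameterization, reward values, and step size are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

n_actions = 3
rewards = np.array([1.0, 0.5, 0.1])   # assumed known, nonnegative rewards
pi_e = np.array([0.6, 0.3, 0.1])      # evaluation policy (fixed)

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def is_returns(theta, n=1000):
    """Per-sample importance-sampled returns: unbiased for E_{pi_e}[R]
    regardless of which behavior policy softmax(theta) drew the actions."""
    p = softmax(theta)
    a = rng.choice(n_actions, size=n, p=p)
    return (pi_e[a] / p[a]) * rewards[a]

# Behavior policy search: the estimator is unbiased for every full-support
# behavior policy, so MSE equals variance, and descending an estimate of
# grad E[X^2] (X = importance-weighted return) lowers the MSE.
# For a softmax, grad log pi_b(a) = onehot(a) - pi_b.
theta = np.zeros(n_actions)           # start at the uniform behavior policy
for _ in range(1000):
    p = softmax(theta)
    a = rng.choice(n_actions, size=200, p=p)
    x = (pi_e[a] / p[a]) * rewards[a]
    grad_logp = np.eye(n_actions)[a] - p
    grad_second_moment = (-(x ** 2)[:, None] * grad_logp).mean(axis=0)
    theta -= 0.1 * grad_second_moment

print("true value under pi_e:    ", pi_e @ rewards)
print("on-policy sample variance:", is_returns(np.log(pi_e)).var())
print("searched-policy variance: ", is_returns(theta).var())

In this bandit the minimum-variance behavior policy is known in closed form, proportional to pi_e(a) * R(a), so the search should drive the sample variance well below the on-policy value, mirroring the abstract's claim that data from a different policy can beat on-policy Monte Carlo evaluation.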
Original language: English
Title of host publication: Proceedings of the 34th International Conference on Machine Learning
Editors: Doina Precup, Yee Whye Teh
Place of Publication: International Convention Centre, Sydney, Australia
Publisher: PMLR
Pages: 1394-1403
Number of pages: 10
Publication status: Published - 11 Aug 2017
Event: International Conference on Machine Learning (ICML) - Sydney, Australia
Duration: 6 Aug 2017 - 11 Aug 2017

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
Volume: 70
ISSN (Electronic): 2640-3498

Conference

Conference: International Conference on Machine Learning (ICML)
Country/Territory: Australia
City: Sydney
Period: 6/08/17 - 11/08/17
