Attainability of Boundary Points under Reinforcement Learning

Ed Hopkins, Martin Posch

Research output: Working paper (Discussion paper)

Abstract

This paper investigates the properties of the most common form of reinforcement learning (the "basic model" of Erev and Roth, American Economic Review, 88, 848-881, 1998). Stochastic approximation theory has been used to analyse the local stability of fixed points under this learning process. However, as we show, when such points are on the boundary of the state space, for example, pure-strategy equilibria, standard results from the theory of stochastic approximation do not apply. We offer what we believe to be the correct treatment of boundary points, and provide a new and more general result: this model of learning converges with probability zero to fixed points that are unstable under the Maynard Smith, or adjusted, version of the evolutionary replicator dynamics. For two-player games these are the fixed points that are linearly unstable under the standard replicator dynamics.
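As background (not part of the paper's abstract), the following is a minimal sketch of the "basic model" and of the two replicator dynamics mentioned above; the notation (propensities q, choice probabilities x, payoff matrix A, realised payoff \pi) is chosen here for illustration and is not taken from the paper. Each player i maintains a propensity q_{ik}(t) > 0 for every pure strategy k and plays k with probability

x_{ik}(t) = \frac{q_{ik}(t)}{\sum_j q_{ij}(t)},

and, on receiving a positive payoff \pi_i(t) for the strategy actually played, reinforces only that propensity,

q_{ik}(t+1) = q_{ik}(t) + \pi_i(t), \qquad q_{ij}(t+1) = q_{ij}(t) \ \text{for } j \neq k.

For a payoff matrix A and population state x, the standard replicator dynamics are

\dot{x}_k = x_k \left[ (Ax)_k - x^{\top} A x \right],

while the Maynard Smith (adjusted) version rescales by the average payoff (assumed positive),

\dot{x}_k = \frac{x_k \left[ (Ax)_k - x^{\top} A x \right]}{x^{\top} A x}.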
Original language: English
Publisher: Edinburgh School of Economics, University of Edinburgh
Publication status: Published - Mar 2004

Publication series

Name: ESE Discussion Papers

