Attainability of Boundary Points under Reinforcement Learning

Ed Hopkins, Martin Posch

Research output: Working paper › Discussion paper

Abstract / Description of output

This paper investigates the properties of the most common form of reinforcement learning (the "basic model" of Erev and Roth, American Economic Review, 88, 848-881, 1998). Stochastic approximation theory has been used to analyse the local stability of fixed points under this learning process. However, as we show, when such points lie on the boundary of the state space (for example, pure-strategy equilibria), standard results from the theory of stochastic approximation do not apply. We offer what we believe to be the correct treatment of boundary points, and provide a new and more general result: this model of learning converges with probability zero to fixed points that are unstable under the Maynard Smith, or adjusted, version of the evolutionary replicator dynamics. For two-player games these are the fixed points that are linearly unstable under the standard replicator dynamics.

(This abstract was borrowed from another version of this item.)
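In the Erev-Roth basic model described above, each player keeps a propensity for every action, chooses actions with probabilities proportional to those propensities, and adds the realised payoff to the propensity of the action actually played. A minimal simulation sketch of that update rule follows; the function name, parameters, and game used are illustrative assumptions, not taken from the paper.

```python
import random

def erev_roth_basic(payoffs, n_rounds=1000, q0=1.0, seed=0):
    """Sketch of the Erev-Roth 'basic' reinforcement model for a
    two-player game (illustrative implementation, not the paper's code).

    payoffs[i][a][b] = payoff to player i when player 0 plays a and
    player 1 plays b; payoffs are assumed non-negative."""
    rng = random.Random(seed)
    n_actions = [len(payoffs[0]), len(payoffs[0][0])]
    # Initial propensities; choice probabilities are proportional to these.
    q = [[q0] * n_actions[i] for i in range(2)]

    def choose(props):
        # Sample an action index with probability proportional to propensity.
        r = rng.random() * sum(props)
        for a, p in enumerate(props):
            r -= p
            if r < 0:
                return a
        return len(props) - 1

    for _ in range(n_rounds):
        a = choose(q[0])
        b = choose(q[1])
        # Reinforce only the action actually played, by its realised payoff.
        q[0][a] += payoffs[0][a][b]
        q[1][b] += payoffs[1][a][b]

    # Mixed strategies implied by the final propensities.
    return [[p / sum(qi) for p in qi] for qi in q]

# Example: a 2x2 coordination game, where both pure equilibria are
# boundary points of the state space.
coord = [[[1, 0], [0, 1]], [[1, 0], [0, 1]]]
strategies = erev_roth_basic(coord, n_rounds=2000)
```

Run on a game like this, the process typically concentrates on one pure-strategy profile; the paper's result concerns which such boundary points the process can converge to with positive probability.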

Original language: English
Publisher: David K. Levine
Number of pages: 17
Publication status: Published - Jul 2003

Publication series

Name: Levine's Working Paper Archive

Keywords / Materials (for Non-textual outputs)

  • learning in games
  • reinforcement learning
  • stochastic approximation
  • replicator dynamics

