Two Competing Models of How People Learn in Games

Research output: Working paper › Discussion paper

Abstract / Description of output

Reinforcement learning and stochastic fictitious play are apparent rivals as models of human learning. They embody quite different assumptions about the processing of information and optimisation. This paper compares their properties and finds that they are far more similar than previously thought. In particular, the expected motion of both stochastic fictitious play and reinforcement learning with experimentation can be written as a perturbed form of the evolutionary replicator dynamics, so in many cases the two models have the same asymptotic behaviour. Notably, local stability of mixed equilibria under stochastic fictitious play implies local stability under perturbed reinforcement learning. The main identifiable difference between the two models is speed: stochastic fictitious play gives rise to faster learning.
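
To make the contrast between the two models concrete, here is a minimal simulation sketch. It assumes a simple symmetric 2x2 coordination game played in self-play; the Erev-Roth-style propensity update, the logit sensitivity `lam`, and the experimentation rate `eps` are illustrative choices, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 coordination game (row player's payoffs);
# not a game taken from the paper.
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def softmax(u, lam):
    """Logit (perturbed) best response with sensitivity lam."""
    z = np.exp(lam * (u - u.max()))
    return z / z.sum()

T = 5000
lam = 5.0   # assumed choice sensitivity for stochastic fictitious play
eps = 0.05  # assumed experimentation rate for reinforcement learning

# --- Reinforcement learning with experimentation ---
Q = np.ones(2)           # action propensities, reinforced by realised payoffs
rl_freq = np.zeros(2)
for t in range(T):
    p = (1 - eps) * Q / Q.sum() + eps / 2  # mix in uniform experimentation
    a = rng.choice(2, p=p)
    b = rng.choice(2, p=p)                 # symmetric opponent in self-play
    Q[a] += A[a, b]                        # reinforce the realised payoff
    rl_freq[a] += 1

# --- Stochastic fictitious play ---
beliefs = np.ones(2)     # counts of the opponent's past actions
sfp_freq = np.zeros(2)
for t in range(T):
    u = A @ (beliefs / beliefs.sum())      # expected payoff of each action
    p = softmax(u, lam)                    # smoothed best response to beliefs
    a = rng.choice(2, p=p)
    b = rng.choice(2, p=p)
    beliefs[b] += 1                        # update beliefs from observed play
    sfp_freq[a] += 1

print("RL  action frequencies:", rl_freq / T)
print("SFP action frequencies:", sfp_freq / T)
```

Tracking the empirical action frequencies over time in a sketch like this illustrates the paper's speed claim: belief-based smoothed best responses typically lock onto an equilibrium in fewer rounds than payoff reinforcement does.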
Original language: English
Publisher: Edinburgh School of Economics, University of Edinburgh
Pages: 1-38
Number of pages: 38
Publication status: Published - Oct 1999

Publication series

Name: ESE Discussion Papers
