Two Competing Models of How People Learn in Games

Research output: Contribution to journal › Article › peer-review

Abstract

Reinforcement learning and stochastic fictitious play are apparent rivals as models of human learning. They embody quite different assumptions about the processing of information and about optimization. This paper compares their properties and finds that they are far more similar than previously thought. In particular, the expected motion of both stochastic fictitious play and reinforcement learning with experimentation can be written as a perturbed form of the evolutionary replicator dynamics. Therefore, in many cases they have the same asymptotic behavior; notably, local stability of mixed equilibria under stochastic fictitious play implies local stability under perturbed reinforcement learning. The main identifiable difference between the two models is speed: stochastic fictitious play gives rise to faster learning.
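
For reference, the replicator dynamic the abstract invokes takes, for a symmetric game with payoff matrix $A$ and mixed strategy $x$, the standard form sketched below. The perturbation term $\varepsilon\,\xi_i(x)$ is only a schematic placeholder: the paper derives its exact form from the experimentation or noise structure of each learning model, and that expression is not reproduced here.

\[
\dot{x}_i \;=\; \underbrace{x_i\left[(Ax)_i - x^{\top} A x\right]}_{\text{replicator dynamic}} \;+\; \varepsilon\,\xi_i(x), \qquad i = 1, \dots, n,
\]

where the first term rewards strategy $i$ in proportion to its payoff advantage over the population average, and $\varepsilon > 0$ scales the perturbation. The abstract's stability claim concerns the behavior of this perturbed system near mixed equilibria.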
Original language: English
Pages (from-to): 2141-2166
Number of pages: 26
Journal: Econometrica
Volume: 70
Issue number: 6
DOIs
Publication status: Published - Nov 2002

Keywords

  • games
  • reinforcement learning
  • fictitious play
