Adding population structure to models of language evolution by iterated learning

Andrew Whalen, Thomas L. Griffiths

Research output: Contribution to journal › Article › peer-review


Previous work on iterated learning, a standard language learning paradigm in which each learner in a sequence learns a language from the previous learner, has found that if learners use a form of Bayesian inference, then the distribution of languages in a population will come to reflect the prior distribution assumed by the learners (Griffiths & Kalish, 2007). We extend these results to more complex population structures, and demonstrate that for learners on undirected graphs the distribution of languages will also reflect the prior distribution. We then use techniques borrowed from statistical physics to obtain deeper insight into language evolution, finding that although population structure does not influence the probability that an individual speaks a given language, it does influence how likely neighbors are to speak the same language. These analyses lift a restrictive assumption of iterated learning, and suggest that experimental and mathematical findings using iterated learning may apply to a wider range of settings.
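The convergence result summarized above can be illustrated with a minimal simulation. This is a hypothetical sketch, not the paper's actual model: two candidate languages, a shared prior, noisy production, and a learner who applies Bayes' rule to one utterance and samples a language from the posterior. The chain of learners forms a Markov chain whose stationary distribution is the prior (Griffiths & Kalish, 2007), so the long-run fraction of generations speaking each language should approach the prior probabilities.

```python
import random

random.seed(0)

# Hypothetical two-language setup (names and parameters are illustrative).
PRIOR = {"A": 0.7, "B": 0.3}  # learners' shared prior over languages
NOISE = 0.1                   # probability an utterance mismatches the speaker's language

def produce(language):
    """Emit one utterance; with probability NOISE it resembles the other language."""
    other = "B" if language == "A" else "A"
    return other if random.random() < NOISE else language

def learn(utterance):
    """Bayesian learner: compute the posterior over languages given one
    utterance, then sample a language from that posterior."""
    posterior = {}
    for lang in PRIOR:
        likelihood = (1 - NOISE) if lang == utterance else NOISE
        posterior[lang] = PRIOR[lang] * likelihood
    total = sum(posterior.values())
    r = random.random() * total
    acc = 0.0
    for lang, weight in posterior.items():
        acc += weight
        if r < acc:
            return lang
    return lang  # fallback for floating-point edge cases

def iterated_learning(generations):
    """Run a chain of learners; return the fraction of generations speaking 'A'."""
    language = "A"
    count_a = 0
    for _ in range(generations):
        language = learn(produce(language))
        count_a += (language == "A")
    return count_a / generations

frac_a = iterated_learning(200_000)
# The stationary distribution of this chain is the prior, so frac_a
# should be close to PRIOR["A"] = 0.7.
```

The paper's contribution is to show that the same stationary distribution holds when the single chain is replaced by learners on an undirected graph, while spatial correlations between neighbors' languages do depend on the graph structure.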
Original language: English
Pages (from-to): 1-6
Journal: Journal of Mathematical Psychology
Early online date: 5 Dec 2016
Publication status: Published - 1 Feb 2017

