Edinburgh Research Explorer

Adding population structure to models of language evolution by iterated learning

Research output: Contribution to journal › Article

Open Access permissions: Open

Documents: Accepted author manuscript, 1.13 MB, PDF document
Licence: Creative Commons: Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND)

Original language: English
Pages (from-to): 1-6
Journal: Journal of Mathematical Psychology
Volume: 76
Early online date: 5 Dec 2016
Publication status: Published - 1 Feb 2017

Abstract

Previous work on iterated learning, a standard language learning paradigm in which each learner in a sequence learns a language from the previous learner, has found that if learners use a form of Bayesian inference, then the distribution of languages in a population will come to reflect the prior distribution assumed by the learners (Griffiths and Kalish 2007). We expand these results to allow for more complex population structures, and demonstrate that for learners on undirected graphs the distribution of languages will also reflect the prior distribution. We then use techniques borrowed from statistical physics to obtain deeper insight into language evolution, finding that although population structure will not influence the probability that an individual speaks a given language, it will influence how likely neighbors are to speak the same language. These analyses lift a restrictive assumption of iterated learning, and suggest that experimental and mathematical findings using iterated learning may apply to a wider range of settings.
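To give a concrete feel for the single-chain result the abstract builds on (Griffiths and Kalish 2007), the sketch below simulates iterated learning with posterior-sampling Bayesian learners in a toy two-language setting. Every specific here (the prior, the production probabilities, the number of utterances passed between generations) is an illustrative assumption, not taken from the paper; the point is only that the long-run distribution of languages along the chain approaches the learners' shared prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration, not from the paper):
# two languages, each a distribution over a binary utterance.
N_LANGUAGES = 2
PRIOR = np.array([0.7, 0.3])      # learners' shared prior over languages
PRODUCTION = np.array([0.2, 0.8]) # p(utterance = 1 | language)
N_UTTERANCES = 5                  # data passed from one generation to the next
N_GENERATIONS = 50_000

def produce(language):
    """Speaker generates a batch of utterances from their language."""
    return rng.random(N_UTTERANCES) < PRODUCTION[language]

def learn(data):
    """Learner samples a language from the posterior p(h | d) ~ p(d | h) p(h)."""
    k = data.sum()  # number of "1" utterances observed
    likelihood = PRODUCTION ** k * (1 - PRODUCTION) ** (N_UTTERANCES - k)
    posterior = likelihood * PRIOR
    posterior /= posterior.sum()
    return rng.choice(N_LANGUAGES, p=posterior)

# Run the iterated-learning chain and record which language each generation speaks.
language = 0
counts = np.zeros(N_LANGUAGES)
for _ in range(N_GENERATIONS):
    language = learn(produce(language))
    counts[language] += 1

print("prior:     ", PRIOR)
print("empirical: ", counts / counts.sum())  # approaches the prior as the chain runs
```

The paper's contribution concerns what changes when this single chain is replaced by learners arranged on an undirected graph: each individual's marginal distribution over languages still reflects the prior, but, as the abstract notes, the population structure affects how likely neighbors are to speak the same language.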
