The cognitive roots of regularization in language

Vanessa Ferdinand, Simon Kirby, Kenny Smith

Research output: Contribution to journal › Article › peer-review

Abstract

Regularization occurs when the output a learner produces is less variable than the linguistic data they observed. In an artificial language learning experiment, we show that there exist at least two independent sources of regularization bias in cognition: a domain-general source based on cognitive load and a domain-specific source triggered by linguistic stimuli. Both of these factors modulate how frequency information is encoded and produced, but only the production-side modulations result in regularization (i.e., cause learners to eliminate variation from the observed input). We formalize the definition of regularization as the reduction of entropy and find that entropy measures are better at identifying regularization behavior than frequency-based analyses. Using our experimental data and a model of cultural transmission, we generate predictions for the amount of regularity that would develop in each experimental condition if the artificial language were transmitted over several generations of learners. Here we find that the effect of cognitive constraints can become more complex when put into the context of cultural evolution: although learning biases certainly carry information about the course of language evolution, we should not expect a one-to-one correspondence between the micro-level processes that regularize linguistic data sets and the macro-level evolution of linguistic regularity.
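
The abstract's entropy-based definition of regularization can be illustrated with a short sketch. The code below is not the authors' implementation: the variant counts, the probability-matching production rule, and the bias parameter are hypothetical, chosen only to show how entropy reduction between observed input and produced output can be computed, and how such a measure behaves in a minimal transmission chain.

    # Illustrative sketch (not the authors' code): regularization measured as
    # entropy reduction between a learner's observed input and produced output.
    # All counts and parameters below are hypothetical.
    import math
    import random

    def entropy(counts):
        """Shannon entropy (bits) of a frequency distribution given as counts."""
        total = sum(counts)
        probs = [c / total for c in counts if c > 0]
        return -sum(p * math.log2(p) for p in probs)

    def regularization(input_counts, output_counts):
        """Positive when the output is less variable (lower entropy) than the input."""
        return entropy(input_counts) - entropy(output_counts)

    # Hypothetical example: a learner observes two variants 6:4 and produces them 9:1.
    print(regularization([6, 4], [9, 1]))  # > 0, so this learner regularized

    # A minimal transmission chain: each generation's output becomes the next
    # generation's input. Production here is probability matching plus a small
    # regularizing bias toward the majority variant (purely illustrative).
    def produce(input_counts, n_items=10, bias=0.1):
        total = sum(input_counts)
        probs = [c / total for c in input_counts]
        top = probs.index(max(probs))
        probs = [min(1.0, p + bias) if i == top else p for i, p in enumerate(probs)]
        s = sum(probs)
        probs = [p / s for p in probs]
        out = [0] * len(input_counts)
        for _ in range(n_items):
            out[random.choices(range(len(probs)), weights=probs)[0]] += 1
        return out

    counts = [6, 4]
    for generation in range(5):
        counts = produce(counts)
        print(generation, counts, round(entropy(counts), 3))

Because the regularizing bias is applied at every generation, the entropy of the transmitted distribution tends to fall over time in this toy chain, which is the kind of micro-to-macro amplification the abstract cautions need not map one-to-one onto individual learning biases.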
Original language: English
Pages (from-to): 53-68
Journal: Cognition
Volume: 184
Early online date: 18 Dec 2018
DOIs
Publication status: Published - Mar 2019

Keywords

  • regularisation
  • frequency learning
  • domain generality
  • domain specificity
  • language evolution
