Infinite use of finite means? Evaluating the generalization of center embedding learned from an artificial grammar

R. Thomas McCoy, Jennifer Culbertson, Paul Smolensky, Géraldine Legendre

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Human language is often assumed to make “infinite use of finite means”—that is, to generate an infinite number of possible utterances from a finite number of building blocks. From an acquisition perspective, this assumed property of language is interesting because learners must acquire their languages from a finite number of examples. To acquire an infinite language, learners must therefore generalize beyond the finite bounds of the linguistic data they have observed. In this work, we use an artificial language learning experiment to investigate whether people generalize in this way. We train participants on sequences from a simple grammar featuring center embedding, where the training sequences have at most two levels of embedding, and then evaluate whether participants accept sequences of a greater depth of embedding. We find that, when participants learn the pattern for sequences of the sizes they have observed, they also extrapolate it to sequences with a greater depth of embedding. These results support the hypothesis that the learning biases of humans favor languages with an infinite generative capacity.
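For readers unfamiliar with the construction, the sketch below illustrates what "depth of center embedding" means: each additional level wraps the string in a new matched pair, yielding nested dependencies of the form a1 a2 ... an bn ... b2 b1. This is a minimal, hypothetical illustration only; the symbol pairs and generation procedure are assumptions, not the actual materials or grammar used in the study.

```python
# Minimal sketch of center embedding (illustrative only, not the study's grammar).
# Each level adds one matched pair around the middle of the sequence.
PAIRS = [("a1", "b1"), ("a2", "b2"), ("a3", "b3")]  # hypothetical symbol pairs

def center_embed(depth: int) -> list[str]:
    """Build a center-embedded sequence with the given number of nested pairs."""
    sequence: list[str] = []
    for level in range(depth):
        left, right = PAIRS[level % len(PAIRS)]
        mid = len(sequence) // 2
        # Insert the new pair at the center, so dependencies nest inside one another.
        sequence = sequence[:mid] + [left, right] + sequence[mid:]
    return sequence

if __name__ == "__main__":
    # Training in the experiment used at most two levels; depth 3 would probe extrapolation.
    for d in (1, 2, 3):
        print(d, " ".join(center_embed(d)))
```

Running this prints depth-1 through depth-3 strings (e.g., "a1 a2 b2 b1" at depth 2), making concrete the distinction between observed depths and the deeper, extrapolated depths tested on participants.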
Original language: English
Title of host publication: Proceedings of the 43rd Annual Meeting of the Cognitive Science Society
Subtitle of host publication: Comparative Cognition: Animal Minds, CogSci 2021
Publisher: The Cognitive Science Society
Pages: 2225-2231
Number of pages: 7
Volume: 43
Publication status: Published - 2021
Event: 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021 - Virtual, Online, Austria
Duration: 26 Jul 2021 - 29 Jul 2021

Publication series

Name: Proceedings of the Annual Meeting of the Cognitive Science Society
Publisher: The Cognitive Science Society
Volume: 43
ISSN (Electronic): 1069-7977

Conference

Conference: 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021
Country/Territory: Austria
City: Virtual, Online
Period: 26/07/21 - 29/07/21

Keywords

  • artificial language learning
  • center embedding
  • extrapolation
  • inductive biases
  • language acquisition

