TY - GEN
T1 - Infinite use of finite means?
T2 - 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021
AU - McCoy, R. Thomas
AU - Culbertson, Jennifer
AU - Smolensky, Paul
AU - Legendre, Géraldine
N1 - Funding Information: For helpful comments, we are grateful to Grusha Prasad, Najoung Kim, Tal Linzen, Robert Frank, the JHU Neurosymbolic Computation Lab, and the NYU CAP Lab. This research was supported by NSF GRFP No. 1746891.
PY - 2021
Y1 - 2021
AB - Human language is often assumed to make “infinite use of finite means”—that is, to generate an infinite number of possible utterances from a finite number of building blocks. From an acquisition perspective, this assumed property of language is interesting because learners must acquire their languages from a finite number of examples. To acquire an infinite language, learners must therefore generalize beyond the finite bounds of the linguistic data they have observed. In this work, we use an artificial language learning experiment to investigate whether people generalize in this way. We train participants on sequences from a simple grammar featuring center embedding, where the training sequences have at most two levels of embedding, and then evaluate whether participants accept sequences of a greater depth of embedding. We find that, when participants learn the pattern for sequences of the sizes they have observed, they also extrapolate it to sequences with a greater depth of embedding. These results support the hypothesis that the learning biases of humans favor languages with an infinite generative capacity.
KW - artificial language learning
KW - center embedding
KW - extrapolation
KW - inductive biases
KW - language acquisition
M3 - Conference contribution
AN - SCOPUS:85139425006
VL - 43
T3 - Proceedings of the Annual Meeting of the Cognitive Science Society
SP - 2225
EP - 2231
BT - Proceedings of the 43rd Annual Meeting of the Cognitive Science Society
PB - The Cognitive Science Society
Y2 - 26 July 2021 through 29 July 2021
ER -