Abstract
Artificial language learning methods, in which learners are taught miniature constructed languages in a controlled laboratory setting, have become a valuable experimental tool for research on language development. These methods offer a complement to natural language acquisition data, allowing both the input to learning and the learning environment to be carefully controlled. To date, a large proportion of artificial language learning studies have aimed to understand the mechanisms of learning in infants (often using adult learners as a proxy). This review focuses instead on investigations into the nature of early linguistic representations and how they are influenced both by the structure of the input and by cognitive features of the learner. We summarize significant findings from a wide range of artificial language paradigms, looking not only at young infants but also at children beyond infancy. We discuss evidence for early abstraction, conditions on generalization, the acquisition of grammatical categories and dependencies, and recent work connecting the cognitive biases of learners to language typology. We end by outlining what we see as the most important directions for future research in this field.
| Original language | English |
| --- | --- |
| Pages (from-to) | 353-373 |
| Journal | Annual Review of Linguistics |
| Volume | 5 |
| DOIs | |
| Publication status | Published - 2019 |
Keywords
- language acquisition
- artificial language learning
- generalization
- linguistic typology