Abstract
The ability to discover groupings in continuous stimuli on the basis of distributional information is present across species and across perceptual modalities. We investigate the nature of the computations underlying this ability using statistical word segmentation experiments in which we vary the length of sentences, the amount of exposure, and the number of words in the languages being learned. Although the results are intuitive from the perspective of a language learner (longer sentences, less training, and a larger language all make learning more difficult), standard computational proposals fail to capture several of these results. We describe how probabilistic models of segmentation can be modified to take into account some notion of memory or resource limitations in order to provide a closer match to human performance.
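The statistical segmentation paradigm the abstract refers to is commonly illustrated with transitional probabilities between adjacent syllables: within-word transitions tend to be high-probability, while word boundaries coincide with dips. A minimal sketch of that idea (an illustration of the general paradigm, not the authors' models; the syllable stream and threshold are invented for the example):

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probability P(next | current) for each adjacent pair."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(syllables, threshold=0.5):
    """Posit a word boundary wherever transitional probability dips below threshold."""
    tps = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy continuous stream built from two hypothetical "words": ba-bu and go-la.
A, B = ["ba", "bu"], ["go", "la"]
stream = A + B + B + A + A + B + A + B
print(segment(stream, threshold=0.8))  # recovers only 'babu' and 'gola' chunks
```

Because within-word transitions here occur with probability 1.0 while between-word transitions are lower, a learner tracking only these statistics can recover the words without any pauses in the stream; the paper's point is that such idealized models must be augmented with memory or resource limits to match how sentence length, exposure, and vocabulary size affect human learners.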
| Original language | English |
| --- | --- |
| Pages (from-to) | 107-125 |
| Number of pages | 19 |
| Journal | Cognition |
| Volume | 117 |
| Issue number | 2 |
| Publication status | Published - Nov 2010 |
Keywords
- Statistical learning
- Word segmentation
- Computational modelling
- Language acquisition