We report a study using the “visual-world” paradigm that investigated (1) the time-course of phonological prediction in English by native (L1) and non-native (L2) speakers whose native language was Japanese, and (2) whether the Japanese participants predicted phonological form in Japanese. Participants heard sentences containing a highly predictable word (e.g., cloud, following The tourists expected rain when the sun went behind the …) and viewed an array of objects comprising a target object corresponding to the predictable word [cloud; Japanese: kumo], an English competitor object whose English name was phonologically related to the predictable word [clown; piero], a Japanese competitor object whose Japanese name was phonologically related to the Japanese translation of the predictable word [bear; kuma], or an object unrelated to the predictable word [globe; tikyuugi]. Both L1 and L2 speakers looked predictively at the target object, but L2 speakers did so more slowly than L1 speakers. L1 speakers looked predictively at the English competitor object, but L2 speakers did not. Neither group looked at the Japanese competitor object more than at the unrelated object. Thus, people can predict phonological information in their native language but may not do so in a non-native language.
Investigating the time-course of phonological prediction in native and non-native speakers of English: A visual world eye-tracking study