A Bottom-Up Parsing Model of Local Coherence Effects

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Human sentence processing occurs incrementally. Most models of human processing rely on parsers that always build connected tree structures. But according to the theory of Good Enough parsing (Ferreira & Patson, 2007), humans parse sentences using small chunks of local information, not always forming a globally coherent parse. This difference is apparent in the study of local coherence effects (Tabor, Galantucci, & Richardson, 2004), wherein a locally plausible interpretation interferes with the correct global interpretation of a sentence. We present a model that accounts for these effects using a wide-coverage parser that captures the idea of Good Enough parsing. Using Combinatory Categorial Grammar, our parser works bottom-up, enforcing the use of local information only. We model the difficulty of processing a sentence in terms of the probability of a locally coherent reading relative to the probability of the globally coherent reading of the sentence. Our model successfully predicts psycholinguistic results.
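The difficulty measure described above, a locally coherent reading's probability relative to that of the globally coherent reading, can be sketched as follows. This is a hypothetical illustration under assumed names (`p_local`, `p_global`, `local_coherence_difficulty`), not the authors' exact formulation, which is specified in the paper itself.

```python
import math

def local_coherence_difficulty(p_local, p_global):
    """Illustrative difficulty score: the (log) probability of a locally
    coherent reading relative to the globally coherent reading.
    A ratio near 1 (log near 0) means the local misparse competes
    strongly with the global parse, predicting interference."""
    if p_local <= 0 or p_global <= 0:
        raise ValueError("probabilities must be positive")
    return math.log(p_local / p_global)

# Example: a local reading almost as probable as the global one
# yields a score near 0 (strong interference); an improbable local
# reading yields a large negative score (little interference).
strong = local_coherence_difficulty(0.04, 0.05)
weak = local_coherence_difficulty(0.0001, 0.05)
```

Working in log space keeps the measure stable for the very small probabilities that wide-coverage parsers typically assign to full readings.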
Original language: English
Title of host publication: Proceedings of the 32nd Annual Conference of the Cognitive Science Society
Number of pages: 6
Publication status: Published - 2010


