Learning first-pass structural attachment preferences with dynamic grammars and recursive neural networks

Patrick Sturt, Fabrizio Costa, Vincenzo Lombardo, Paolo Frasconi

Research output: Contribution to journal › Article › peer-review

Abstract

One of the central problems in the study of human language processing is ambiguity resolution: how do people resolve the extremely pervasive ambiguity of the language they encounter? One possible answer to this question is suggested by experience-based models, which claim that people typically resolve ambiguities in a way which has been successful in the past. In order to determine the course of action that has been "successful in the past" when faced with some ambiguity, it is necessary to generalize over past experience. In this paper, we will present a computational experience-based model, which learns to generalize over linguistic experience from exposure to syntactic structures in a corpus. The model is a hybrid system, which uses symbolic grammars to build and represent syntactic structures, and neural networks to rank these structures on the basis of its experience. We use a dynamic grammar, which provides a very tight correspondence between grammatical derivations and incremental processing, and recursive neural networks, which are able to deal with the complex hierarchical structures produced by the grammar. We demonstrate that the model reproduces a number of the structural preferences found in the experimental psycholinguistics literature, and also performs well on unrestricted text.
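To make the architecture described in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of a recursive neural network that composes a vector for each node of a binary parse tree bottom-up and produces a scalar score that could be used to rank competing structures for the same input. All names, dimensions, and the toy vocabulary are assumptions for illustration only.

```python
# Hypothetical sketch of a recursive neural network over parse trees:
# shared weights compose child vectors into a parent vector; a scoring
# vector maps the root representation to a scalar preference score.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # hypothetical hidden-state dimensionality

# Shared composition weights: concatenated children -> parent representation
W = rng.standard_normal((DIM, 2 * DIM)) * 0.1
b = np.zeros(DIM)
# Scoring weights: root representation -> scalar score
v = rng.standard_normal(DIM) * 0.1

# Toy word embeddings for the leaves (hypothetical)
vocab = {w: rng.standard_normal(DIM) * 0.1
         for w in ["the", "horse", "raced", "past", "barn"]}

def encode(tree):
    """Vector for a tree given as a word (leaf) or a (left, right) pair."""
    if isinstance(tree, str):            # leaf: look up word embedding
        return vocab[tree]
    left, right = tree
    children = np.concatenate([encode(left), encode(right)])
    return np.tanh(W @ children + b)     # compose children into parent state

def score(tree):
    """Scalar score used to rank candidate structures for the same words."""
    return float(v @ encode(tree))

# Two candidate binary structures over the same words; a trained network
# would assign the higher score to the structure resembling its experience.
cand_a = (("the", "horse"), ("raced", ("past", "barn")))
cand_b = ((("the", "horse"), "raced"), ("past", "barn"))
preferred = max([cand_a, cand_b], key=score)
```

In the paper's setting the trees come from dynamic-grammar derivations rather than hand-built tuples, and the network is trained on corpus structures, but the compose-then-score pattern above is the essential recursive-network idea.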
Original language: English
Pages (from-to): 133-169
Number of pages: 37
Issue number: 2
Publication status: Published - 2003

Keywords

  • Choice Behavior
  • Humans
  • Linguistics
  • Neural Networks (Computer)
  • Psycholinguistics
  • Verbal Learning


