Trees neural those: RNNs can learn the hierarchical structure of noun phrases

Yevgen Matusevych, Jennifer Culbertson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Humans use both linear and hierarchical representations in language processing, and the exact role of each has been debated. One domain where hierarchical processing is important is noun phrases. English noun phrases have a fixed order of prenominal modifiers: demonstratives - numerals - adjectives (these two green vases). However, when English speakers learn an artificial language with postnominal modifiers, instead of reproducing this linear order, they preserve the distance between each modifier and the noun (vases green two these). This has been explained by a hierarchical homomorphism bias. Here, we investigate whether RNNs exhibit this bias. We pre-train one linear and two hierarchical models on English and expose them to a small artificial language. We then test them on noun phrases from a study with humans and find that only the hierarchical models can exhibit the bias, supporting the idea that homomorphic word order preferences arise from hierarchical, not linear, relations.
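As a concrete illustration of the homomorphic order described in the abstract (a sketch for this record, not the authors' code): mirroring the English prenominal order around the noun keeps each modifier at the same distance from the noun, yielding the postnominal order human learners prefer.

```python
# Illustration (not from the paper): the homomorphic postnominal order
# preserves each modifier's distance to the noun from the prenominal phrase.
prenominal = ["these", "two", "green", "vases"]  # Dem Num Adj N

noun = prenominal[-1]
# Distance of each modifier from the noun in the prenominal phrase
distances = {w: len(prenominal) - 1 - i for i, w in enumerate(prenominal[:-1])}

# Postnominal order: noun first, then modifiers by increasing distance to the noun
postnominal = [noun] + sorted(distances, key=distances.get)

print(" ".join(postnominal))  # vases green two these
```

This mirror ordering is what a hierarchy-sensitive learner produces, whereas a purely linear learner would be expected to reproduce the original sequence of modifiers.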
Original language: English
Title of host publication: Proceedings of the 44th Annual Conference of the Cognitive Science Society
Editors: Jennifer Culbertson, Andrew Perfors, Hugh Rabagliati, Veronica Ramenzoni
Publisher: eScholarship University of California
Publication status: E-pub ahead of print, 17 Jun 2022
Event: 44th Annual Meeting of the Cognitive Science Society, Toronto, Canada
Duration: 27 Jul 2022 – 30 Jul 2022
Conference number: 44

Publication series

Name: Proceedings of the Annual Conference of the Cognitive Science Society
Publisher: Cognitive Science Society
ISSN (Electronic): 1069-7977


Conference: 44th Annual Meeting of the Cognitive Science Society
Abbreviated title: CogSci 2022


Keywords

  • hierarchical processing
  • noun phrase
  • artificial language learning
  • neural networks
  • homomorphism
