Unbounded Dependency Recovery for Parser Evaluation

Laura Rimell, Stephen Clark, Mark Steedman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

This paper introduces a new parser evaluation corpus containing around 700 sentences annotated with unbounded dependencies, drawn from seven different grammatical constructions. We run a series of off-the-shelf parsers on the corpus to evaluate how well state-of-the-art parsing technology is able to recover such dependencies. Overall accuracy ranges from 25% to 59%. These low scores call into question the validity of using Parseval scores as a general measure of parsing capability. We discuss the importance of parsers being able to recover unbounded dependencies, despite their relatively low frequency in corpora. We also analyse the various errors made on these constructions by one of the more successful parsers.
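As an illustrative sketch only (not the paper's actual evaluation script), the scoring the abstract describes amounts to checking, for each annotated sentence, whether the parser's output contains the gold unbounded dependency. The data format of (head, dependent, label) triples and the function names below are assumptions made for illustration.

```python
# Illustrative sketch: accuracy of unbounded dependency recovery.
# The (head, dependent, label) triple format and per-construction
# grouping are assumptions, not the paper's evaluation code.

def recovery_accuracy(gold_deps, parser_deps):
    """Fraction of gold unbounded dependencies found in parser output.

    gold_deps, parser_deps: dict mapping sentence id -> set of
    (head, dependent, label) triples.
    """
    total = correct = 0
    for sent_id, gold in gold_deps.items():
        predicted = parser_deps.get(sent_id, set())
        total += len(gold)
        correct += len(gold & predicted)
    return correct / total if total else 0.0


def per_construction_accuracy(gold_by_construction, parser_deps):
    """Accuracy broken down by grammatical construction."""
    return {
        construction: recovery_accuracy(gold, parser_deps)
        for construction, gold in gold_by_construction.items()
    }
```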
Original language: English
Title of host publication: Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP 2009), 6-7 August 2009, Singapore. A meeting of SIGDAT, a Special Interest Group of the ACL
Publisher: Association for Computational Linguistics
Pages: 813-821
Number of pages: 9
Publication status: Published - 2009
