Translating Negation: Induction, Search and Model Errors

Federico Fancellu, Bonnie Webber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Statistical Machine Translation systems perform considerably worse on negative sentences than on positive ones (Fancellu and Webber, 2014; Wetzel and Bond, 2012). Various techniques have addressed the problem of translating negation, but their underlying assumptions have never been validated by a proper error analysis. A related paper (Fancellu and Webber, 2015) reports a manual error analysis of the kinds of errors involved in translating negation. The present paper presents ongoing work to discover the causes of these errors by determining which, if any, are induction, search or model errors. We show that standard oracle decoding techniques provide little help, owing to the locality of negation scope and their reliance on a single reference. We are working to address these weaknesses using a chart analysis based on oracle hypotheses, guided by the negation elements contained in a source span and by how those elements are expected to be translated at each decoding step. Preliminary results show that the chart analysis yields a more in-depth account of the above errors and better explains the results of the manual analysis.
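For readers unfamiliar with the oracle decoding the abstract refers to, the standard approach picks, from a decoder's n-best list, the hypothesis closest to the (single) reference, typically by sentence-level BLEU. The sketch below is purely illustrative and not the authors' implementation; the function names (`sentence_bleu`, `oracle_hypothesis`) and the add-one smoothing are assumptions for the example:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """Multiset of n-grams of the given order."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(hyp, ref, max_n=4):
    """Smoothed sentence-level BLEU of a hypothesis against a SINGLE reference.

    Add-one smoothing keeps one empty n-gram order from zeroing the score;
    this is one common choice, not the only one.
    """
    hyp, ref = hyp.split(), ref.split()
    if not hyp:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((h & r).values())          # clipped n-gram matches
        total = max(sum(h.values()), 1)
        log_prec += math.log((overlap + 1) / (total + 1))
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))  # brevity penalty
    return bp * math.exp(log_prec / max_n)

def oracle_hypothesis(nbest, reference):
    """Oracle decoding over an n-best list: return the entry that
    maximises sentence-level BLEU against the single reference."""
    return max(nbest, key=lambda h: sentence_bleu(h, reference))

nbest = ["he went home", "he did not go home", "he not go home"]
print(oracle_hypothesis(nbest, "he did not go home"))  # → he did not go home
```

The reliance on a single reference is visible here: a hypothesis that realises negation correctly but with different wording than the one reference can be scored below a fluent hypothesis that drops the negation, which is one reason the paper argues such oracles provide little help for analysing negation errors.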
Original language: English
Title of host publication: Proceedings of the Ninth Workshop on Syntax, Semantics and Structure in Statistical Translation
Place of Publication: Denver, Colorado, USA
Publisher: Association for Computational Linguistics
Number of pages: 9
Publication status: Published - 1 Jun 2015

