Can Markov Models Over Minimal Translation Units Help Phrase-Based SMT?

Nadir Durrani, Alexander M. Fraser, Helmut Schmid, Hieu Hoang, Philipp Koehn

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The phrase-based and N-gram-based SMT frameworks complement each other.
While the former is better able to memorize, the latter provides a more principled model that captures dependencies across phrasal boundaries. Some work has been done to combine insights from these two frameworks. A recent successful attempt showed the advantage of using phrase-based search on top of an N-gram-based model. We probe this question in the reverse direction by investigating whether integrating N-gram-based translation and reordering models into a phrase-based decoder helps overcome the problematic phrasal independence assumption. A large-scale evaluation over 8 language pairs shows that performance does significantly improve.
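To illustrate the kind of model the abstract describes, the following is a minimal sketch (not the authors' implementation) of a bigram Markov model over minimal translation units (MTUs). Each MTU is a small source–target pair, and the model scores a hypothesis as a chain of conditional probabilities over the MTU sequence, so the score is not broken at phrase boundaries. The class name, smoothing scheme, and vocabulary size are illustrative assumptions.

```python
import math
from collections import defaultdict


class MTUBigramModel:
    """Sketch of a bigram Markov model over minimal translation units
    (MTUs). Hypothetical class, for illustration only."""

    def __init__(self):
        self.bigram = defaultdict(int)   # counts of (prev_mtu, mtu) pairs
        self.unigram = defaultdict(int)  # counts of the conditioning context

    def train(self, sequences):
        # sequences: lists of MTUs, each MTU a (source, target) tuple,
        # e.g. ("haus", "house")
        for seq in sequences:
            prev = "<s>"
            for mtu in seq:
                self.bigram[(prev, mtu)] += 1
                self.unigram[prev] += 1
                prev = mtu

    def log_prob(self, seq, alpha=1.0, vocab=1000):
        # Chain of bigram probabilities with add-alpha smoothing over an
        # assumed vocabulary size; crosses any phrase segmentation.
        lp = 0.0
        prev = "<s>"
        for mtu in seq:
            num = self.bigram[(prev, mtu)] + alpha
            den = self.unigram[prev] + alpha * vocab
            lp += math.log(num / den)
            prev = mtu
        return lp


model = MTUBigramModel()
model.train([
    [("das", "the"), ("haus", "house")],
    [("das", "the"), ("auto", "car")],
])
seen = model.log_prob([("das", "the"), ("haus", "house")])
unseen = model.log_prob([("haus", "house"), ("das", "the")])
```

In a phrase-based decoder, such a score would be added as an extra feature over the MTU sequence of each hypothesis, alongside the usual phrase translation and language model features.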
Original language: English
Title of host publication: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, 4-9 August 2013, Sofia, Bulgaria, Volume 2: Short Papers
Pages: 399-405
Number of pages: 7
Publication status: Published - 2013

