Left language model state for syntactic machine translation

Kenneth Heafield, Hieu Hoang, Philipp Koehn, Tetsuo Kiso, Marcello Federico

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Many syntactic machine translation decoders, including Moses, cdec, and Joshua, implement bottom-up dynamic programming to integrate N-gram language model probabilities into hypothesis scoring. These decoders concatenate hypotheses according to grammar rules, yielding larger hypotheses and eventually complete translations. When hypotheses are concatenated, the language model score is adjusted to account for boundary-crossing n-grams. Words on the boundary of each hypothesis are encoded in state, consisting of left state (the first few words) and right state (the last few words). We speed concatenation by encoding left state using data structure pointers in lieu of vocabulary indices and by avoiding unnecessary queries. To increase the decoder's opportunities to recombine hypotheses, we minimize the number of words encoded by left state. This has the effect of reducing search errors made by the decoder. The resulting gain in model score is smaller than for right state minimization, which we explain by observing a relationship between state minimization and language model probability. With a fixed cube pruning pop limit, we show a 3-6% reduction in CPU time and improved model scores. Reducing the pop limit to the point where model scores tie the baseline yields a net 11% reduction in CPU time.
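
The following is a minimal sketch, not the authors' implementation, of the state-carrying concatenation the abstract describes. It assumes a toy trigram model where LogProb is a stub standing in for a real N-gram language model query (e.g. KenLM), and it assumes a hypothesis's score omits the probabilities of its left-state words until their full context is known, so that the boundary adjustment simply scores those words against the neighbouring hypothesis's right state. The names kOrder, Hypothesis, and Concatenate are illustrative only.

// Sketch: scoring boundary-crossing n-grams when two hypotheses are
// concatenated in a bottom-up decoder, using left/right LM state.
#include <cstdio>
#include <string>
#include <vector>

constexpr int kOrder = 3;  // trigram LM: state holds up to kOrder-1 words

// Placeholder LM query: log P(word | context). A real decoder would call
// into an N-gram language model such as KenLM here.
double LogProb(const std::vector<std::string>& context, const std::string& word) {
  (void)context;
  (void)word;
  return -1.0;  // constant stand-in log probability
}

struct Hypothesis {
  std::vector<std::string> left;   // left state: first up to kOrder-1 words
  std::vector<std::string> right;  // right state: last up to kOrder-1 words
  bool left_full;                  // true once left state holds kOrder-1 words
  double score;                    // accumulated LM log probability
};

// Concatenate a + b, scoring the n-grams that cross the boundary.
// Assumes b.score does not yet include probabilities for b's left-state words.
Hypothesis Concatenate(const Hypothesis& a, const Hypothesis& b) {
  Hypothesis out;
  out.score = a.score + b.score;

  // Score b's left-state words with a's right state as the missing context.
  std::vector<std::string> context = a.right;
  for (const std::string& word : b.left) {
    out.score += LogProb(context, word);
    context.push_back(word);
    if ((int)context.size() > kOrder - 1)
      context.erase(context.begin());
  }

  // New left state: a's left state, extended with b's words only if a was
  // too short to fill kOrder-1 words on its own.
  out.left = a.left;
  out.left_full = a.left_full;
  if (!out.left_full) {
    for (const std::string& word : b.left) {
      if ((int)out.left.size() >= kOrder - 1) { out.left_full = true; break; }
      out.left.push_back(word);
    }
  }

  // New right state: b's trailing words form the context for whatever is
  // concatenated next. (The short-b case is omitted for brevity.)
  out.right = b.right;
  return out;
}

int main() {
  Hypothesis a{{"the", "quick"}, {"quick", "fox"}, true, -3.0};
  Hypothesis b{{"jumped", "over"}, {"lazy", "dog"}, true, -4.0};
  Hypothesis c = Concatenate(a, b);
  std::printf("combined score: %f\n", c.score);
}

The paper's contributions, encoding left state as language model data structure pointers rather than vocabulary indices and minimizing the number of words that left state must retain, are not shown above; the sketch only illustrates the boundary rescoring that this state exists to support.
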
Original language: English
Title of host publication: 2011 International Workshop on Spoken Language Translation, IWSLT 2011, San Francisco, CA, USA, December 8-9, 2011
Pages: 183-190
Number of pages: 8
Publication status: Published - 2011
