Abstract / Description of output
Translation model size is growing at a pace that outstrips improvements in computing power, and this hinders research on many interesting models. We show how an algorithmic scaling technique can be used to easily handle very large models. Using this technique, we explore several large model variants and show an improvement of 1.4 BLEU on the NIST 2006 Chinese-English task. This opens the door for work on a variety of models that are much less constrained by computational limitations.
Title of host publication: Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)
Place of publication: Manchester, UK
Publisher: Coling 2008 Organizing Committee
Number of pages: 8
Publication status: Published - 1 Aug 2008