Learning to Prune: Context-Sensitive Pruning for Syntactic MT

Wenduan Xu, Yue Zhang, Philip Williams, Philipp Koehn

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

We present a context-sensitive chart pruning method for CKY-style MT decoding. Source phrases that are unlikely to have aligned target constituents are identified using sequence labellers learned from the parallel corpus, and speed-up is obtained by pruning corresponding chart cells. The proposed method is easy to implement, orthogonal to cube pruning and additive to its pruning power. On a full-scale English-to-German experiment with a string-to-tree model, we obtain a speed-up of more than 60% over a strong baseline, with no loss in BLEU.
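To illustrate the core idea, the following is a minimal sketch of chart-cell pruning in a CKY-style chart filler. The binary keep/prune decisions over source spans stand in for the paper's learned sequence labellers, and the `combine` function and lexical hypotheses are hypothetical placeholders, not the paper's actual decoder.

```python
# Sketch: CKY chart filling with context-sensitive cell pruning.
# spans_to_prune simulates spans the sequence labeller marked as unlikely
# to have an aligned target constituent; those cells are never filled,
# so no hypotheses are built in or on top of them.

def cky_decode(n, spans_to_prune, combine):
    """Fill a CKY chart over a source sentence of length n,
    skipping any cell whose span was marked prunable."""
    chart = {}  # (i, j) -> list of hypotheses covering source span [i, j)
    for i in range(n):
        if (i, i + 1) not in spans_to_prune:
            chart[(i, i + 1)] = ["lex%d" % i]  # toy lexical hypotheses
    for width in range(2, n + 1):
        for i in range(0, n - width + 1):
            j = i + width
            if (i, j) in spans_to_prune:
                continue  # pruned cell: skip all work for this span
            hyps = []
            for k in range(i + 1, j):  # all binary splits of [i, j)
                for left in chart.get((i, k), []):
                    for right in chart.get((k, j), []):
                        hyps.append(combine(left, right))
            if hyps:
                chart[(i, j)] = hyps
    return chart
```

Because pruning a cell removes every derivation that would pass through it, pruning span (1, 3) in a three-word sentence, for example, leaves only the left-branching analysis of the full span; this is how skipping cells reduces decoding work while leaving the rest of the search intact.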
Original language: English
Title of host publication: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, 4-9 August 2013, Sofia, Bulgaria, Volume 2: Short Papers
Pages: 352-357
Number of pages: 6
Publication status: Published - Aug 2013

