A Coactive Learning View of Online Structured Prediction in Statistical Machine Translation

Artem Sokolov, Stefan Riezler, Shay B. Cohen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present a theoretical analysis of online parameter tuning in statistical machine translation (SMT) from a coactive learning view. This perspective allows us to give regret and generalization bounds for latent perceptron algorithms that are common in SMT, but fall outside of the standard convex optimization scenario. Coactive learning also introduces the concept of weak feedback, which we apply in a proof-of-concept experiment to SMT, showing that learning from feedback that consists of slight improvements over predictions leads to convergence in regret and translation error rate. This suggests that coactive learning might be a viable framework for interactive machine translation. Furthermore, we find that surrogate translations replacing references that are unreachable in the decoder search space can be interpreted as weak feedback and lead to convergence in learning, if they admit an underlying linear model.
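The abstract describes perceptron-style updates driven by weak feedback: the user (or a surrogate reference) supplies a translation that is only slightly better than the model's prediction, and the model moves its weights toward it. The following is a minimal toy sketch of such a coactive perceptron update over a linear model; the candidate translations, feature vectors, and function names are invented for illustration and are not the paper's actual SMT system.

```python
import numpy as np

def predict(w, candidates, feats):
    """Return the candidate maximizing the linear score w . feats[y]."""
    return max(candidates, key=lambda y: w @ feats[y])

def coactive_update(w, feats, y_pred, y_feedback, eta=1.0):
    """Perceptron update toward the (weakly) improved feedback translation."""
    return w + eta * (feats[y_feedback] - feats[y_pred])

# Toy candidate translations with hand-made feature vectors (assumptions).
feats = {
    "good":   np.array([1.0, 0.0]),
    "better": np.array([0.8, 0.6]),
    "best":   np.array([0.2, 1.0]),
}
candidates = list(feats)

w = np.array([1.0, 0.0])              # initial model prefers "good"
y_hat = predict(w, candidates, feats)
# Weak feedback: a slight improvement over the prediction, not the optimum.
w = coactive_update(w, feats, y_hat, "better")
```

Under the coactive learning framework, repeated updates of this form yield regret bounds even though the feedback is never the best candidate, which is the property the paper exploits for SMT.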
Original language: English
Title of host publication: Proceedings of the Nineteenth Conference on Computational Natural Language Learning
Publisher: Association for Computational Linguistics
Number of pages: 11
Publication status: Published - 2015

