Normalized Log-Linear Interpolation of Backoff Language Models is Efficient

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We prove that log-linearly interpolated backoff language models can be efficiently and exactly collapsed into a single normalized backoff model, contradicting Hsu (2007). While prior work reported that log-linear interpolation yields lower perplexity than linear interpolation, normalizing at query time was impractical. We normalize the model offline in advance, which is efficient due to a recurrence relationship between the normalizing factors. To tune interpolation weights, we apply Newton's method to this convex problem and show that the derivatives can be computed efficiently in a batch process. These findings are combined in a new open-source interpolation tool, which is distributed with KenLM. With 21 out-of-domain corpora, log-linear interpolation yields 72.58 perplexity on TED talks, compared to 75.91 for linear interpolation.
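For context, the standard normalized log-linear combination of N component models p_1, ..., p_N with weights λ_i takes the form below; the notation (history h, vocabulary V, normalizer Z) is illustrative and not necessarily the paper's own:

\[ p(w \mid h) = \frac{1}{Z(h)} \prod_{i=1}^{N} p_i(w \mid h)^{\lambda_i}, \qquad Z(h) = \sum_{w' \in V} \prod_{i=1}^{N} p_i(w' \mid h)^{\lambda_i} \]

Computing Z(h) at query time requires a sum over the vocabulary for every history; the paper instead precomputes these normalizing factors offline, exploiting the recurrence between them noted in the abstract, so the collapsed model is itself a normalized backoff model.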
Original language: English
Title of host publication: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics
Place of Publication: Berlin, Germany
Publisher: Association for Computational Linguistics (ACL)
Pages: 876-886
Number of pages: 11
ISBN (Print): 978-1-945626-00-5
Publication status: Published - 12 Aug 2016
Event: 54th Annual Meeting of the Association for Computational Linguistics - Berlin, Germany
Duration: 7 Aug 2016 - 12 Aug 2016
https://mirror.aclweb.org/acl2016/

Conference

Conference: 54th Annual Meeting of the Association for Computational Linguistics
Abbreviated title: ACL 2016
Country/Territory: Germany
City: Berlin
Period: 7/08/16 - 12/08/16
