N-gram Counts and Language Models from the Common Crawl

Christian Buck, Kenneth Heafield, Bas van Ooyen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

We contribute 5-gram counts and language models trained on the Common Crawl corpus, a collection of over 9 billion web pages. This release improves upon the Google n-gram counts in two key ways: the inclusion of low-count entries and deduplication to reduce boilerplate. By preserving singletons, we were able to use Kneser-Ney smoothing to build large language models. This paper describes how the corpus was processed, with emphasis on the problems that arise when working with data at this scale. Our unpruned Kneser-Ney English 5-gram language model, built on 975 billion deduplicated tokens, contains over 500 billion unique n-grams. We show gains of 0.5–1.4 BLEU by using these large language models to translate into various languages.
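To see why preserving singletons matters, consider the standard interpolated Kneser-Ney estimate, shown here in its bigram form as a textbook formulation (not necessarily the exact variant used for this release):

P_{\mathrm{KN}}(w_i \mid w_{i-1}) = \frac{\max\bigl(c(w_{i-1} w_i) - D,\, 0\bigr)}{c(w_{i-1})} + \lambda(w_{i-1})\, P_{\mathrm{cont}}(w_i)

\lambda(w_{i-1}) = \frac{D}{c(w_{i-1})}\, \bigl|\{\, w : c(w_{i-1} w) > 0 \,\}\bigr|, \qquad P_{\mathrm{cont}}(w_i) = \frac{\bigl|\{\, w' : c(w' w_i) > 0 \,\}\bigr|}{\bigl|\{\, (w', w) : c(w' w) > 0 \,\}\bigr|}

The discount D is typically estimated from counts of counts (e.g. D = n_1 / (n_1 + 2 n_2), where n_1 and n_2 are the numbers of n-grams seen once and twice), and the continuation distribution counts distinct contexts, many of which occur only once. Counts pruned of low-frequency entries, as in the Google n-gram release, therefore lack exactly the statistics this smoothing method depends on.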
Original language: English
Title of host publication: Proceedings of the Language Resources and Evaluation Conference 2014
Pages: 3579-3584
Number of pages: 6
Publication status: Published - 1 May 2014
