A Parallel Training Algorithm for Hierarchical Pitman-Yor Process Language Models

Songfang Huang, Steve Renals

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


The Hierarchical Pitman-Yor Process Language Model (HPYLM) is a Bayesian language model based on a non-parametric prior, the Pitman-Yor process. It has been demonstrated, both theoretically and practically, that the HPYLM can provide better smoothing for language modeling than state-of-the-art approaches such as interpolated Kneser-Ney and modified Kneser-Ney smoothing. However, estimating Bayesian language models is expensive in both computation time and memory: inference is approximate and requires a number of iterations to converge. In this paper, we present a parallel training algorithm for the HPYLM, which enables the approach to be applied in the context of automatic speech recognition, using large training corpora with large vocabularies. We demonstrate the effectiveness of the proposed algorithm by estimating language models from meeting-transcription corpora containing over 200 million words, and observe significant reductions in perplexity and word error rate.
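The smoothing the abstract refers to can be illustrated with the Pitman-Yor predictive probability that underlies the HPYLM: each n-gram context acts as a "restaurant" whose predictive distribution discounts observed counts and interpolates with the shorter-context (base) distribution. The sketch below is illustrative, not the paper's implementation; the function name, parameters, and data layout are assumptions, and the discount/strength values are arbitrary.

```python
def pitman_yor_prob(word, customer_counts, table_counts,
                    discount, strength, base_prob):
    """Predictive probability of `word` under a single Pitman-Yor
    "restaurant" (one n-gram context), interpolating with a base
    distribution -- in the HPYLM, the model for the shorter context.
    This is a hedged sketch; names and signatures are illustrative.
    """
    c_w = customer_counts.get(word, 0)   # customers (counts) for `word`
    t_w = table_counts.get(word, 0)      # tables serving `word`
    c = sum(customer_counts.values())    # total customers in this context
    t = sum(table_counts.values())       # total tables in this context

    # Discounted count mass for `word`, floored at zero, plus the mass
    # routed to the base distribution; normalized by (strength + c).
    discounted = max(c_w - discount * t_w, 0.0)
    back_off_mass = (strength + discount * t) * base_prob(word)
    return (discounted + back_off_mass) / (strength + c)


# Illustrative two-word vocabulary {'a', 'b'} with a uniform base
# distribution; counts, discount, and strength are made up.
counts = {'a': 2}
tables = {'a': 1}
uniform_base = lambda w: 0.5
p_a = pitman_yor_prob('a', counts, tables, 0.5, 1.0, uniform_base)
p_b = pitman_yor_prob('b', counts, tables, 0.5, 1.0, uniform_base)
```

In the hierarchical model this recursion bottoms out at a uniform distribution over the vocabulary; with discount set to zero it reduces to a Dirichlet-process style interpolation, and the nonzero discount is what yields the Kneser-Ney-like behavior the abstract compares against.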
Original language: English
Title of host publication: INTERSPEECH-2009
Number of pages: 3
Publication status: Published - 2009
