Abstract
Developing better methods for segmenting continuous text into words is important for improving the processing of Asian languages, and may shed light on how humans learn to segment speech. We propose two new Bayesian word segmentation methods that assume unigram and bigram models of word dependencies respectively. The bigram model greatly outperforms the unigram model (and previous probabilistic models), demonstrating the importance of such dependencies for word segmentation. We also show that previous probabilistic models rely crucially on sub-optimal search procedures.
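The unigram model described in the paper treats the lexicon as a Dirichlet process over words and resamples one candidate word boundary at a time with Gibbs sampling. The sketch below is a minimal illustration of that general idea, not the authors' implementation: it ignores utterance boundaries and annealing, and the concentration parameter `ALPHA`, the geometric length prior `P_STOP`, the character-uniform base distribution, and the toy corpus are all assumptions made for illustration.

```python
import random
from collections import Counter

# Illustrative hyperparameters (assumed values, not the paper's settings).
ALPHA = 20.0   # Dirichlet process concentration parameter
P_STOP = 0.5   # geometric word-length stopping probability in the base distribution

def base_prob(word, n_chars):
    """Base distribution P0(w): uniform characters, geometric word length."""
    return (1.0 / n_chars) ** len(word) * (1 - P_STOP) ** (len(word) - 1) * P_STOP

def word_prob(word, counts, total, n_chars):
    """Chinese-restaurant-process predictive probability of generating `word`."""
    return (counts[word] + ALPHA * base_prob(word, n_chars)) / (total + ALPHA)

def gibbs_segment(text, n_iters=1000, seed=0):
    """Resample each potential word boundary in `text` under a unigram DP model."""
    rng = random.Random(seed)
    n_chars = len(set(text))
    # boundaries[i] is True iff there is a word boundary after text[i]
    boundaries = [rng.random() < 0.5 for _ in range(len(text) - 1)]

    def segmentation():
        words, start = [], 0
        for i, b in enumerate(boundaries):
            if b:
                words.append(text[start:i + 1])
                start = i + 1
        words.append(text[start:])
        return words

    counts = Counter(segmentation())
    total = sum(counts.values())

    for _ in range(n_iters):
        for i in range(len(boundaries)):
            # Find the span of the word(s) adjacent to boundary position i.
            left = i
            while left > 0 and not boundaries[left - 1]:
                left -= 1
            right = i + 1
            while right < len(text) - 1 and not boundaries[right]:
                right += 1
            w_full = text[left:right + 1]                     # hypothesis: no boundary
            w1, w2 = text[left:i + 1], text[i + 1:right + 1]  # hypothesis: boundary

            # Remove the word(s) currently spanning this position from the counts.
            if boundaries[i]:
                counts[w1] -= 1; counts[w2] -= 1; total -= 2
            else:
                counts[w_full] -= 1; total -= 1

            p_join = word_prob(w_full, counts, total, n_chars)
            p_split = word_prob(w1, counts, total, n_chars)
            same = 1 if w1 == w2 else 0  # w2 conditions on w1 already being added
            p_split *= (counts[w2] + same + ALPHA * base_prob(w2, n_chars)) / (total + 1 + ALPHA)

            # Sample the boundary from the two hypotheses' relative probabilities.
            boundaries[i] = rng.random() < p_split / (p_split + p_join)
            if boundaries[i]:
                counts[w1] += 1; counts[w2] += 1; total += 2
            else:
                counts[w_full] += 1; total += 1

    return segmentation()

if __name__ == "__main__":
    # Toy corpus: repeated substrings should be rediscovered as reusable words.
    print(gibbs_segment("thedogseesthedogthedogsees", n_iters=2000))
```

The bigram model the abstract favours extends this scheme so that each word's probability also conditions on the preceding word (a hierarchical Dirichlet process in the paper), which is what captures the contextual dependencies the unigram model misses.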
Original language | English |
---|---|
Title of host publication | Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics |
Place of publication | Sydney, Australia |
Publisher | Association for Computational Linguistics |
Pages | 673-680 |
Number of pages | 8 |
Publication status | Published - 1 Jul 2006 |