Abstract
In this paper we present a fully unsupervised syntactic class induction system formulated as a Bayesian multinomial mixture model, where each word type is constrained to belong to a single class. By using a mixture model rather than a sequence model (e.g., HMM), we are able to easily add multiple kinds of features, including those at both the type level (morphology features) and token level (context and alignment features, the latter from parallel corpora). Using only context features, our system yields results comparable to the state of the art, far better than a similar model without the one-class-per-type constraint. Using the additional features provides added benefit, and our final system outperforms the best published results on most of the 25 corpora tested.
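The abstract describes the model only at a high level. As a concrete illustration of the one-class-per-type idea, the following is a minimal, hypothetical sketch of a collapsed Gibbs sampler for a type-level Bayesian multinomial mixture over token context features. It is not the authors' implementation; the corpus format, the choice of left/right neighbouring words as the only features, the number of classes `K`, and the hyperparameters `ALPHA` and `BETA` are all assumptions made for the sketch.

```python
# A minimal, hypothetical sketch of the core idea in the abstract: a Bayesian
# multinomial mixture over word *types* (one class per type), where class-
# specific multinomials with Dirichlet priors generate token-level context
# features. Collapsed Gibbs sampling resamples the class of one type at a time.
import random
from collections import defaultdict

K = 10       # number of syntactic classes (assumed)
ALPHA = 1.0  # Dirichlet prior on the class distribution (assumed)
BETA = 0.1   # Dirichlet prior on per-class feature multinomials (assumed)


def context_features(sentences):
    """Collect token-level context features (left/right neighbour) per word type."""
    feats = defaultdict(list)
    for sent in sentences:
        padded = ["<s>"] + sent + ["</s>"]
        for i, w in enumerate(sent, start=1):
            feats[w].append(("L", padded[i - 1]))
            feats[w].append(("R", padded[i + 1]))
    return feats


def gibbs_sample(feats, iters=50, seed=0):
    rng = random.Random(seed)
    types = list(feats)
    V = len({f for fs in feats.values() for f in fs})  # size of the feature vocabulary

    z = {w: rng.randrange(K) for w in types}            # one class per word type
    class_count = [0] * K                               # number of types in each class
    feat_count = [defaultdict(int) for _ in range(K)]   # per-class feature counts
    feat_total = [0] * K
    for w in types:
        class_count[z[w]] += 1
        for f in feats[w]:
            feat_count[z[w]][f] += 1
            feat_total[z[w]] += 1

    for _ in range(iters):
        for w in types:
            # Remove the type's current assignment and all of its feature counts.
            k_old = z[w]
            class_count[k_old] -= 1
            for f in feats[w]:
                feat_count[k_old][f] -= 1
                feat_total[k_old] -= 1

            # Posterior weight for each class: collapsed Dirichlet-multinomial
            # prior over classes times the predictive probability of all of the
            # type's token-level features under that class. (Log space would be
            # needed at realistic scale; plain products suffice for toy data.)
            weights = []
            for k in range(K):
                p = class_count[k] + ALPHA
                denom = feat_total[k] + BETA * V
                seen = defaultdict(int)
                for j, f in enumerate(feats[w]):
                    p *= (feat_count[k][f] + seen[f] + BETA) / (denom + j)
                    seen[f] += 1
                weights.append(p)

            # Sample a new class for the whole type and restore its counts.
            k_new = rng.choices(range(K), weights=weights)[0]
            z[w] = k_new
            class_count[k_new] += 1
            for f in feats[w]:
                feat_count[k_new][f] += 1
                feat_total[k_new] += 1
    return z


if __name__ == "__main__":
    toy = [["the", "dog", "barks"], ["the", "cat", "sleeps"], ["a", "dog", "sleeps"]]
    print(gibbs_sample(context_features(toy)))
```

In a system closer to the paper's, the morphology and alignment features mentioned in the abstract would contribute additional per-class likelihood factors, and the products would be computed in log space.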
Original language | English |
---|---|
Title of host publication | Proceedings of the Conference on Empirical Methods in Natural Language Processing |
Place of Publication | Stroudsburg, PA, USA |
Publisher | Association for Computational Linguistics |
Pages | 638-647 |
Number of pages | 10 |
ISBN (Print) | 978-1-937284-11-4 |
Publication status | Published - 2011 |
Publication series
Name | EMNLP '11 |
---|---|
Publisher | Association for Computational Linguistics |