Abstract
In this paper we present a fully unsupervised syntactic class induction system formulated as a Bayesian multinomial mixture model, where each word type is constrained to belong to a single class. By using a mixture model rather than a sequence model (e.g., an HMM), we are able to easily add multiple kinds of features, including those at both the type level (morphology features) and the token level (context and alignment features, the latter from parallel corpora). Using only context features, our system yields results comparable to the state of the art, and far better than a similar model without the one-class-per-type constraint. The additional features provide further gains, and our final system outperforms the best published results on most of the 25 corpora tested.
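The sketch below illustrates the core idea of a multinomial mixture over word types with a one-class-per-type constraint: every type is scored as a whole against each class, using the bag of context words observed around its tokens, so all tokens of a type receive the same class. This is only a toy EM illustration under assumed simplifications; the paper's actual system is Bayesian and adds morphology and alignment features, and the function names, smoothing constant, and toy corpus here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's exact model): a multinomial mixture over word
# *types*, where each type gets a single class based on its aggregated context
# counts. Plain EM with additive smoothing stands in for the paper's Bayesian
# inference purely to show the one-class-per-type constraint.
import numpy as np
from collections import Counter, defaultdict

def context_counts(sentences, window=1):
    """For every word type, count the words appearing within `window` positions."""
    counts = defaultdict(Counter)
    for sent in sentences:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    counts[w][sent[j]] += 1
    return counts

def induce_classes(sentences, num_classes=3, iters=50, alpha=0.1, seed=0):
    counts = context_counts(sentences)
    types = sorted(counts)
    vocab = sorted({f for c in counts.values() for f in c})
    f_index = {f: i for i, f in enumerate(vocab)}
    X = np.zeros((len(types), len(vocab)))      # type-by-context-feature counts
    for t, w in enumerate(types):
        for f, n in counts[w].items():
            X[t, f_index[f]] = n

    rng = np.random.default_rng(seed)
    resp = rng.dirichlet(np.ones(num_classes), size=len(types))  # soft type assignments
    for _ in range(iters):
        # M-step: class prior and per-class multinomials over context features.
        pi = resp.sum(axis=0) + alpha
        pi /= pi.sum()
        theta = resp.T @ X + alpha
        theta /= theta.sum(axis=1, keepdims=True)
        # E-step: score each *type* as a whole, enforcing one class per type.
        log_post = np.log(pi) + X @ np.log(theta).T
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
    return {w: int(resp[t].argmax()) for t, w in enumerate(types)}

if __name__ == "__main__":
    toy = [s.split() for s in [
        "the dog runs", "a dog sleeps", "the cat runs",
        "a cat sleeps", "the bird sings", "a bird sleeps",
    ]]
    print(induce_classes(toy))  # e.g. determiners, nouns, and verbs fall into separate classes
```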
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing |
| Place of publication | Edinburgh, Scotland, UK |
| Publisher | Association for Computational Linguistics |
| Pages | 638-647 |
| Number of pages | 10 |
| Publication status | Published - 1 Jul 2011 |