Abstract
Higher-order dependency features are known to improve dependency parser accuracy. We investigate the incorporation of such features into a cube decoding phrase-structure parser. We find considerable gains in accuracy across a range of standard metrics. What is especially interesting is that we find strong, statistically significant gains on dependency recovery on out-of-domain tests (Brown vs. WSJ). This suggests that higher-order dependency features are not simply over-fitting the training material.
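For readers unfamiliar with the term, "higher-order dependency features" refers to features defined over more than a single head-dependent arc, such as grandparent and sibling relations. The following Python sketch illustrates this idea on a head-index representation of a dependency tree; it is only an illustrative example, not the feature set or parser described in the paper.

```python
# Illustrative sketch only: first- and higher-order (grandparent, sibling)
# dependency features over a tree given as a list of head indices.
# Not the paper's actual feature templates.

def dependency_features(words, heads):
    """words[i] is the i-th token; heads[i] is the index of its head (-1 = root)."""
    feats = []
    for dep, head in enumerate(heads):
        if head < 0:
            continue
        # First-order: single head-dependent arc.
        feats.append(("h-d", words[head], words[dep]))
        # Second-order (grandparent): head-of-head, head, dependent.
        grand = heads[head]
        if grand >= 0:
            feats.append(("g-h-d", words[grand], words[head], words[dep]))
        # Second-order (sibling): two dependents sharing the same head.
        for sib, sib_head in enumerate(heads):
            if sib_head == head and sib > dep:
                feats.append(("h-d-s", words[head], words[dep], words[sib]))
    return feats

# Example: "She ate fish quickly", where "ate" heads the other three tokens.
print(dependency_features(["She", "ate", "fish", "quickly"], [1, -1, 1, 1]))
```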
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) |
| Place of Publication | Sofia, Bulgaria |
| Publisher | Association for Computational Linguistics |
| Pages | 610-616 |
| Number of pages | 7 |
| Publication status | Published - 1 Aug 2013 |