Abstract
In this paper, we focus on learning structure-aware document representations from data without recourse to a discourse parser or additional annotations. Drawing inspiration from recent efforts to empower neural networks with a structural bias (Cheng et al., 2016; Kim et al., 2017), we propose a model that can encode a document while automatically inducing rich structural dependencies. Specifically, we embed a differentiable non-projective parsing algorithm into a neural model and use attention mechanisms to incorporate the structural biases. Experimental evaluations across different tasks and datasets show that the proposed model achieves state-of-the-art results on document modeling tasks while inducing intermediate structures which are both interpretable and meaningful.
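The key technical move in the abstract, embedding a differentiable non-projective parser into the network, is commonly realised with Kirchhoff's Matrix-Tree Theorem, the same device used in the structured attention of Kim et al. (2017): the partition function over all non-projective dependency trees is a determinant, so marginal edge probabilities can be obtained by automatic differentiation and used as attention weights. The PyTorch sketch below only illustrates that idea; the function name, shapes, and unbatched single-document setup are assumptions, not the authors' implementation.

```python
# Hedged sketch: edge marginals over non-projective dependency trees via the
# Matrix-Tree Theorem (Koo et al., 2007), usable as differentiable attention.
# Names, shapes, and the unbatched setup are illustrative assumptions.
import torch

def tree_marginals(scores, root_scores):
    """scores[i, j]: score of head i -> dependent j (n x n, requires_grad).
    root_scores[j]: score of token j being the root (n, requires_grad).
    Returns (edge_marginals, root_marginals), differentiable w.r.t. the inputs."""
    n = scores.size(0)
    A = scores.exp() * (1.0 - torch.eye(n))        # edge potentials, self-loops zeroed out
    L = torch.diag(A.sum(dim=0)) - A               # graph Laplacian (column sums on the diagonal)
    L_hat = torch.cat([root_scores.exp().unsqueeze(0), L[1:]], dim=0)  # row 0 holds root potentials
    log_Z = torch.logdet(L_hat)                    # log partition function over all trees
    # Gradients of log Z w.r.t. the log-potentials are exactly the marginal edge probabilities.
    edge_marg, root_marg = torch.autograd.grad(
        log_Z, (scores, root_scores), create_graph=True)
    return edge_marg, root_marg

# Toy usage: structured attention over 4 "sentence" vectors.
n, d = 4, 8
h = torch.randn(n, d)                                # sentence representations
pair_scores = torch.randn(n, n, requires_grad=True)  # in a real model, a function of h
root_scores = torch.randn(n, requires_grad=True)
edge_p, root_p = tree_marginals(pair_scores, root_scores)
print(edge_p.sum(dim=0) + root_p)                    # sanity check: every token has exactly one head
context = edge_p.t() @ h                             # each sentence attends to its probable heads
```

Because the marginals are differentiable functions of the pairwise scores, gradients from a downstream objective (e.g. document classification) flow back through the induced structure, which is how interpretable intermediate trees can emerge without parse annotations.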
| Original language | English |
| --- | --- |
| Pages (from-to) | 63-76 |
| Number of pages | 14 |
| Journal | Transactions of the Association for Computational Linguistics |
| Volume | 6 |
| Publication status | Published - 1 Jan 2018 |
Projects
- TransModal: Translating from Multiple Modalities into Text
  Lapata, M. (Principal Investigator)
  1/09/16 → 31/08/22
  Project: Research (Finished)