Learning Structured Text Representations

Yang Liu, Mirella Lapata

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we focus on learning structure-aware document representations from data without recourse to a discourse parser or additional annotations. Drawing inspiration from recent efforts to empower neural networks with a structural bias (Cheng et al., 2016; Kim et al., 2017), we propose a model that can encode a document while automatically inducing rich structural dependencies. Specifically, we embed a differentiable non-projective parsing algorithm into a neural model and use attention mechanisms to incorporate the structural biases. Experimental evaluations across different tasks and datasets show that the proposed model achieves state-of-the-art results on document modeling tasks while inducing intermediate structures which are both interpretable and meaningful.
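For illustration only: a standard way to make non-projective dependency parsing differentiable, as described in the abstract, is to compute marginal arc probabilities with the matrix-tree theorem (Koo et al., 2007). The sketch below is not the authors' code; it assumes this construction and uses illustrative names (tree_marginals, scores, root_scores).

```python
# Minimal NumPy sketch of matrix-tree marginals over non-projective
# dependency trees, the kind of differentiable parsing step the abstract
# refers to. Names and the toy example are assumptions, not the paper's code.
import numpy as np

def tree_marginals(scores, root_scores):
    """Marginal arc probabilities under a distribution over dependency trees.

    scores:      (n, n) matrix, scores[i, j] = score of arc head i -> dependent j
    root_scores: (n,) vector, root_scores[j]  = score of unit j being the root
    Returns (arc_marginals, root_marginals).
    """
    n = scores.shape[0]
    A = np.exp(scores)
    np.fill_diagonal(A, 0.0)            # no self-arcs
    r = np.exp(root_scores)

    # Laplacian: off-diagonal entries -A[i, j], diagonal = column sums of A.
    L = -A.copy()
    np.fill_diagonal(L, A.sum(axis=0))

    # Replace the first row with the root scores (Koo et al., 2007).
    L_hat = L.copy()
    L_hat[0, :] = r
    L_inv = np.linalg.inv(L_hat)

    # Marginal of each arc i -> j and of each unit being the root.
    not_first = np.ones(n)
    not_first[0] = 0.0
    arc = A * (not_first[None, :] * np.diag(L_inv)[None, :]
               - not_first[:, None] * L_inv.T)
    root = r * L_inv[:, 0]
    return arc, root

# Toy usage: random scores for a 4-unit "document".
rng = np.random.default_rng(0)
arc_p, root_p = tree_marginals(rng.normal(size=(4, 4)), rng.normal(size=4))
print(arc_p.sum() + root_p.sum())       # ~4.0: each unit has exactly one head
```

Because every step is a differentiable matrix operation, the resulting marginals can be used as attention weights and trained end-to-end with the rest of a neural document encoder.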
Original language: English
Pages (from-to): 63-76
Number of pages: 14
Journal: Transactions of the Association for Computational Linguistics
Volume: 6
Publication status: Published - 1 Jan 2018
