Abstract
Extractive summarization models require sentence-level labels, which are usually created heuristically (e.g., with rule-based methods), given that most summarization datasets only have document-summary pairs. Since these labels might be suboptimal, we propose a latent variable extractive model where sentences are viewed as latent variables and sentences with activated variables are used to infer gold summaries. During training, the loss comes directly from gold summaries. Experiments on the CNN/Dailymail dataset show that our model improves over a strong extractive baseline trained on heuristically approximated labels and also performs competitively with several recent models.
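To make the training idea in the abstract concrete, here is a minimal sketch in which per-sentence selections are modelled as latent Bernoulli variables, a sampled extract is rewarded by how well it covers the gold summary, and the sentence scorer is updated with a REINFORCE-style gradient. The names (`SentenceScorer`, `unigram_overlap`), the PyTorch framing, and the specific reward and update rule are illustrative assumptions, not the paper's exact architecture or objective.

```python
# Hypothetical sketch of a latent-variable extractive model: sentence selections
# are latent binary variables and the training signal comes from the gold
# summary rather than heuristic sentence labels. Not the authors' exact method.
import torch
import torch.nn as nn


class SentenceScorer(nn.Module):
    """Scores each (pre-encoded) sentence with a selection probability."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, 1)

    def forward(self, sent_vecs):
        # sent_vecs: (num_sentences, dim) -> selection probabilities (num_sentences,)
        return torch.sigmoid(self.proj(sent_vecs)).squeeze(-1)


def unigram_overlap(selected, gold):
    """Crude reward: fraction of gold-summary tokens covered by the selected sentences."""
    covered = set().union(*selected) if selected else set()
    return len(covered & gold) / max(len(gold), 1)


def train_step(scorer, optimizer, sent_vecs, sent_tokens, gold_tokens):
    probs = scorer(sent_vecs)                    # p(z_i = 1) for each sentence
    dist = torch.distributions.Bernoulli(probs)  # latent selection variables z
    z = dist.sample()                            # one sampled extract
    selected = [sent_tokens[i] for i in range(len(sent_tokens)) if z[i] > 0.5]
    reward = unigram_overlap(selected, gold_tokens)
    # REINFORCE-style surrogate loss: raise the log-probability of selections
    # that reconstruct the gold summary well.
    loss = -(reward * dist.log_prob(z).sum())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward


if __name__ == "__main__":
    torch.manual_seed(0)
    scorer = SentenceScorer(dim=64)
    optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)
    # Toy document: 5 sentences, each already encoded as a 64-dim vector.
    sent_vecs = torch.randn(5, 64)
    sent_tokens = [{"cats", "sit"}, {"dogs", "run"}, {"rain", "falls"},
                   {"cats", "purr"}, {"sun", "shines"}]
    gold_tokens = {"cats", "sit", "purr"}
    for _ in range(50):
        reward = train_step(scorer, optimizer, sent_vecs, sent_tokens, gold_tokens)
    print(f"reward after training: {reward:.2f}")
```

Because no heuristic sentence labels are used, the only supervision in this sketch is the gold summary itself, mirroring the abstract's claim that the loss comes directly from gold summaries.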
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing |
| Place of Publication | Brussels, Belgium |
| Publisher | Association for Computational Linguistics |
| Pages | 779-784 |
| Number of pages | 6 |
| Publication status | Published - Nov 2018 |
| Event | 2018 Conference on Empirical Methods in Natural Language Processing, Square Meeting Center, Brussels, Belgium; 31 Oct 2018 → 4 Nov 2018; http://emnlp2018.org/ |
Conference
| Conference | 2018 Conference on Empirical Methods in Natural Language Processing |
|---|---|
| Abbreviated title | EMNLP 2018 |
| Country/Territory | Belgium |
| City | Brussels |
| Period | 31/10/18 → 4/11/18 |
| Internet address | http://emnlp2018.org/ |
Projects
- TransModal: Translating from Multiple Modalities into Text
  Lapata, M. (Principal Investigator)
  1/09/16 → 31/08/22
  Project: Research (Finished)