Extractive summarization models require sentence-level labels, which are usually created heuristically (e.g., with rule-based methods) given that most summarization datasets only have document-summary pairs. Since these labels might be suboptimal, we propose a latent variable extractive model where sentences are viewed as latent variables and sentences with activated variables are used to infer gold summaries. During training, the loss comes directly from gold summaries. Experiments on the CNN/Dailymail dataset show that our model improves over a strong extractive baseline trained on heuristically approximated labels and also performs competitively with several recent models.
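The abstract's idea of treating per-sentence inclusion decisions as latent variables, with the training signal coming from the gold summary rather than heuristic labels, can be sketched in miniature. The code below is a toy illustration, not the paper's actual model: it uses a REINFORCE-style update over binary latent variables, a unigram-overlap reward as a crude stand-in for the likelihood of the gold summary, and plain per-sentence scores instead of a neural sentence encoder. All function names (`overlap_reward`, `train_step`) are invented for this sketch.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def overlap_reward(selected_sentences, gold_summary):
    """Unigram recall of the gold summary given the extracted sentences.
    A crude proxy for 'how well do the activated sentences infer the
    gold summary' -- the real model scores the summary with a decoder."""
    selected = set(" ".join(selected_sentences).split())
    gold = set(gold_summary.split())
    if not gold:
        return 0.0
    return len(selected & gold) / len(gold)

def train_step(scores, sentences, gold_summary, lr=0.1, samples=20, seed=0):
    """One REINFORCE-style update: sample binary latent variables z_i
    (one per sentence), reward each sample by how well the extracted
    sentences cover the gold summary, and nudge the sentence scores.
    The gradient of log P(z_i) w.r.t. score_i is (z_i - sigmoid(score_i))."""
    rng = random.Random(seed)
    grads = [0.0] * len(sentences)
    for _ in range(samples):
        z = [rng.random() < sigmoid(s) for s in scores]
        chosen = [sent for sent, zi in zip(sentences, z) if zi]
        r = overlap_reward(chosen, gold_summary)
        for i, zi in enumerate(z):
            grads[i] += r * ((1.0 if zi else 0.0) - sigmoid(scores[i]))
    # Average the sampled gradients and take one ascent step.
    return [s + lr * g / samples for s, g in zip(scores, grads)]
```

With a two-sentence document where only the first sentence overlaps the gold summary, repeated `train_step` calls push the first sentence's score up while leaving the irrelevant sentence's score near zero, mirroring how the latent-variable objective rewards sentences that help reconstruct the summary without ever seeing heuristic labels.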
|Title of host publication||Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing|
|Place of Publication||Brussels, Belgium|
|Publisher||Association for Computational Linguistics|
|Number of pages||6|
|Publication status||Published - Nov 2018|
|Event||2018 Conference on Empirical Methods in Natural Language Processing - Square Meeting Center, Brussels, Belgium|
Duration: 31 Oct 2018 → 4 Nov 2018
|Conference||2018 Conference on Empirical Methods in Natural Language Processing|
|Abbreviated title||EMNLP 2018|
|Period||31/10/18 → 4/11/18|
Fingerprint: Dive into the research topics of 'Neural Latent Extractive Document Summarization'. Together they form a unique fingerprint.