Text Generation from Discourse Representation Structures

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We propose neural models to generate text from formal meaning representations based on Discourse Representation Structures (DRSs). DRSs are document-level representations which encode rich semantic detail pertaining to rhetorical relations, presupposition, and co-reference within and across sentences. We formalize the task of neural DRS-to-text generation and provide modeling solutions for the problems of condition ordering and variable naming which render generation from DRSs non-trivial. Our generator relies on a novel sibling treeLSTM model which is able to accurately represent DRS structures and is more generally suited to trees with wide branches. We achieve competitive performance (59.48 BLEU) on the GMB benchmark against several strong baselines.
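The abstract's central modeling idea is a "sibling treeLSTM" that handles trees with wide branches. As a rough illustration of that idea (not the paper's actual parameterization), the sketch below extends a child-sum TreeLSTM cell with the previous sibling's hidden state; all names, dimensions, and gate equations here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SiblingTreeLSTMCell:
    """Illustrative child-sum TreeLSTM cell augmented with a
    previous-sibling hidden state (a sketch of the 'sibling treeLSTM'
    idea, NOT the paper's exact formulation)."""

    def __init__(self, d_in, d_hid, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix per gate over [input; child-sum h; sibling h].
        d_cat = d_in + 2 * d_hid
        self.W = {g: rng.normal(0, 0.1, (d_hid, d_cat)) for g in "iofu"}
        self.b = {g: np.zeros(d_hid) for g in "iofu"}
        self.d_hid = d_hid

    def __call__(self, x, children, sibling_h=None):
        # children: list of (h, c) pairs from child nodes.
        h_sum = sum((h for h, _ in children), np.zeros(self.d_hid))
        s = sibling_h if sibling_h is not None else np.zeros(self.d_hid)
        z = np.concatenate([x, h_sum, s])
        i = sigmoid(self.W["i"] @ z + self.b["i"])   # input gate
        o = sigmoid(self.W["o"] @ z + self.b["o"])   # output gate
        u = np.tanh(self.W["u"] @ z + self.b["u"])   # candidate update
        c = i * u
        # One forget gate per child (weights shared here, for brevity).
        for h_child, c_child in children:
            f = sigmoid(self.W["f"] @ np.concatenate([x, h_child, s])
                        + self.b["f"])
            c = c + f * c_child
        h = o * np.tanh(c)
        return h, c
```

In a DRS encoder along these lines, each condition node would be visited in order, receiving both its children's states (vertical information) and its left sibling's state (horizontal information across a wide branch).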
Original language: English
Title of host publication: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Place of Publication: Online
Publisher: Association for Computational Linguistics
Pages: 397-415
Number of pages: 19
ISBN (Print): 978-1-954085-46-6
DOIs
Publication status: Published - 6 Jun 2021
Event: 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Duration: 6 Jun 2021 – 11 Jun 2021
https://2021.naacl.org/

Conference

Conference: 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Abbreviated title: NAACL 2021
Period: 6/06/21 – 11/06/21
Internet address: https://2021.naacl.org/
