We propose neural models to generate text from formal meaning representations based on Discourse Representation Structures (DRSs). DRSs are document-level representations which encode rich semantic detail pertaining to rhetorical relations, presupposition, and co-reference within and across sentences. We formalize the task of neural DRS-to-text generation and provide modeling solutions for the problems of condition ordering and variable naming which render generation from DRSs non-trivial. Our generator relies on a novel sibling treeLSTM model which is able to accurately represent DRS structures and is more generally suited to trees with wide branches. We achieve competitive performance (59.48 BLEU) on the GMB benchmark against several strong baselines.
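The abstract's sibling treeLSTM is motivated by DRS trees with wide branches: instead of summing over children order-insensitively (as a child-sum TreeLSTM does), children can be consumed sequentially along the sibling chain. The sketch below is an illustrative assumption-laden reading of that idea, not the paper's actual model; all names (`SiblingTreeLSTM`, `encode`, `_step`) and the single shared LSTM gating are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SiblingTreeLSTM:
    """Illustrative sketch: each node's children are folded in left to
    right by an LSTM step (the 'sibling' chain), so wide-branching trees
    are summarized sequentially rather than by an order-insensitive sum.
    The exact formulation in the paper may differ."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix mapping [input; hidden] to 4 gates.
        self.W = rng.normal(0.0, 0.1, (4 * dim, 2 * dim))
        self.b = np.zeros(4 * dim)
        self.dim = dim

    def _step(self, x, h, c):
        # Standard LSTM gating over the concatenated input and state.
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

    def encode(self, node):
        """node = (feature_vector, [child_nodes]); returns (h, c)."""
        h = np.zeros(self.dim)
        c = np.zeros(self.dim)
        # Sibling chain: feed each child's encoding in order.
        for child in node[1]:
            child_h, _ = self.encode(child)
            h, c = self._step(child_h, h, c)
        # Finally fold in the node's own features.
        return self._step(node[0], h, c)
```

Because the sibling chain is an ordered recurrence, permuting children changes the encoding, which is what lets the model respect condition ordering in a DRS.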
| Title of host publication | Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies |
| Place of Publication | Online |
| Publisher | Association for Computational Linguistics |
| Number of pages | 19 |
| Publication status | Published - 6 Jun 2021 |
| Event | 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics - Online (Duration: 6 Jun 2021 → 11 Jun 2021) |
| Conference | 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics |
| Abbreviated title | NAACL 2021 |
| Period | 6/06/21 → 11/06/21 |