Edinburgh Research Explorer

Discourse Representation Structure Parsing with Recurrent Neural Networks and the Transformer Model

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Final published version, 175 KB, PDF document
Licence: Creative Commons: Attribution (CC-BY)

Original language: English
Title of host publication: Proceedings of the IWCS Shared Task on Semantic Parsing
Place of publication: Gothenburg, Sweden
Publisher: Association for Computational Linguistics
Number of pages: 6
Publication status: Published - 23 May 2019


Abstract

We describe the systems we developed for Discourse Representation Structure (DRS) parsing as part of the IWCS-2019 Shared Task on DRS Parsing. Our systems are based on sequence-to-sequence modeling. To implement our models, we use OpenNMT-py, an open-source neural machine translation system implemented in PyTorch. We experimented with a variety of encoder-decoder models based on recurrent neural networks and the Transformer model. We conduct experiments on the standard benchmark of the Parallel Meaning Bank (PMB 2.2). Our best system achieves a score of 84.8% F1 in the DRS parsing shared task.
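A sequence-to-sequence framing of DRS parsing requires the structured meaning representation to be flattened into a target token sequence that an NMT system such as OpenNMT-py can generate, and then reassembled afterwards. The sketch below illustrates that round trip on a clausal-form DRS in the style used by the Parallel Meaning Bank; the specific clause contents and the `|||` separator token are hypothetical illustrations, not the shared-task format.

```python
# Toy illustration of linearizing a clausal-form DRS into a flat token
# sequence for seq2seq training, and recovering the clauses afterwards.
# The clause tuples and the "|||" separator are illustrative assumptions.

SEP = "|||"

def linearize(clauses):
    """Flatten a list of DRS clauses (tuples of tokens) into one
    target token sequence, joining clauses with a separator token."""
    tokens = []
    for i, clause in enumerate(clauses):
        if i > 0:
            tokens.append(SEP)
        tokens.extend(clause)
    return tokens

def delinearize(tokens):
    """Invert linearize: split the generated token stream back into
    a list of clause tuples."""
    clauses, current = [], []
    for tok in tokens:
        if tok == SEP:
            clauses.append(tuple(current))
            current = []
        else:
            current.append(tok)
    if current:
        clauses.append(tuple(current))
    return clauses

# Hypothetical clausal-form fragment for "Tom is male".
drs = [("b1", "REF", "x1"),
       ("b1", "male", '"n.02"', "x1"),
       ("b1", "Name", "x1", '"tom"')]
seq = linearize(drs)
assert delinearize(seq) == drs
```

In practice the flat source sentence and the linearized DRS form the parallel training pairs fed to the encoder-decoder models (RNN or Transformer); at test time the model's output sequence is delinearized and scored against the gold DRS.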


ID: 94006751