Learning structured natural language representations for semantic parsing

Jianpeng Cheng, Siva Reddy, Vijay Saraswat, Mirella Lapata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We introduce a neural semantic parser which is interpretable and scalable. Our model converts natural language utterances to intermediate, domain-general natural language representations in the form of predicate-argument structures, which are induced with a transition system and subsequently mapped to target domains. The semantic parser is trained end-to-end using annotated logical forms or their denotations. We achieve the state of the art on SPADES and GRAPHQUESTIONS and obtain competitive results on GEOQUERY and WEBQUESTIONS. The induced predicate-argument structures shed light on the types of representations useful for semantic parsing and how these differ from linguistically motivated ones.

Original language: English
Title of host publication: ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)
Publisher: Association for Computational Linguistics (ACL)
Pages: 44-55
Number of pages: 12
Volume: 1
ISBN (Electronic): 9781945626753
Publication status: E-pub ahead of print - 4 Aug 2017
Event: 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017 - Vancouver, Canada
Duration: 30 Jul 2017 - 4 Aug 2017


