Weakly-supervised Neural Semantic Parsing with a Generative Ranker

Jianpeng Cheng, Maria Lapata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Weakly-supervised semantic parsers are trained on utterance-denotation pairs, treating logical forms as latent. The task is challenging due to the large search space and the spuriousness of logical forms. In this paper we introduce a neural parser-ranker system that addresses both challenges through three innovations: (a) candidate (tree-structured) logical forms in our model are ranked on two criteria, namely whether they are likely to execute to the correct denotation and the degree to which they preserve the meaning of the utterance; (b) a scheduled training procedure effectively balances the contributions of the two objectives; (c) a neurally encoded lexicon injects prior domain knowledge into the model. Experiments on three Freebase datasets demonstrate the effectiveness of our semantic parser, which achieves state-of-the-art results.
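The interplay between the two ranking criteria can be illustrated with a minimal sketch. Everything here is hypothetical (the field names, scores, and the simple weighted combination are illustrative assumptions, not the paper's actual scoring model): a spurious logical form may execute to the correct denotation yet match the utterance's meaning poorly, and a schedule weight trades the two objectives off.

```python
def rank_candidates(candidates, schedule_weight):
    """Rank candidate logical forms by a weighted combination of two
    illustrative scores: `exec_score` (likelihood of executing to the
    correct denotation) and `gen_score` (how well the logical form
    preserves the utterance's meaning). `schedule_weight` in [0, 1]
    stands in for a scheduled trade-off between the two objectives."""
    def combined(c):
        return (schedule_weight * c["exec_score"]
                + (1.0 - schedule_weight) * c["gen_score"])
    return sorted(candidates, key=combined, reverse=True)

# Both candidates execute to the correct denotation (equal exec_score),
# but the first is spurious: it matches the utterance's meaning poorly.
candidates = [
    {"lf": "count(children(obama))", "exec_score": 0.9, "gen_score": 0.2},
    {"lf": "children(obama)",        "exec_score": 0.9, "gen_score": 0.8},
]
best = rank_candidates(candidates, schedule_weight=0.3)[0]["lf"]
# The meaning-preserving candidate wins once gen_score carries weight.
```

The sketch shows why execution accuracy alone cannot separate spurious from correct logical forms, and why a second, meaning-preservation objective (weighted by a training schedule) is needed.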
Original language: English
Title of host publication: SIGNLL Conference on Computational Natural Language Learning (CoNLL 2018)
Place of publication: Brussels, Belgium
Publisher: Association for Computational Linguistics
Number of pages: 12
Publication status: Published - Oct 2018
Event: SIGNLL Conference on Computational Natural Language Learning - Brussels, Belgium
Duration: 31 Oct 2018 - 1 Nov 2018


Conference: SIGNLL Conference on Computational Natural Language Learning
Abbreviated title: CoNLL 2018


