Learning to Paraphrase for Question Answering
Question answering (QA) systems are sensitive to the many different ways natural language can express the same information need. In this paper we turn to paraphrases as a means of capturing this knowledge and present a general framework which learns felicitous paraphrases for various QA tasks. Our method is trained end-to-end using question-answer pairs as a supervision signal. A question and its paraphrases serve as input to a neural scoring model which assigns higher weights to the linguistic expressions most likely to yield correct answers. We evaluate our approach on QA over Freebase and on answer sentence selection. Experimental results on three datasets show that our framework consistently improves performance, achieving competitive results despite the use of simple QA models.
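The abstract describes combining a question's paraphrases by letting a neural scoring model weight them before answering. A minimal sketch of that idea, assuming hypothetical scorer outputs and per-paraphrase answer distributions (the actual model architecture and training objective are in the paper, not reproduced here):

```python
import math

def softmax(scores):
    """Convert raw paraphrase scores into normalized weights."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def combine_answers(paraphrase_scores, answer_probs):
    """Marginalize over paraphrases: p(a|q) = sum_i w_i * p(a|q_i),
    where w = softmax of the scoring model's outputs."""
    weights = softmax(paraphrase_scores)
    answers = answer_probs[0].keys()
    return {a: sum(w * p[a] for w, p in zip(weights, answer_probs))
            for a in answers}

# Toy example: two paraphrases of one question; scores and answer
# distributions below are illustrative, not from the paper.
scores = [2.0, 0.0]
probs = [{"Copenhagen": 0.9, "Aarhus": 0.1},
         {"Copenhagen": 0.4, "Aarhus": 0.6}]
combined = combine_answers(scores, probs)
```

Because the first paraphrase receives the higher weight, the combined distribution favors its answer; training end-to-end from question-answer pairs pushes the scorer toward paraphrases that yield correct answers.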
Title of host publication: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Publisher: Association for Computational Linguistics
Number of pages: 12
Publication status: Published - 11 Sep 2017
Event: EMNLP 2017: Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 7 Sep 2017 → 11 Sep 2017