Complementary Roles of Inference and Language Models in QA

Liang Cheng, Javad Hosseini, Mark Steedman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Answering open-domain questions through unsupervised methods poses challenges for both machine-reading (MR) and language model (LM)-based approaches. The MR-based approach suffers from sparsity issues in extracted knowledge graphs (KGs), while the performance of the LM-based approach depends heavily on the quality of the retrieved context for questions. In this paper, we compare these approaches and propose a novel methodology that leverages directional predicate entailment (inference) to address these limitations. We use entailment graphs (EGs), with natural language predicates as nodes and entailment as edges, to enhance parsed KGs by inferring unseen assertions, effectively mitigating the sparsity problem in the MR-based approach. We also show that EGs improve context retrieval for the LM-based approach. Additionally, we present a Boolean QA task, demonstrating that EGs exhibit directional inference capabilities comparable to large language models (LLMs). Our results highlight the importance of inference in open-domain QA and the improvements brought by leveraging EGs.
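As a rough illustration of how directed entailment edges can mitigate KG sparsity, the following is a minimal sketch (with invented predicates and helper names, not the paper's implementation): for each extracted triple, assertions are added for every predicate reachable along entailment edges.

```python
# Hypothetical sketch: densify a parsed KG using a directional entailment
# graph, where an edge p -> q means predicate p entails predicate q.

def entailed_closure(entails, pred):
    """All predicates reachable from `pred` via directed entailment edges."""
    seen, stack = set(), [pred]
    while stack:
        p = stack.pop()
        for q in entails.get(p, ()):
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return seen

def densify(kg_triples, entails):
    """Add unseen (subj, q, obj) assertions for every q entailed by p."""
    out = set(kg_triples)
    for subj, pred, obj in kg_triples:
        for q in entailed_closure(entails, pred):
            out.add((subj, q, obj))
    return out

# Toy entailment graph: "defeat" entails "play against", which entails "meet".
entails = {"defeat": ["play against"], "play against": ["meet"]}
kg = {("Arsenal", "defeat", "Chelsea")}
dense = densify(kg, entails)
# The densified KG can now answer e.g. "Did Arsenal play against Chelsea?"
# even though that assertion was never extracted from text.
```

Note that the entailment is directional: "defeat" licenses "play against", but a "play against" triple would not license "defeat", which is why a symmetric similarity measure would not suffice here.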
Original language: English
Title of host publication: Proceedings of the 2nd Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning
Publisher: Association for Computational Linguistics
Pages: 75–91
Number of pages: 17
ISBN (Electronic): 9798891760462
DOIs
Publication status: Published - 6 Dec 2023
Event: The 2023 Conference on Empirical Methods in Natural Language Processing, Singapore
Duration: 6 Dec 2023 – 10 Dec 2023
https://2023.emnlp.org/

Conference

Conference: The 2023 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2023
Country/Territory: Singapore
Period: 6/12/23 – 10/12/23
Internet address: https://2023.emnlp.org/

