Language Models are Poor Learners of Directional Inference

Tianyi Li, Javad Hosseini, Sabine Weber, Mark Steedman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

We examine LMs’ competence in directional predicate entailment through supervised fine-tuning with prompts. Our analysis shows that, contrary to their apparent success on standard NLI, LMs show limited ability to learn such directional inference; moreover, existing datasets fail to test directionality, and/or are riddled with artefacts that can be learnt as a proxy for entailment, yielding over-optimistic results. In response, we present BoOQA (Boolean Open QA), a robust multilingual evaluation benchmark for directional predicate entailment, extrinsic to existing training sets. On BoOQA, we establish baselines and show evidence that existing LM-prompting models are incompetent learners of directional entailment, in contrast to entailment graphs, which are however limited by sparsity.
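To make the notion of directional predicate entailment concrete, the following is a minimal sketch (not the paper's code) of probing an off-the-shelf NLI model for entailment in both directions. The checkpoint roberta-large-mnli is a public Hugging Face model; the predicate pair used is a hypothetical illustration of a one-way entailment.

```python
# Minimal sketch: score a candidate entailment in both directions with an
# off-the-shelf NLI model. Hypothetical illustration, not the paper's code.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # public checkpoint; label order below is its own
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that `premise` entails `hypothesis` under the NLI model."""
    inputs = tok(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # roberta-large-mnli labels: 0 = CONTRADICTION, 1 = NEUTRAL, 2 = ENTAILMENT
    return logits.softmax(dim=-1)[0, 2].item()

# A directional predicate entailment holds in one direction only:
# "X bought Y" entails "X owns Y", but "X owns Y" does not entail "X bought Y".
p, q = "Google bought YouTube.", "Google owns YouTube."
print(f"P(p => q) = {entailment_prob(p, q):.3f}")  # should be high
print(f"P(q => p) = {entailment_prob(q, p):.3f}")  # should be low
```

A model that assigns high entailment probability in both directions is treating the pair as mere paraphrase or similarity, rather than capturing the directionality that the paper's evaluation is designed to test.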
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2022
Publisher: Association for Computational Linguistics (ACL)
Pages: 903-921
Number of pages: 19
Publication status: Published - 7 Dec 2022
Event: The 2022 Conference on Empirical Methods in Natural Language Processing - Abu Dhabi National Exhibition Centre, Abu Dhabi, United Arab Emirates
Duration: 7 Dec 2022 - 11 Dec 2022
Conference number: 27
https://2022.emnlp.org/

Conference

Conference: The 2022 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2022
Country/Territory: United Arab Emirates
City: Abu Dhabi
Period: 7/12/22 - 11/12/22
Internet address: https://2022.emnlp.org/
