Investigating Negation in Pre-trained Vision-and-language Models

Radina Dobreva, Frank Keller

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Pre-trained vision-and-language models have achieved impressive results on a variety of tasks, including ones that require complex reasoning beyond object recognition. However, little is known about how they achieve these results or what their limitations are. In this paper, we focus on a particular linguistic capability, namely the understanding of negation. We borrow techniques from the analysis of language models to investigate the ability of pre-trained vision-and-language models to handle negation. We find that these models severely underperform in the presence of negation.
Original language: English
Title of host publication: Proceedings of the 4th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Editors: Jasmijn Bastings, Yonatan Belinkov, Emmanuel Dupoux, Mario Giulianelli, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad
Place of publication: Stroudsburg, PA, USA
Publisher: ACL Anthology
Pages: 350-362
Number of pages: 13
ISBN (Print): 978-1-955917-06-3
DOIs
Publication status: Published - 11 Nov 2021
Event: BlackboxNLP 2021: Analyzing and interpreting neural networks for NLP - Virtual
Duration: 11 Nov 2021 - 11 Nov 2021
https://blackboxnlp.github.io/

Conference

Conference: BlackboxNLP 2021
Period: 11/11/21 - 11/11/21
Internet address: https://blackboxnlp.github.io/
