Investigating Negation in Pre-trained Vision-and-language Models

Radina Dobreva, Frank Keller

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Pre-trained vision-and-language models have achieved impressive results on a variety of tasks, including ones that require complex reasoning beyond object recognition. However, little is known about how they achieve these results or what their limitations are. In this paper, we focus on a particular linguistic capability, namely the understanding of negation. We borrow techniques from the analysis of language models to investigate the ability of pre-trained vision-and-language models to handle negation. We find that these models severely underperform in the presence of negation.
Original language: English
Title of host publication: Proceedings of the 4th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Editors: Jasmijn Bastings, Yonatan Belinkov, Emmanuel Dupoux, Mario Giulianelli, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad
Place of publication: Stroudsburg, PA, USA
Publisher: ACL Anthology
Number of pages: 13
ISBN (Print): 978-1-955917-06-3
Publication status: Published - 11 Nov 2021
Event: BlackboxNLP 2021: Analyzing and Interpreting Neural Networks for NLP - Virtual
Duration: 11 Nov 2021 - 11 Nov 2021


Conference: BlackboxNLP 2021


