Abstract
Pre-trained vision-and-language models have achieved impressive results on a variety of tasks, including ones that require complex reasoning beyond object recognition. However, little is known about how they achieve these results or what their limitations are. In this paper, we focus on a particular linguistic capability, namely the understanding of negation. We borrow techniques from the analysis of language models to investigate the ability of pre-trained vision-and-language models to handle negation. We find that these models severely underperform in the presence of negation.
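This record does not spell out the paper's experimental setup, but the "techniques from the analysis of language models" it alludes to typically include cloze-style probing: comparing a model's masked-token predictions for affirmative versus negated prompts. The sketch below is an illustrative, hedged example of that general technique, not the authors' actual method; the model (`bert-base-uncased`, a text-only stand-in for a vision-and-language model) and the prompts are assumptions for demonstration.

```python
# Minimal sketch of cloze-style negation probing, assuming a text-only
# masked language model as a stand-in for a V&L model (hypothetical
# setup; the paper's actual models, prompts, and data are not given here).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

affirmative = "There is a [MASK] in the picture."
negated = "There is no [MASK] in the picture."

# A model sensitive to negation should shift its prediction
# distribution between the affirmative and negated prompts.
for prompt in (affirmative, negated):
    print(prompt)
    for candidate in fill(prompt, top_k=3):
        print(f"  {candidate['token_str']}: {candidate['score']:.3f}")
```

For a vision-and-language model, the same comparison would additionally condition on image features, so that underperformance on the negated prompt can be measured against the visual evidence.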
Original language | English
---|---
Title of host publication | Proceedings of the 4th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Editors | Jasmijn Bastings, Yonatan Belinkov, Emmanuel Dupoux, Mario Giulianelli, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad
Place of Publication | Stroudsburg, PA, USA
Publisher | ACL Anthology
Pages | 350-362
Number of pages | 13
ISBN (Print) | 978-1-955917-06-3
Publication status | Published - 11 Nov 2021
Event | BlackboxNLP 2021: Analyzing and Interpreting Neural Networks for NLP (Virtual), 11 Nov 2021 → 11 Nov 2021, https://blackboxnlp.github.io/
Conference
Conference | BlackboxNLP 2021
---|---
Period | 11/11/21 → 11/11/21
Internet address | https://blackboxnlp.github.io/