Animate, or inanimate, that is the question for large language models

Leonardo Ranaldi, Giulia Pucci, Fabio Massimo Zanzotto

Research output: Working paper › Preprint

Abstract / Description of output

The cognitive essence of humans is deeply intertwined with the concept of animacy, which plays an essential role in shaping memory, vision, and multi-layered language understanding. Although animacy surfaces in language through nuanced constraints on verbs and adjectives, it is also learned and refined through extralinguistic information. Similarly, we assume that the limited abilities of LLMs to understand natural language when processing animacy stem from the fact that these models are trained exclusively on text. Hence, the question this paper aims to answer: can LLMs, in their digital wisdom, process animacy in a way similar to humans? We propose a systematic analysis via prompting approaches. In particular, we probe different LLMs by prompting them with animate, inanimate, usual, and strange contexts. Results reveal that, although LLMs have been trained predominantly on textual data, they exhibit human-like behavior when faced with typical animate and inanimate entities, in line with earlier studies. Moreover, LLMs can adapt to unconventional situations, recognizing oddities as animate without needing the unspoken cognitive triggers humans rely on to resolve animacy.
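The abstract describes the probing setup only at a high level. As a rough illustration of what such a prompting probe might look like, the sketch below builds animacy-judgement prompts for usual and strange contexts and collects one-word answers from a model. The `query_llm` stub, the prompt wording, and the example sentences are assumptions for illustration, not the authors' actual materials or models.

```python
# Minimal sketch of an animacy prompting probe (illustrative only; the paper's
# actual prompts, stimuli, models, and scoring are not reproduced here).
# `query_llm` is a hypothetical stub standing in for whichever LLM API is used.

from typing import Callable

# Contexts pairing typical and atypical ("strange") entities with animate/inanimate roles.
# These example sentences are assumptions, not the paper's stimuli.
CONTEXTS = {
    "animate_usual":     "The dog chased the ball across the yard.",
    "inanimate_usual":   "The rock rolled down the hill.",
    "animate_strange":   "The teapot apologized and tiptoed out of the kitchen.",
    "inanimate_strange": "The scientist was stacked neatly on the shelf.",
}

PROMPT_TEMPLATE = (
    "Read the sentence below and answer with a single word, 'animate' or "
    "'inanimate', describing the grammatical subject.\n\n"
    "Sentence: {sentence}\nAnswer:"
)

def probe_animacy(query_llm: Callable[[str], str]) -> dict[str, str]:
    """Send each context to the model and collect its animacy judgement."""
    results = {}
    for label, sentence in CONTEXTS.items():
        prompt = PROMPT_TEMPLATE.format(sentence=sentence)
        results[label] = query_llm(prompt).strip().lower()
    return results

if __name__ == "__main__":
    # Toy stand-in that always answers "animate"; replace with a real LLM call.
    dummy_model = lambda prompt: "animate"
    for label, judgement in probe_animacy(dummy_model).items():
        print(f"{label:18s} -> {judgement}")
```

Comparing the model's judgements across the four context types would then indicate whether it tracks typical animacy and how it treats unconventional, animacy-violating sentences.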
Original language: English
Publisher: arXiv
Publication status: Published - 12 Aug 2024
