Learning the Effects of Physical Actions in a Multi-modal Environment

Gautier Dagan, Frank Keller, Alex Lascarides

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Large Language Models (LLMs) handle physical commonsense information inadequately. Because they are trained in a disembodied setting, LLMs often fail to predict an action’s outcome in a given environment. Yet predicting the effects of an action before it is executed is crucial for planning, where coherent sequences of actions are often needed to achieve a goal. We therefore introduce the multi-modal task of predicting the outcomes of actions solely from realistic sensory inputs (images and text). Next, we extend an LLM to model latent representations of objects to better predict action outcomes in an environment. We show that multi-modal models can capture physical commonsense when augmented with visual information. Finally, we evaluate our model’s performance on novel actions and objects and find that combining modalities helps models generalize and learn physical commonsense reasoning better.
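To make the abstract's setup concrete, the sketch below shows one plausible way to fuse an image of the scene with a tokenized action description and predict both a latent post-action object state and a categorical outcome. It is a minimal, hypothetical illustration, not the authors' architecture: all module names, dimensions, and heads are assumptions introduced for exposition.

```python
# Hypothetical sketch of a multi-modal action-effect model (NOT the paper's model).
# An image encoder and a text embedding feed a small transformer; two heads predict
# a latent object-state vector after the action and an outcome class.

import torch
import torch.nn as nn


class ActionEffectModel(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, n_outcomes=4):
        super().__init__()
        # Text side: embed the action command (e.g. "push the cup off the table").
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Vision side: a tiny CNN stands in for a pretrained image encoder.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, d_model),
        )
        # Fusion: transformer encoder over [image token; text tokens].
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        # Heads: latent object state after the action, and an outcome label.
        self.object_state_head = nn.Linear(d_model, d_model)
        self.outcome_head = nn.Linear(d_model, n_outcomes)

    def forward(self, image, action_tokens):
        img_tok = self.image_encoder(image).unsqueeze(1)   # (B, 1, d_model)
        txt_tok = self.text_embed(action_tokens)           # (B, T, d_model)
        fused = self.fusion(torch.cat([img_tok, txt_tok], dim=1))
        pooled = fused.mean(dim=1)
        return self.object_state_head(pooled), self.outcome_head(pooled)


if __name__ == "__main__":
    model = ActionEffectModel()
    image = torch.randn(2, 3, 64, 64)              # batch of scene images
    action = torch.randint(0, 10000, (2, 12))      # tokenized action text
    state, outcome = model(image, action)
    print(state.shape, outcome.shape)              # (2, 256) (2, 4)
```

The key design point the abstract alludes to is that the visual token gives the model grounded information about object states that text alone cannot provide, which is what allows the combined modalities to generalize to novel actions and objects.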
Original language: English
Title of host publication: Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023)
Editors: Andreas Vlachos, Isabelle Augenstein
Publisher: Association for Computational Linguistics
Pages: 133-148
Number of pages: 16
Publication status: Published - 2 May 2023
Event: The 17th Conference of the European Chapter of the Association for Computational Linguistics - Valamar Lacroma, Dubrovnik, Croatia
Duration: 2 May 2023 - 6 May 2023
Conference number: 17
https://2023.eacl.org/

Conference

Conference: The 17th Conference of the European Chapter of the Association for Computational Linguistics
Abbreviated title: EACL 2023
Country/Territory: Croatia
City: Dubrovnik
Period: 2/05/23 - 6/05/23
Internet address: https://2023.eacl.org/
