An overview of using large language models for the symbol grounding task in ABC repair system

Pak Yin Chan*, Xue Li, Alan Bundy

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

The ABC Theory Repair System (ABC) has demonstrated success in helping users repair faulty theories using distinct techniques. However, ABC-repaired theories can be harder to comprehend because of the dummy constants and predicates that ABC introduces. In this paper, we propose a grounding system that incorporates Large Language Models (LLMs) to give these dummy items meaningful names. By applying ABC and grounding alternately, the resulting theory is both fault-free and semantically meaningful. Moreover, our study shows that LLMs exhibit common knowledge even without fine-tuning, and that their grounding performance improves when sufficient background is provided or when more candidate answers are requested.
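To make the grounding step concrete, the sketch below illustrates one plausible shape it could take: given a dummy symbol introduced by ABC and the axioms that mention it as background, an LLM is asked for several candidate names. This is a minimal illustration, not the authors' implementation; the prompt wording, the `ground_dummy` helper, the model name, and the example axioms are all assumptions, and the OpenAI Python client is used only as a stand-in for whichever LLM interface the system targets.

```python
# A minimal sketch (not the paper's implementation) of grounding a dummy
# symbol: supply the axioms mentioning it as background, then request
# several candidate names in one call ("more returns").
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ground_dummy(dummy: str, axioms: list[str], n_candidates: int = 5) -> list[str]:
    """Ask an LLM for n_candidates meaningful names for a dummy symbol."""
    background = "\n".join(axioms)
    prompt = (
        f"The logical theory below contains a placeholder symbol '{dummy}'.\n"
        f"{background}\n"
        f"Suggest one natural-language name that '{dummy}' could stand for. "
        f"Answer with the name only."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        n=n_candidates,  # request several candidate groundings at once
    )
    return [choice.message.content.strip() for choice in response.choices]


# Hypothetical example: name a dummy predicate in an ABC-repaired theory.
candidates = ground_dummy(
    "dummy_pred1",
    ["dummy_pred1(X) :- penguin(X).", "bird(X) :- dummy_pred1(X)."],
)
print(candidates)
```

Under this reading, "providing sufficient background" corresponds to including more of the theory's axioms in the prompt, and "asking for more returns" corresponds to raising `n_candidates`.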
Original language: English
Title of host publication: Cognitive AI 2023
Publisher: CEUR-WS
Publication status: Accepted/In press - 25 Aug 2023
Event: Cognitive AI 2023 - Bari, Italy
Duration: 13 Nov 2023 - 15 Nov 2023
Conference number: 1
https://cognitive-ai.netlify.app/

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR
ISSN (Electronic): 1613-0073

Workshop

Workshop: Cognitive AI 2023
Abbreviated title: CogAI 2023
Country/Territory: Italy
City: Bari
Period: 13/11/23 - 15/11/23
Internet address: https://cognitive-ai.netlify.app/

Keywords / Materials (for Non-textual outputs)

  • Large language model
  • Closed-book question answering
  • Faulty logical theory repair
  • Automated theorem proving
