Knowledge representation and acquisition in the era of large language models: Reflections on learning to reason via PAC-Semantics

Ionela Mocanu, Vaishak Belle*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

Human beings are known for their remarkable ability to comprehend, analyse, and interpret common sense knowledge. This ability is critical for exhibiting intelligent behaviour, often defined as a mapping from beliefs to actions, which has led to attempts to formalize and capture explicit representations in the form of databases, knowledge bases, and ontologies in AI agents.

But in the era of large language models (LLMs), this emphasis might seem unnecessary. After all, these models already capture much of the breadth of human knowledge and can (presumably) draw appropriate inferences from it according to some innate logical rules. The question, then, is whether they can also be trained to perform mathematical computations.

Although the reliability of such models is still being studied and no consensus has yet emerged, early results suggest that they do not offer logically and mathematically consistent results. In this short summary article, we articulate the motivations for still caring about logical/symbolic artefacts and representations, and report on recent progress in learning to reason via the so-called probably approximately correct (PAC) semantics.
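
To give a concrete sense of the PAC-semantics referred to above, the following toy sketch (in Python, and not the article's own algorithm or code) illustrates Valiant's notion of degree of validity: a formula is (1 - eps)-valid with respect to a distribution over assignments if it holds with probability at least 1 - eps, which can be estimated from sampled examples. The propositions (bird, penguin, flies), the example distribution, and all helper names here are hypothetical.

# Toy illustration of PAC-semantic validity (an assumption-laden sketch,
# not the method reported in the article): a query is deemed (1 - eps)-valid
# with respect to an example distribution if it holds on at least a
# (1 - eps) fraction of sampled assignments.
import random

def sample_assignment():
    # Hypothetical distribution over three propositions; in practice,
    # examples would come from the environment or a dataset.
    bird = random.random() < 0.9
    penguin = bird and random.random() < 0.05
    flies = bird and (not penguin) and random.random() < 0.97
    return {"bird": bird, "penguin": penguin, "flies": flies}

def estimate_validity(query, n_samples=10_000):
    # Empirical fraction of sampled assignments on which the query holds.
    hits = sum(query(sample_assignment()) for _ in range(n_samples))
    return hits / n_samples

if __name__ == "__main__":
    # Query: "birds that are not penguins fly", read as a material implication.
    query = lambda w: (not (w["bird"] and not w["penguin"])) or w["flies"]
    eps = 0.05
    v = estimate_validity(query)
    print(f"estimated validity = {v:.3f}; (1 - eps)-valid? {v >= 1 - eps}")

Under this semantics, such defeasible generalisations can count as approximately valid even though they are not logical tautologies, which is the sense in which PAC-semantics weakens classical entailment.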
Original language: English
Article number: 100036
Pages (from-to): 1-7
Journal: Natural Language Processing Journal
Volume: 5
Publication status: Published - 31 Oct 2023

Keywords

  • PAC-semantics
  • logical knowledge bases
  • knowledge acquisition
