ThinkSum: Probabilistic reasoning over sets using large language models

Batu Ozturkler, Nikolay Malkin, Zhen Wang, Nebojsa Jojic

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Large language models (LLMs) have a substantial capacity for high-level analogical reasoning: reproducing patterns in linear text that occur in their training data (zero-shot evaluation) or in the provided context (few-shot in-context learning). However, recent studies show that even the most advanced LLMs fail in scenarios that require reasoning over multiple objects or facts and making sequences of logical deductions. We propose a two-stage probabilistic inference paradigm, ThinkSum, which reasons over sets of objects or facts in a structured manner. In the first stage (Think – retrieval of associations), an LLM is queried in parallel over a set of phrases extracted from the prompt or an auxiliary model call. In the second stage (Sum – probabilistic inference or reasoning), the results of these queries are aggregated to make the final prediction. We demonstrate the possibilities and advantages of ThinkSum on the BIG-bench suite of LLM evaluation tasks, achieving improvements over the state of the art using GPT-family models on thirteen difficult tasks, often with far smaller model variants. We also compare and contrast ThinkSum with other proposed modifications to direct prompting of LLMs, such as variants of chain-of-thought prompting. Our results suggest that because the probabilistic inference in ThinkSum is performed outside of calls to the LLM, ThinkSum is less sensitive to prompt design, yields more interpretable predictions, and can be flexibly combined with latent variable models to extract structured knowledge from LLMs. Overall, our proposed paradigm represents a promising approach for enhancing the reasoning capabilities of LLMs.
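
The following is a minimal, self-contained sketch of the Think/Sum pattern described in the abstract, not the authors' implementation: the lm_logprob scoring interface, the prompt template, and the toy scores are all assumptions introduced for illustration. Think issues one independent (hence parallelizable) LM query per element of the set, and Sum aggregates the resulting log-probabilities outside the model.

import math

# Hypothetical stand-in for an LM scoring call; a real implementation would
# return the model's log-probability of `continuation` given `prompt`.
def lm_logprob(prompt: str, continuation: str) -> float:
    toy_scores = {
        ("A dog is", " an animal"): -0.2, ("A dog is", " a vehicle"): -4.0,
        ("A cat is", " an animal"): -0.3, ("A cat is", " a vehicle"): -3.5,
    }
    return toy_scores.get((prompt, continuation), -5.0)

def think_sum(items, candidates):
    # Think: one independent LM query per (item, candidate) pair; the
    # queries can run in parallel since none depends on another.
    totals = {c: sum(lm_logprob(f"A {item} is", c) for item in items)
              for c in candidates}
    # Sum: aggregate log-probabilities outside the LM and normalize them
    # (softmax over the summed scores) into a distribution over answers.
    z = math.log(sum(math.exp(v) for v in totals.values()))
    return {c: math.exp(v - z) for c, v in totals.items()}

# Which description best fits the whole set {dog, cat}?
print(think_sum(["dog", "cat"], [" an animal", " a vehicle"]))

In this toy run, summing per-item log-probabilities and normalizing yields a distribution that strongly favors " an animal"; because the aggregation happens outside the LM, the intermediate scores remain inspectable, which is the sense in which the abstract calls the predictions interpretable and less sensitive to prompt design.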
Original language: English
Title of host publication: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics
Publisher: Association for Computational Linguistics
Pages: 1216-1239
Number of pages: 24
Volume: 1
ISBN (Electronic): 9781959429722
Publication status: Published - 14 Jul 2023
Event: The 61st Annual Meeting of the Association for Computational Linguistics - Westin Harbour Castle, Toronto, Canada
Duration: 9 Jul 2023 – 14 Jul 2023
Conference number: 61
https://2023.aclweb.org/

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
Publisher: Association for Computational Linguistics
Volume: 1
ISSN (Print): 0736-587X

Conference

Conference: The 61st Annual Meeting of the Association for Computational Linguistics
Abbreviated title: ACL 2023
Country/Territory: Canada
City: Toronto
Period: 9/07/23 – 14/07/23
Internet address: https://2023.aclweb.org/
