Adaptive Activation Steering: A Tuning-Free LLM Truthfulness Improvement Method for Diverse Hallucinations Categories

Tianlong Wang, Xianfeng Jiao, Yinghao Zhu, Zhongzhi Chen, Yifan He, Xu Chu, Junyi Gao, Yasha Wang, Liantao Ma

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recent studies have indicated that Large Language Models (LLMs) harbor an inherent understanding of truthfulness, yet often fail to express it consistently and generate false statements. This gap between "knowing" and "telling" poses a challenge for ensuring the truthfulness of generated content. Inspired by recent work showing that human-interpretable concepts are encoded linearly within large language models, we treat truthfulness as a linearly encoded concept within LLMs and introduce Adaptive Activation Steering (ACT), a tuning-free method that adaptively shifts an LLM's activations in the "truthful" direction during inference. ACT addresses diverse categories of hallucinations by utilizing diverse truthfulness-related steering vectors and adjusting the steering intensity adaptively. Applied as an add-on across various models, ACT significantly improves truthfulness in LLaMA (↑142%), LLaMA2 (↑24%), Alpaca (↑36%), Vicuna (↑28%), LLaMA2-Chat (↑19%), and LLaMA3 (↑34%). Furthermore, we verify ACT's scalability across larger models (13B, 33B, 65B), underscoring its adaptability to large-scale language models. Our code is available at https://github.com/tianlwang/ACT.
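As a rough illustration of the steering idea the abstract describes, the sketch below derives a truthfulness direction as the mean difference between "truthful" and "untruthful" activations and shifts a hidden state along it, with an intensity that shrinks as the state already aligns with the direction. Both the mean-difference construction and the cosine-based adaptive intensity are illustrative assumptions; the paper's exact formulation of ACT's steering vectors and intensity schedule may differ.

```python
import numpy as np

def steering_vector(truthful_acts: np.ndarray, untruthful_acts: np.ndarray) -> np.ndarray:
    """Unit direction pointing from the mean untruthful activation
    toward the mean truthful activation (illustrative construction)."""
    v = truthful_acts.mean(axis=0) - untruthful_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def adaptive_steer(h: np.ndarray, v: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Shift activation h along unit vector v, steering less when h
    already points in the 'truthful' direction (assumed intensity rule)."""
    alignment = float(h @ v) / (np.linalg.norm(h) + 1e-8)  # cosine of angle to v
    intensity = alpha * max(0.0, 1.0 - alignment)          # adaptive scaling
    return h + intensity * v

# Toy usage: synthetic activations standing in for per-layer hidden states.
rng = np.random.default_rng(0)
truthful = rng.normal(1.0, 0.1, size=(8, 4))
untruthful = rng.normal(-1.0, 0.1, size=(8, 4))
v = steering_vector(truthful, untruthful)
steered = adaptive_steer(np.zeros(4), v, alpha=2.0)
```

In an actual LLM this shift would typically be applied inside a forward hook on selected transformer layers at inference time, which keeps the method tuning-free: no model weights are updated.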
Original language: English
Title of host publication: WWW '25: Proceedings of the ACM on Web Conference 2025
Publisher: ACM Association for Computing Machinery
Pages: 2562-2578
Number of pages: 17
ISBN (Electronic): 979-8-4007-1274-6
DOIs
Publication status: Published - 22 Apr 2025
Event: The ACM Web Conference 2025 - ICC Sydney: International Convention & Exhibition Centre, Sydney, Australia
Duration: 28 Apr 2025 - 2 May 2025
https://www2025.thewebconf.org/

Conference

Conference: The ACM Web Conference 2025
Abbreviated title: WWW '25
Country/Territory: Australia
City: Sydney
Period: 28/04/25 - 02/05/25

Keywords / Materials (for Non-textual outputs)

  • large language model
  • hallucination
  • tuning-free

