Analysing the residual stream of language models under knowledge conflicts

Yu Zhao, Xiaotang Du, Giwon Hong, Aryo Pradipta Gema, Alessio Devoto, Hongru Wang, Xuanli He, Kam-Fai Wong, Pasquale Minervini

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Large language models (LLMs) can store a significant amount of factual knowledge in their parameters. However, their parametric knowledge may conflict with the information provided in the context; such conflicts can lead to undesirable model behaviour, such as reliance on outdated or incorrect information. In this work, we investigate whether LLMs can identify knowledge conflicts and whether it is possible to tell which source of knowledge the model will rely on by analysing its residual stream. Through probing tasks, we find that LLMs internally register the signal of a knowledge conflict in the residual stream, and that this signal can be accurately detected by probing intermediate model activations. This allows us to detect conflicts before an answer is generated, without modifying the input or the model parameters. Moreover, we find that the residual stream shows significantly different patterns when the model resolves a conflict by relying on contextual knowledge versus parametric knowledge. This pattern can be used to estimate the model's behaviour when a conflict occurs and to prevent unexpected answers before they are produced. Our analysis offers insights into how LLMs internally manage knowledge conflicts and provides a foundation for developing methods to control the knowledge selection process.
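
To make the probing setup concrete, the following is a minimal sketch of how such a probe could be trained; it is not the authors' released code. Residual-stream activations are read out at an intermediate layer of an open decoder-only LM, and a linear classifier is fit to separate conflicting from non-conflicting inputs. The model name ("gpt2"), the layer index, and the toy labelled prompts are all illustrative assumptions; the informative layer and the conflict dataset would differ in practice.

    # Sketch: linear probe on residual-stream activations for conflict detection.
    # Assumptions: model choice, layer index, and toy prompts are illustrative only.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from sklearn.linear_model import LogisticRegression

    MODEL = "gpt2"  # assumption: any causal LM exposing hidden states works similarly
    LAYER = 6       # assumption: an intermediate layer; the best layer varies by model

    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL)
    model.eval()

    def residual_activation(prompt: str) -> torch.Tensor:
        """Residual-stream state at the prompt's last token, after layer LAYER."""
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs, output_hidden_states=True)
        # hidden_states[l] is the residual stream after block l; [0, -1] = last token
        return out.hidden_states[LAYER][0, -1]

    # Toy labelled prompts (hypothetical): context contradicting the model's
    # parametric knowledge (label 1) vs. context agreeing with it (label 0).
    prompts = [
        ("Context: The Eiffel Tower is in Rome. Q: Where is the Eiffel Tower?", 1),
        ("Context: The Eiffel Tower is in Paris. Q: Where is the Eiffel Tower?", 0),
        ("Context: Water boils at 50 C at sea level. Q: When does water boil?", 1),
        ("Context: Water boils at 100 C at sea level. Q: When does water boil?", 0),
    ]
    X = torch.stack([residual_activation(p) for p, _ in prompts]).numpy()
    y = [label for _, label in prompts]

    probe = LogisticRegression(max_iter=1000).fit(X, y)  # the linear probe
    print("train accuracy:", probe.score(X, y))

Because the probe reads activations computed during the forward pass of the prompt alone, the conflict signal is available before any answer token is generated, which matches the detection setting described in the abstract.
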
Original language: English
Title of host publication: Proceedings of the 38th Conference on Neural Information Processing Systems
Publisher: Neural Information Processing Systems Foundation (NeurIPS)
Pages: 1-12
Number of pages: 12
Volume: 37
Publication status: Accepted/In press - 9 Oct 2024
Event: NeurIPS 2024 - Workshop on Foundation Model Interventions, Vancouver Convention Center, Vancouver, Canada
Duration: 15 Dec 2024 - 15 Dec 2024
https://sites.google.com/view/mint-2024

Workshop

Workshop: NeurIPS 2024 - Workshop on Foundation Model Interventions
Abbreviated title: MINT 2024
Country/Territory: Canada
City: Vancouver
Period: 15/12/24 - 15/12/24
Internet address: https://sites.google.com/view/mint-2024

Keywords / Materials (for Non-textual outputs)

  • computation and language
