User tampering in reinforcement learning recommender systems

Atoosa Kasirzadeh, Charles Evans

Research output: Chapter in Book/Report/Conference proceedingConference contribution

Abstract / Description of output

In this paper, we introduce new formal methods and provide empirical evidence to highlight a unique safety concern prevalent in reinforcement learning (RL)-based recommendation algorithms – ’user tampering.’ User tampering is a situation where an RL-based recommender system may manipulate a media user’s opinions through its suggestions as part of a policy to maximize long-term user engagement. We use formal techniques from causal modeling to critically analyze prevailing solutions proposed in the literature for implementing scalable RL-based recommendation systems, and we observe that these methods do not adequately prevent user tampering. Moreover, we evaluate existing mitigation strategies for reward tampering issues, and show that these methods are insufficient in addressing the distinct phenomenon of user tampering within the context of recommendations. We further reinforce our findings with a simulation study of an RL-based recommendation system focused on the dissemination of political content. Our study shows that a Q-learning algorithm consistently learns to exploit its opportunities to polarize simulated users with its early recommendations in order to have more consistent success with subsequent recommendations that align with this induced polarization. Our findings emphasize the necessity for developing safer RL-based recommendation systems and suggest that achieving such safety would require a fundamental shift in the design away from the approaches we have seen in the recent literature.
Original languageEnglish
Title of host publicationAIES '23
Subtitle of host publicationProceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society
EditorsFrancesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield, Alex John
Place of PublicationNew York
PublisherAssociation for Computing Machinery (ACM)
Pages58–69
Number of pages12
ISBN (Electronic)9798400702310
DOIs
Publication statusPublished - 29 Aug 2023
EventSixth AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society - Palais des congrès de Montréal, Montréal, Canada
Duration: 8 Aug 202310 Aug 2023
https://www.aies-conference.com/2023/

Publication series

NameProceedings of the AAAI/ACM Conference on AI, Ethics, and Society
PublisherAssociation for Computing Machinery (ACM)
ISSN (Electronic)2168-4081

Conference

ConferenceSixth AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society
Abbreviated titleAIES 2023
Country/TerritoryCanada
CityMontréal
Period8/08/2310/08/23
Internet address

Keywords / Materials (for Non-textual outputs)

  • AI safety
  • AI ethics
  • recommendation systems
  • recommender systems
  • reinforcement learning
  • value alignment

Fingerprint

Dive into the research topics of 'User tampering in reinforcement learning recommender systems'. Together they form a unique fingerprint.

Cite this