Abstract
Reciprocity is a key determinant of human behavior and has been well documented in the psychological and behavioral economics literature. This paper shows that reciprocity has significant implications for computer agents that interact with people over time. It proposes a model for predicting people's actions in multiple bilateral rounds of interaction. The model represents reciprocity as a tradeoff between two social factors: the extent to which players reward or retaliate against others' past actions (retrospective reasoning), and their estimates of the future ramifications of their actions (prospective reasoning). The model is trained and evaluated over a series of negotiation rounds that vary players' possible strategies as well as their benefit from potential strategies at each round. Results show that reasoning about reciprocal behavior significantly improves the predictive power of the model, enabling it to outperform alternative models that do not reason about reciprocity, or that play various game-theoretic equilibria. These results indicate that computers that interact with people need to represent and learn the social factors that affect people's play when they interact over time.
Original language | English |
---|---|
Title of host publication | Twenty-Second Conference on Artificial Intelligence (AAAI-07) |
Place of Publication | Vancouver, British Columbia, Canada |
Publisher | AAAI Press |
Pages | 815-821 |
Number of pages | 6 |
Volume | 22 |
Publication status | Published - 2007 |
Event | 22nd National Conference on Artificial Intelligence - Vancouver, Canada. Duration: 22 Jul 2007 → 26 Jul 2007. https://www.aaai.org/Conferences/AAAI/aaai07.php |
Conference
Conference | 22nd National Conference on Artificial Intelligence |
---|---|
Abbreviated title | AAAI-07 |
Country/Territory | Canada |
City | Vancouver |
Period | 22/07/07 → 26/07/07 |
Internet address | https://www.aaai.org/Conferences/AAAI/aaai07.php |