Using trust for detecting deceitful agents in artificial societies

Michael Schillo, Petra Funk, Michael Rovatsos

Research output: Contribution to journal › Article › peer-review

Abstract

Trust is one of the most important concepts guiding decision-making and contracting in human societies. In artificial societies, this concept has until recently been neglected. The benevolence assumption built into many multiagent systems can have hazardous consequences when dealing with deceit in open systems. The aim of this paper is to establish a mechanism that helps agents cope with environments inhabited by both selfish and cooperative entities. This is achieved by enabling agents to evaluate trust in others. A formalization and an algorithm for trust are presented so that agents can autonomously deal with deception and identify trustworthy parties in open systems. The approach is twofold: first, agents observe the behavior of others and thereby collect the information needed to establish an initial trust model; second, in order to adapt quickly to new or rapidly changing environments, agents are also enabled to make use of observations reported by other agents. The practical relevance of these ideas is demonstrated by a direct mapping from the scenario to electronic commerce.
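To illustrate the kind of trust evaluation the abstract describes, the sketch below combines an agent's own interaction history with witness reports discounted by the trust placed in the witness. This is a minimal illustrative model in Python, not the paper's formalization: the class name TrustModel, the honest-encounter ratio, the witness weighting, and the prior parameters are all assumptions introduced here.

```python
from collections import defaultdict

class TrustModel:
    """Minimal trust estimator for one observing agent (illustrative sketch).

    Trust in a partner is estimated as the fraction of encounters in which
    the partner behaved honestly. Reports from witnesses are folded in as
    fractional observations, weighted by the trust placed in the witness,
    so an agent can bootstrap its model from others' experience.
    """

    def __init__(self, prior_honest=1.0, prior_total=2.0):
        # Laplace-style prior (an assumption): with no data, trust starts at 0.5.
        self.honest = defaultdict(lambda: prior_honest)
        self.total = defaultdict(lambda: prior_total)

    def observe(self, partner, was_honest):
        """Record one directly observed encounter with `partner`."""
        self.total[partner] += 1.0
        if was_honest:
            self.honest[partner] += 1.0

    def incorporate_report(self, witness, partner, was_honest):
        """Fold in a witness's report, discounted by trust in the witness."""
        weight = self.trust(witness)
        self.total[partner] += weight
        if was_honest:
            self.honest[partner] += weight

    def trust(self, partner):
        """Estimated probability that `partner` behaves honestly."""
        return self.honest[partner] / self.total[partner]


if __name__ == "__main__":
    model = TrustModel()
    # Three honest encounters and one deceitful one with a hypothetical seller.
    for honest in (True, True, False, True):
        model.observe("seller_a", honest)
    # A witness report counts for less than a direct observation.
    model.incorporate_report(witness="agent_b", partner="seller_a",
                             was_honest=False)
    print(f"trust in seller_a: {model.trust('seller_a'):.2f}")
```

Discounting witness reports by the trust held in the witness is one simple way to address the lying-witness problem the paper's open-systems setting raises; a fuller treatment would also need to consider collusion among deceitful agents.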
Original language: English
Pages (from-to): 825-848
Number of pages: 24
Journal: Applied Artificial Intelligence
Volume: 14
Issue number: 8
DOIs
Publication status: Published - 2000
