Trust and matching algorithms for selecting suitable agents

Nardine Osman, Carles Sierra, Fiona McNeill, Juan Pane, John Debenham

Research output: Contribution to journal › Article › peer-review


This article addresses the problem of finding suitable agents to collaborate with for a given interaction in distributed open systems, such as multiagent and P2P systems. The agent in question is given the chance to describe its confidence in its own capabilities. However, since agents may be malicious or misinformed, may suffer from miscommunication, and so on, one also needs to estimate how much that agent can be trusted. This article proposes a novel trust model that calculates the expectation about an agent's future performance in a given context by assessing both the agent's willingness and capability through the semantic comparison of the current context with the agent's performance in past similar experiences. The proposed mechanism for assessing trust may be applied to any real-world application where past commitments are recorded and observations are made that assess those commitments; the model can then calculate one's trust in another with respect to a future commitment by assessing the other's past performance.
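The core idea in the abstract — estimating trust as an expectation over an agent's past performance in semantically similar contexts — can be sketched as follows. This is an illustrative toy, not the paper's actual model: the `Experience` record, the Jaccard set overlap standing in for semantic matching, and the similarity-weighted average are all assumptions made here for clarity.

```python
# Hypothetical sketch: trust in an agent for a new commitment, estimated
# from its observed performance in semantically similar past contexts.
# The names and the similarity measure are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Experience:
    context: set      # terms describing the past commitment's context
    outcome: float    # observed performance in [0, 1]

def similarity(ctx_a: set, ctx_b: set) -> float:
    """Jaccard overlap as a crude stand-in for semantic matching."""
    if not ctx_a and not ctx_b:
        return 0.0
    return len(ctx_a & ctx_b) / len(ctx_a | ctx_b)

def trust(history: list, context: set) -> float:
    """Similarity-weighted expectation of future performance."""
    weights = [similarity(e.context, context) for e in history]
    total = sum(weights)
    if total == 0:
        return 0.5  # no relevant evidence: fall back to a neutral prior
    return sum(w * e.outcome for w, e in zip(weights, history)) / total

history = [
    Experience({"deliver", "books", "urgent"}, 0.9),
    Experience({"deliver", "food"}, 0.4),
]
print(round(trust(history, {"deliver", "books"}), 2))  # → 0.73
```

Experiences whose context overlaps more with the new commitment dominate the estimate, so a good record on closely related tasks outweighs a poor record on unrelated ones.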
Original language: English
Article number: 16
Pages (from-to): 1-39
Number of pages: 39
Journal: ACM Transactions on Intelligent Systems and Technology
Issue number: 1
Publication status: Published - 31 Dec 2013


  • Algorithms
  • Semantic matching
  • Trust and reputation

