This article addresses the problem of finding suitable agents to collaborate with for a given interaction in distributed open systems, such as multiagent and P2P systems. The agent in question is given the chance to describe its confidence in its own capabilities. However, since agents may be malicious, misinformed, or affected by miscommunication, one also needs to calculate how much that agent can be trusted. This article proposes a novel trust model that computes the expectation about an agent's future performance in a given context by assessing both the agent's willingness and its capability, through a semantic comparison of the current context with the agent's performance in similar past experiences. The proposed mechanism may be applied to any real-world application where past commitments are recorded and observations assessing those commitments are made; the model can then calculate one's trust in another with respect to a future commitment by evaluating the other's past performance.
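The core idea can be sketched in a few lines: trust in an agent for a new context is estimated as a similarity-weighted average of its observed performance on past commitments. This is an illustrative simplification, not the paper's actual formulation; the function names are hypothetical, and a toy Jaccard measure over keyword sets stands in for the paper's semantic matching.

```python
def context_similarity(ctx_a, ctx_b):
    """Toy context similarity: Jaccard overlap of keyword sets.

    The paper uses semantic matching here; Jaccard is an assumption
    made only to keep this sketch self-contained.
    """
    a, b = set(ctx_a), set(ctx_b)
    return len(a & b) / len(a | b) if a | b else 0.0


def trust(history, new_context):
    """Estimate expected performance in new_context.

    history is a list of (context, outcome) pairs, where outcome is
    1.0 if the past commitment was fulfilled and 0.0 otherwise.
    Returns None when no past experience resembles the new context.
    """
    pairs = [(context_similarity(ctx, new_context), outcome)
             for ctx, outcome in history]
    total = sum(w for w, _ in pairs)
    if total == 0:
        return None  # no relevant evidence to ground a trust estimate
    return sum(w * o for w, o in pairs) / total


# Example: past deliveries inform trust for a new delivery task,
# while the unrelated repair experience carries zero weight.
history = [({"deliver", "books"}, 1.0),
           ({"deliver", "food"}, 0.0),
           ({"repair", "car"}, 1.0)]
print(trust(history, {"deliver", "groceries"}))  # → 0.5
```

Weighting by similarity captures the intuition that evidence from closely related contexts should dominate the estimate, while experiences in unrelated contexts contribute nothing.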
Number of pages: 39
Journal: ACM Transactions on Intelligent Systems and Technology
Publication status: Published - 31 Dec 2013
- Semantic matching
- Trust and reputation