Abstract
In human interactions, trust is regularly updated during a discussion. For example, if someone is caught lying, any further utterances they make will be discounted until trust is regained. This paper seeks to model such behaviour by introducing a dialogue game which operates over several iterations, with trust updates occurring at the end of each iteration. Trust changes are, in turn, computed from intuitive properties captured through three rules. By representing agent knowledge within a preference-based argumentation framework, we demonstrate how trust can change over the course of a dialogue.
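To make the setting concrete, the sketch below shows a generic preference-based argumentation framework in which an attack only defeats its target when the attacker is not strictly less preferred, with preferences derived from per-agent trust scores that are discounted between iterations. This is a minimal illustration under assumed names and numbers (`owner`, `trust`, the 0.5 discount factor are hypothetical); it does not reproduce the paper's dialogue game or its three trust-update rules.

```python
# Minimal sketch (not the paper's formalism): a preference-based
# argumentation framework where an attack defeats its target only if the
# attacker is not strictly less preferred, with preferences taken from
# per-agent trust scores that are discounted between dialogue iterations.

def defeats(attacks, preference):
    """Keep only attacks whose attacker is at least as preferred as its target."""
    return {(a, b) for (a, b) in attacks if preference(a) >= preference(b)}

def grounded_extension(arguments, defeat_rel):
    """Least fixed point of the characteristic function (grounded semantics)."""
    extension = set()
    while True:
        acceptable = {
            a for a in arguments
            # every defeater of `a` must itself be defeated by the extension
            if all(any((c, b) in defeat_rel for c in extension)
                   for (b, target) in defeat_rel if target == a)
        }
        if acceptable == extension:
            return extension
        extension = acceptable

if __name__ == "__main__":
    # Two agents exchange mutually attacking arguments; all values are illustrative.
    owner = {"A": "agent1", "B": "agent2"}
    attacks = {("A", "B"), ("B", "A")}
    trust = {"agent1": 0.5, "agent2": 0.8}

    for iteration in range(2):
        pref = lambda arg: trust[owner[arg]]
        accepted = grounded_extension(set(owner), defeats(attacks, pref))
        print(f"iteration {iteration}: trust={trust}, accepted={sorted(accepted)}")
        # Hypothetical discount after agent2 is caught lying in this iteration.
        trust["agent2"] *= 0.5
```

Running the sketch shows the mutual attack resolving differently across iterations: agent2's argument is accepted while its trust is high, and agent1's argument is accepted once agent2's trust has been discounted below agent1's.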
Original language | English |
---|---|
Title of host publication | Theory and Applications of Formal Argumentation |
Editors | Elizabeth Black, Sanjay Modgil, Nir Oren |
Place of Publication | Cham |
Publisher | Springer |
Pages | 211-226 |
Number of pages | 16 |
ISBN (Electronic) | 978-3-319-75553-3 |
ISBN (Print) | 978-3-319-75552-6 |
DOIs | |
Publication status | Published - 6 Mar 2018 |
Event | The 2017 International Workshop on Theory and Applications of Formal Argumentation (TAFA 2017), Melbourne, Australia, 19 Aug 2017 → 20 Aug 2017. https://homepages.abdn.ac.uk/n.oren/pages/TAFA-17/index.html |
Publication series
Name | Lecture Notes in Computer Science |
---|---|
Publisher | Springer, Cham |
Volume | 10757 |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Workshop
Workshop | The 2017 International Workshop on Theory and Applications of Formal Argumentation |
---|---|
Abbreviated title | TAFA 2017 |
Country/Territory | Australia |
City | Melbourne |
Period | 19/08/17 → 20/08/17 |
Internet address | https://homepages.abdn.ac.uk/n.oren/pages/TAFA-17/index.html |