Abstract / Description of output
Sociotechnical systems (STSs) combine people and machines to take actions. Artificial intelligence (AI) enables STSs to make increasingly autonomous decisions that impact human lives. Yet the reasoning processes of such systems often remain unclear to the people interacting with them, and unjust decisions can cause harm. There are no efficient means for people to challenge automated decisions and obtain proper restitution when necessary. Conversely, organizations may be willing to provide more transparency about their decision making, but answering every question people ask could be cumbersome. It is also not always clear who is qualified and accountable to answer to the people harmed by autonomous decisions. We argue that investigating stakeholders' expectations is essential to creating an effective answerability framework. We propose a mediator agent that bridges the gap between organizations that employ AI and people harmed by automated decisions. Our approach helps organizations implement more answerable AI practices, and it empowers people to ask for clarifications and to request updates on actions as well as remedies through dialogue.
Original language | English |
---|---|
Pages | 1-6 |
Number of pages | 6 |
Publication status | Published - 15 Oct 2023 |
Event | The 26th ACM Conference on Computer-Supported Cooperative Work and Social Computing, Hyatt Regency Minneapolis Hotel, Minneapolis, United States. Duration: 14 Oct 2023 → 18 Oct 2023. Conference number: 26. https://cscw.acm.org/2023/ |
Conference
Conference | The 26th ACM Conference on Computer-Supported Cooperative Work and Social Computing |
---|---|
Abbreviated title | CSCW 2023 |
Country/Territory | United States |
City | Minneapolis |
Period | 14/10/23 → 18/10/23 |
Internet address | https://cscw.acm.org/2023/ |
Keywords / Materials (for Non-textual outputs)
- artificial intelligence
- responsible AI
- AI harms
- answerability
- sociotechnical systems