Edinburgh Research Explorer

Tractable Probabilistic Models for Moral Responsibility

Research output: Contribution to conference › Paper

Original language: English
Number of pages: 18
Publication status: Published - 13 Dec 2019
Event: Knowledge Representation & Reasoning Meets Machine Learning: Workshop at NeurIPS'19 - Vancouver, Canada
Duration: 13 Dec 2019 - 13 Dec 2019
https://kr2ml.github.io/2019/

Workshop

Workshop: Knowledge Representation & Reasoning Meets Machine Learning
Abbreviated title: KR2ML
Country: Canada
City: Vancouver
Period: 13/12/19 - 13/12/19
Internet address: https://kr2ml.github.io/2019/

Abstract

Moral responsibility is a major concern in autonomous systems, with applications ranging from self-driving cars to kidney exchanges. Although there have been recent attempts to formalise responsibility and blame, among other related notions, the problem of learning within these formalisms has so far been unaddressed. From the viewpoint of such systems, the urgent questions are: (a) How can models of moral scenarios and blameworthiness be extracted and learnt automatically from data? (b) How can judgements be computed effectively and efficiently, given the split-second decision points faced by some systems? By building on constrained tractable probabilistic learning, we propose a learning framework for inducing models of such scenarios automatically from data and for reasoning tractably from them. We report on experiments that compare our system with human judgement in three domains: lung cancer staging, teamwork management, and trolley problems.
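As a rough illustration only (this is not the paper's actual formulation, and the function names and probabilities below are invented for the example), a blameworthiness-style judgement of the kind the abstract alludes to can be read as comparing how much more likely a bad outcome becomes under the chosen action than under an alternative, with those probabilities supplied by a model learnt from data:

# Hypothetical sketch: a degree-of-blameworthiness score computed from the
# probabilities a learnt probabilistic model assigns to a bad outcome under
# alternative actions. Illustrative only; not the paper's definitions.

def blameworthiness(p_outcome_given_action: float,
                    p_outcome_given_alternative: float,
                    cost_of_alternative: float = 0.0) -> float:
    """Score blame for an outcome relative to an alternative action.

    The score is the increase in the probability of the bad outcome caused
    by the chosen action over the alternative, discounted by how costly the
    alternative would have been (all values in [0, 1]). If the chosen action
    did not raise the outcome's probability, the score is 0.
    """
    delta = p_outcome_given_action - p_outcome_given_alternative
    return max(0.0, delta) * (1.0 - cost_of_alternative)


if __name__ == "__main__":
    # Toy trolley-style scenario: in practice these probabilities would be
    # queried from a model induced from data; here they are made up.
    p_harm_if_do_nothing = 0.9   # P(harm | do nothing)
    p_harm_if_divert = 0.2       # P(harm | divert)
    print(blameworthiness(p_harm_if_do_nothing, p_harm_if_divert))  # 0.7

The appeal of tractable probabilistic models in this setting is that the conditional probabilities such a judgement needs can be computed exactly in time polynomial in the size of the model, which matters for the split-second decision points mentioned above.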
