Edinburgh Research Explorer

Counterfactual Explanation and Causal Inference in Service of Robustness in Robot Control

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Open Access permissions: Open

Original language: English
Title of host publication: Proceedings of the IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob 2020)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 8
Publication status: Accepted/In press - 30 Jul 2020
Event: 10th Joint IEEE International Conference on Development and Learning and Epigenetic Robotics 2020 - Virtual conference, Chile
Duration: 28 Oct 2020 - 30 Oct 2020
https://cdstc.gitlab.io/icdl-2020/

Conference

Conference: 10th Joint IEEE International Conference on Development and Learning and Epigenetic Robotics 2020
Abbreviated title: ICDL-EpiRob 2020
Country: Chile
City: Virtual conference
Period: 28/10/20 - 30/10/20
Internet address: https://cdstc.gitlab.io/icdl-2020/

Abstract

We propose an architecture for training generative models of counterfactual conditionals of the form 'can we modify event A to cause B instead of C?', motivated by applications in robot control. Using an 'adversarial training' paradigm, an image-based deep neural network model is trained to produce small and realistic modifications to an original image in order to cause user-defined effects. These modifications can be used in the design process of image-based robust control: to determine the ability of the controller to return to a working regime by modifications in the input space, rather than by adaptation. In contrast to conventional control design approaches, where robustness is quantified in terms of the ability to reject noise, we explore the space of counterfactuals that might cause a certain requirement to be violated, thus proposing an alternative model that may be more expressive in certain robotics applications. We therefore propose the generation of counterfactuals as an approach to explaining black-box models and to envisioning potential movement paths in autonomous robot control. First, we demonstrate this approach on a set of classification tasks, using the well-known MNIST and CelebFaces Attributes datasets. Then, addressing multi-dimensional regression, we demonstrate our approach in a reaching task with a physical robot, and in a navigation task with a robot in a digital twin simulation.
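The core idea described above, searching for a small input-space modification that flips a model's output to a user-defined target, can be illustrated with a deliberately minimal sketch. This toy example is not the authors' adversarially trained deep architecture; it is an illustrative gradient-based counterfactual search on a fixed logistic classifier, with all names and hyperparameters (lam, lr, steps) chosen for the example.

```python
import math

def sigmoid(z):
    """Logistic function mapping a score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def counterfactual(x, w, b, target=1.0, lam=0.1, lr=0.5, steps=500):
    """Find a small perturbation delta so that the classifier assigns
    the target label to x + delta, while an L2 penalty (weight lam)
    keeps the modification small and close to the original input."""
    delta = [0.0] * len(x)
    for _ in range(steps):
        logit = sum(wi * (xi + di) for wi, xi, di in zip(w, x, delta)) + b
        p = sigmoid(logit)
        # Gradient of cross-entropy toward the target plus the L2 penalty
        delta = [di - lr * ((p - target) * wi + lam * di)
                 for wi, di in zip(w, delta)]
    return delta

# A toy input initially classified as the negative class
w, b = [1.0, -2.0], -0.5
x = [0.0, 1.0]
delta = counterfactual(x, w, b, target=1.0)
p_new = sigmoid(sum(wi * (xi + di) for wi, xi, di in zip(w, x, delta)) + b)
```

In the paper's robust-control setting, the analogous question is whether such a delta exists within an admissible region of the input space, i.e. whether the controller can be returned to a working regime by a feasible input modification rather than by adapting the controller itself.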

Research areas

  • Counterfactual conditionals, causal inference, model explainability, state envisioning, controller robustness

ID: 173557495