Edinburgh Research Explorer

Have you forgotten? A method to assess if machine learning models have forgotten data

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)

Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020
Subtitle of host publication: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part I
ISBN (Electronic): 978-3-030-59710-8
ISBN (Print): 978-3-030-59709-2
Publication status: E-pub ahead of print - 29 Sep 2020
Event: 23rd International Conference on Medical Image Computing and Computer Assisted Intervention - Lima, Peru
Duration: 4 Oct 2020 – 8 Oct 2020
Conference number: 23

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer Verlag
ISSN (Electronic): 0302-9743


Conference: 23rd International Conference on Medical Image Computing and Computer Assisted Intervention
Abbreviated title: MICCAI 2020


In the era of deep learning, aggregation of data from several sources is a common approach to ensuring data diversity. Let us consider a scenario where several providers contribute data to a consortium for the joint development of a classification model (hereafter the target model), but now one of the providers decides to leave. This provider requests not only that their data (hereafter the query dataset) be removed from the databases, but also that the model 'forgets' their data. In this paper, for the first time, we address the challenging question of whether data have been forgotten by a model. We assume knowledge of the query dataset and the distribution of a model's output. We establish statistical methods that compare the target's outputs with outputs of models trained with different datasets. We evaluate our approach on several benchmark datasets (MNIST, CIFAR-10 and SVHN) and on a cardiac pathology diagnosis task using data from the Automated Cardiac Diagnosis Challenge (ACDC). We hope to encourage studies on what information a model retains and inspire extensions in more complex settings.
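The abstract describes comparing the target model's output distribution on the query dataset with that of models trained on different data. As a minimal, hedged sketch of that idea (not the authors' exact method), one could apply a two-sample Kolmogorov–Smirnov test to per-sample confidence scores from the two models; all score distributions and thresholds below are purely illustrative stand-ins.

```python
# Illustrative sketch only: probe whether a target model's outputs on a
# query dataset are statistically distinguishable from those of a model
# trained without that data, via a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical per-sample confidence scores on the query dataset.
# A model that has seen the query data is typically more confident on it.
scores_target = rng.beta(8, 2, size=500)    # stand-in: model trained WITH query data
scores_without = rng.beta(5, 5, size=500)   # stand-in: model trained WITHOUT it

stat, p_value = ks_2samp(scores_target, scores_without)

# A small p-value suggests the target's output distribution on the query
# data differs from that of a model that never saw it, hinting that the
# data may still be retained rather than forgotten.
print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}")
```

In practice the comparison would use the real models' outputs on the actual query dataset rather than synthetic scores, and the choice of test statistic is a design decision the paper itself investigates.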
