Neural networks memorise personal information from one sample

John Hartley, Pedro Sanchez, Fasih Haider, Sotirios A. Tsaftaris

Research output: Contribution to journal › Article › peer-review

Abstract

Deep neural networks (DNNs) have achieved high accuracy in diagnosing multiple diseases and conditions at large scale. However, concerns have been raised about safeguarding data privacy and about algorithmic bias in neural network models. We demonstrate that unique features (UFs), such as names, IDs, or other patient information, can be memorised (and eventually leaked) by neural networks even when they appear in only a single training sample within the dataset. We explain this memorisation phenomenon by showing that it is more likely to occur when a UF is an instance of a rare concept. We propose methods to identify whether a given model does or does not memorise a given (known) feature. Importantly, our method does not require access to the training data and can therefore be deployed by an external entity. We conclude that memorisation has implications for model robustness, but it can also pose a risk to the privacy of patients who consent to the use of their data for training models.
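The abstract does not spell out how memorisation of a known unique feature might be probed without training-data access. The sketch below is a hypothetical, minimal black-box probe written for illustration only; it is not the authors' published method, and the function name, tensor shapes, and the idea of comparing confidence with and without the feature are all assumptions.

```python
# Illustrative sketch only: a black-box probe for single-sample memorisation.
# NOT the authors' method; names, shapes, and thresholds are assumptions.
import torch
import torch.nn.functional as F


@torch.no_grad()
def uf_confidence_gap(model, x_with_uf, x_without_uf_variants, target_class):
    """Compare the model's confidence in `target_class` when a known unique
    feature (e.g. burnt-in patient text) is present versus when it is removed.

    model                  -- trained classifier, queried as a black box
    x_with_uf              -- tensor (1, C, H, W) containing the unique feature
    x_without_uf_variants  -- tensor (N, C, H, W) with the feature masked or replaced
    target_class           -- class index whose probability is inspected
    """
    model.eval()
    p_with = F.softmax(model(x_with_uf), dim=1)[0, target_class].item()
    p_without = F.softmax(model(x_without_uf_variants), dim=1)[:, target_class].mean().item()
    # A large positive gap suggests the prediction hinges on the unique
    # feature itself rather than on clinically relevant image content,
    # which is one possible indicator of memorisation.
    return p_with - p_without
```

Because the probe only queries the trained model on constructed inputs, it could in principle be run by an external auditor, consistent with the abstract's claim that no training-data access is required; the specific scoring rule here is, again, only an assumption.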
Original language: English
Article number: 21366
Pages (from-to): 21366
Journal: Scientific Reports
Volume: 13
Issue number: 1
Early online date: 4 Dec 2023
DOIs
Publication status: Published - Dec 2023

Keywords

  • Humans
  • Neural Networks, Computer
  • Privacy
