Survey: Leakage and Privacy at Inference Time

Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

Leakage of data from publicly available Machine Learning (ML) models is an area of growing significance, since commercial and government applications of ML can draw on multiple sources of data, potentially including users' and clients' sensitive data. We provide a comprehensive survey of contemporary advances on several fronts, covering involuntary data leakage, which is inherent to ML models; potential malicious leakage, which is caused by privacy attacks; and currently available defence mechanisms. We focus on inference-time leakage, as the most likely scenario for publicly available models. We first discuss what leakage is in the context of different data, tasks, and model architectures. We then propose a taxonomy across involuntary and malicious leakage, followed by a description of currently available defences, assessment metrics, and applications. We conclude with outstanding challenges and open questions, outlining some promising directions for future research.

Original language: English
Pages (from-to): 1-20
Number of pages: 20
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Early online date: 15 Dec 2022
DOIs
Publication status: E-pub ahead of print - 15 Dec 2022

Keywords

  • Adversarial Defences
  • Computational modeling
  • Data Anonymization
  • Data Leakage
  • Data models
  • Data privacy
  • Feature Leakage
  • Glass box
  • Inference-Time Attacks
  • Machine Unlearning
  • Membership Inference
  • Privacy
  • Privacy Attacks and Defences
  • Property Inference
  • Task analysis
  • Training
  • Training data
  • Verifying Forgetting
