Counterfactual explanation at will, with zero privacy leakage

Shuai An, Yang Cao

Research output: Contribution to journal › Article › peer-review

Abstract

While counterfactuals have been extensively studied as an intuitive explanation of model predictions, they still see limited adoption in practice due to two obstacles: (a) they rely on excessive access to the model, which the model owner may not provide; and (b) counterfactuals carry information that adversarial users can exploit to launch model extraction attacks. To address these challenges, we propose CPC, a data-driven approach to counterfactual explanation. CPC works on the client side and gives full control and the right-to-explain to model users, even when model owners opt not to provide explanations. Moreover, CPC guarantees that adversarial users cannot exploit counterfactuals to extract models. We formulate the properties and fundamental problems underlying CPC, study their complexity, and develop effective algorithms. Using real-world datasets and a user study, we verify that CPC does prevent adversaries from exploiting counterfactuals for model extraction attacks, and is orders of magnitude faster than existing explainers, while maintaining comparable and often higher quality.
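To make the "client-side, data-driven" idea concrete, here is a minimal illustrative sketch — not the paper's actual CPC algorithm — of how a client could derive a counterfactual purely from data it has already observed, with no queries to the model: return the nearest observed example whose label differs from the query's. All names (e.g. `nearest_counterfactual`) are hypothetical.

```python
# Illustrative sketch only: a data-driven counterfactual computed entirely
# on the client side from labelled examples the client already holds.
# Because the model is never queried, no model information can leak.
from math import dist

def nearest_counterfactual(query, query_label, examples):
    """examples: list of (features, label) pairs observed by the client.

    Returns the features of the closest example with a different label,
    or None if no such example exists.
    """
    candidates = [(x, y) for x, y in examples if y != query_label]
    if not candidates:
        return None
    return min(candidates, key=lambda xy: dist(query, xy[0]))[0]

# Example: a loan applicant (income, debt) was denied; the closest
# approved applicant in the client's own data serves as the counterfactual.
data = [
    ((30.0, 20.0), "denied"),
    ((50.0, 10.0), "approved"),
    ((80.0, 5.0), "approved"),
]
cf = nearest_counterfactual((35.0, 18.0), "denied", data)
print(cf)  # → (50.0, 10.0), the nearest "approved" example
```

This nearest-unlike-neighbor baseline captures only the access pattern (no model calls); the paper's contribution lies in formalizing the properties such client-side counterfactuals must satisfy and computing them efficiently.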
Original language: English
Article number: 130
Pages (from-to): 1-29
Number of pages: 29
Journal: Proceedings of the ACM on Management of Data
Issue number: 3
Publication status: Published - 30 May 2024

Keywords

  • in-database explanation
  • database for explainable machine learning
  • model explainability
  • counterfactual
  • privacy
  • model extraction
