This article deals with adversarial attacks on deep learning systems for Natural Language Processing (NLP), in the context of privacy protection. We study a specific type of attack: an attacker eavesdrops on the hidden representations of a neural text classifier and tries to recover information about the input text. Such a scenario may arise when the computation of a neural network is shared across multiple devices, e.g., a hidden representation is computed on a user's device and sent to a cloud-based model. We measure the privacy of a hidden representation by the ability of an attacker to accurately predict specific private information from it, and we characterize the tradeoff between the privacy and the utility of neural representations. Finally, we propose several defense methods based on modified training objectives and show that they improve the privacy of neural representations.
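As a loose illustration of this setup, the sketch below (assuming PyTorch; the encoder architecture, layer sizes, and the `lambda_adv` tradeoff weight are illustrative choices, not details taken from the paper) trains an adversary to predict a private attribute from the hidden representation, while the encoder is trained with a modified objective that trades the main-task loss off against the adversary's loss, one simple instance of an adversarially modified training objective.

```python
import torch
import torch.nn as nn

# Illustrative components only; the paper's actual models and defenses differ.
class Encoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hid_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)

    def forward(self, tokens):
        _, (h, _) = self.rnn(self.emb(tokens))
        return h[-1]  # hidden representation z that would be sent to the cloud

encoder = Encoder()
main_clf = nn.Linear(128, 5)    # main task head, e.g. 5 topic classes
adversary = nn.Linear(128, 2)   # attacker head for a binary private attribute

opt_model = torch.optim.Adam(list(encoder.parameters()) + list(main_clf.parameters()))
opt_adv = torch.optim.Adam(adversary.parameters())
ce = nn.CrossEntropyLoss()
lambda_adv = 1.0  # privacy/utility tradeoff weight (hypothetical value)

def train_step(tokens, y_task, y_private):
    # 1) Train the adversary to predict the private attribute from z.
    #    detach() keeps this step from updating the encoder.
    z = encoder(tokens).detach()
    opt_adv.zero_grad()
    ce(adversary(z), y_private).backward()
    opt_adv.step()

    # 2) Train encoder + classifier: minimise the task loss while
    #    maximising the adversary's loss (the modified objective).
    z = encoder(tokens)
    loss = ce(main_clf(z), y_task) - lambda_adv * ce(adversary(z), y_private)
    opt_model.zero_grad()
    loss.backward()
    opt_model.step()
    return loss.item()

# Dummy batch to exercise the step (shapes only; not real data).
tokens = torch.randint(0, 10000, (8, 20))
y_task = torch.randint(0, 5, (8,))
y_private = torch.randint(0, 2, (8,))
train_step(tokens, y_task, y_private)
```

Raising `lambda_adv` pushes the representation toward privacy at some cost to main-task accuracy, which is the privacy/utility tradeoff the abstract characterizes.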
|Title of host publication||Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing|
|Place of Publication||Brussels, Belgium|
|Publisher||Association for Computational Linguistics|
|Number of pages||10|
|Publication status||Published - Nov 2018|
|Event||2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), Square Meeting Center, Brussels, Belgium, 31 Oct 2018 – 4 Nov 2018|