To Compress Or Not To Compress: Understanding The Interactions Between Adversarial Attacks And Neural Network Compression

Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson

Research output: Contribution to conference › Paper › peer-review

Abstract

As deep neural networks (DNNs) become widely used, pruned and quantised models are becoming ubiquitous on edge devices, since compression lowers their computational requirements. Meanwhile, multiple recent studies have shown how to construct adversarial samples that make DNNs misclassify. We therefore investigate the extent to which adversarial samples transfer between uncompressed and compressed DNNs. We find that such samples remain transferable for both pruned and quantised models. For pruning, adversarial samples crafted at high sparsities are marginally less transferable. For quantisation, the transferability of adversarial samples is highly sensitive to the integer precision used.
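
As a concrete illustration of the transferability measurement described above, the sketch below crafts FGSM adversarial samples against an uncompressed network and checks how often they also fool a magnitude-pruned copy. Every specific here (PyTorch, ResNet-18, the FGSM attack, 90% global magnitude pruning, the epsilon value, the random stand-in inputs) is an illustrative assumption, not a detail taken from the paper.

```python
import copy
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune
import torchvision.models as models

def fgsm(model, x, y, eps):
    # One-step FGSM: perturb x in the direction of the loss gradient's sign.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def transfer_rate(src, dst, x, y, eps):
    # Fraction of samples crafted against `src` that `dst` also misclassifies.
    x_adv = fgsm(src, x, y, eps)
    with torch.no_grad():
        return (dst(x_adv).argmax(1) != y).float().mean().item()

# "Uncompressed" model: a pretrained ResNet-18 (an assumed stand-in).
base = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# "Compressed" model: a copy with 90% of conv weights removed by global
# magnitude pruning (the sparsity level is an arbitrary choice for this demo).
pruned = copy.deepcopy(base)
to_prune = [(m, "weight") for m in pruned.modules()
            if isinstance(m, torch.nn.Conv2d)]
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured,
                          amount=0.9)
pruned.eval()

x = torch.rand(8, 3, 224, 224)   # stand-in batch; use a real dataset in practice
y = base(x).argmax(1)            # treat the base model's predictions as labels
print(f"transfer rate to pruned copy: "
      f"{transfer_rate(base, pruned, x, y, eps=0.03):.2f}")
```

At an aggressive sparsity such as the 90% used here, the paper's finding would predict a marginally lower transfer rate than against a lightly pruned copy; repeating the comparison at a sweep of sparsities (or quantisation bit-widths) reproduces the shape of the study.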
Original language: English
Pages: 230-240
Number of pages: 11
Publication status: Published - 31 Mar 2019
Event: The Conference on Systems and Machine Learning (SysML) 2019, Stanford, United States (https://mlsys.org/Conferences/2019/index.html)
Duration: 31 Mar 2019 - 2 Apr 2019

Conference

Conference: The Conference on Systems and Machine Learning (SysML) 2019
Abbreviated title: SysML 2019
Country/Territory: United States
City: Stanford
Period: 31/03/19 - 02/04/19
