An empirical evaluation of adversarial robustness under transfer learning

Todor Davchev, Timos Korres, Stathi Fotiadis, Nick Antonopoulos, Subramanian Ramamoorthy

Research output: Contribution to conference › Paper › peer-review

Abstract / Description of output

In this work, we evaluate adversarial robustness in the context of transfer learning from a source network trained on CIFAR-100 to a target network trained on CIFAR-10. Specifically, we study the effects of using robust optimisation in the source and target networks. This allows us to identify transfer learning strategies under which adversarial defences are successfully retained, in addition to revealing potential vulnerabilities. We study the extent to which features learnt with the fast gradient sign method (FGSM) and its iterative alternative (PGD) preserve their defence properties against black-box and white-box attacks under three different transfer learning strategies. We find that using PGD examples during training on the source task leads to more general robust features that are easier to transfer. Furthermore, under successful transfer, this approach achieves 5.2% higher accuracy against white-box PGD attacks than suitable baselines. Overall, our empirical evaluations give insights into how well adversarial robustness under transfer learning can generalise.
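For concreteness, the following is a minimal PyTorch sketch (not the authors' implementation) of the two attacks the abstract refers to: single-step FGSM and its iterative variant, PGD. The model, epsilon, step size alpha, and the assumption that inputs lie in [0, 1] are illustrative choices, not details from the paper.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # One-step FGSM: move x by eps along the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def pgd(model, x, y, eps, alpha, steps):
    # PGD: repeated small FGSM-style steps, each projected back into the
    # L-infinity ball of radius eps around the original input.
    x_orig = x.clone().detach()
    x_adv = x_orig + torch.empty_like(x_orig).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = torch.min(torch.max(x_adv, x_orig - eps), x_orig + eps)  # project
        x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv

Robust optimisation of the kind described above then typically amounts to generating such examples on the fly for each minibatch and minimising the loss on them instead of, or alongside, the clean examples.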
Original language: English
Number of pages: 8
Publication status: E-pub ahead of print - 14 Jun 2019
Event: ICML 2019 Workshop on Understanding and Improving Generalization in Deep Learning - Grand Ballroom A, Long Beach, United States
Duration: 14 Jun 2019 - 14 Jun 2019
https://sites.google.com/view/icml2019-generalization/home

Workshop

Workshop: ICML 2019 Workshop on Understanding and Improving Generalization in Deep Learning
Abbreviated title: ICML 2019 Workshop
Country/Territory: United States
City: Long Beach
Period: 14/06/19 - 14/06/19
Internet address: https://sites.google.com/view/icml2019-generalization/home
