Weight-covariance alignment for adversarially robust neural networks

Panagiotis Eustratiadis, Henry Gouk, Da Li, Timothy Hospedales

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Stochastic Neural Networks (SNNs) that inject noise into their hidden layers have recently been shown to achieve strong robustness against adversarial attacks. However, existing SNNs are usually heuristically motivated, and often rely on adversarial training, which is computationally costly. We propose a new SNN that achieves state-of-the-art performance without relying on adversarial training, and enjoys solid theoretical justification. Specifically, while existing SNNs inject learned or hand-tuned isotropic noise, our SNN learns an anisotropic noise distribution to optimize a learning-theoretic bound on adversarial robustness. We evaluate our method on a number of popular benchmarks, show that it can be applied to different architectures, and that it provides robustness to a variety of white-box and black-box attacks, while being simple and fast to train compared to existing alternatives.
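To illustrate the core idea in the abstract — injecting learned *anisotropic* rather than isotropic noise into a hidden layer — here is a minimal NumPy sketch. This is a hypothetical illustration of the general technique, not the authors' implementation: the class name, the Cholesky-style factor `C`, and all parameter choices are assumptions. The noise covariance is `C @ C.T`, so different output directions can receive different noise magnitudes; in the paper this distribution is learned to optimize a robustness bound, which is not reproduced here.

```python
import numpy as np

class AnisotropicNoisyLinear:
    """Hypothetical sketch: a linear layer that adds zero-mean Gaussian
    noise with a learnable anisotropic covariance to its pre-activations.

    Isotropic noise would have covariance sigma^2 * I; here the covariance
    is Sigma = C @ C.T for a (learnable) factor C, so noise can be
    stronger in some directions than others.
    """

    def __init__(self, in_dim, out_dim, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = self.rng.standard_normal((out_dim, in_dim)) / np.sqrt(in_dim)
        # Factor of the noise covariance; a diagonal start, but any
        # lower-triangular matrix gives a valid anisotropic covariance.
        self.C = 0.1 * np.eye(out_dim)

    def forward(self, x, stochastic=True):
        z = x @ self.W.T                 # deterministic pre-activation
        if stochastic:
            eps = self.rng.standard_normal(z.shape)
            z = z + eps @ self.C.T       # noise sample ~ N(0, C @ C.T)
        return z

layer = AnisotropicNoisyLinear(in_dim=4, out_dim=3)
x = np.ones((2, 4))
y1 = layer.forward(x)   # two stochastic passes on the same input
y2 = layer.forward(x)   # differ, while the deterministic path is fixed
```

In a training setup, `C` would be a trainable parameter updated alongside `W`; the paper's contribution is the principled objective used to learn it, which this sketch does not attempt to reproduce.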
Original language: English
Title of host publication: Proceedings of the 38th International Conference on Machine Learning
Editors: Marina Meila, Tong Zhang
Publisher: PMLR
Pages: 3047-3056
Number of pages: 10
Publication status: Published - 18 Jul 2021
Event: Thirty-eighth International Conference on Machine Learning - Online
Duration: 18 Jul 2021 - 24 Jul 2021
https://icml.cc/

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
Volume: 139
ISSN (Electronic): 2640-3498

Conference

Conference: Thirty-eighth International Conference on Machine Learning
Abbreviated title: ICML 2021
Period: 18/07/21 - 24/07/21
