Optimising Network Architectures for Provable Adversarial Robustness

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Existing Lipschitz-based provable defences against adversarial examples cover only the l2 threat model. We introduce the first bound that makes use of Lipschitz continuity to provide a more general guarantee for threat models based on any lp norm. Additionally, a new strategy is proposed for designing network architectures that exhibit superior provable adversarial robustness compared with conventional convolutional neural networks. Experiments are conducted to validate our theoretical contributions, show that the assumptions made during the design of our novel architecture hold in practice, and quantify the empirical robustness of several Lipschitz-based adversarial defence methods.
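For context, the sketch below illustrates the standard Lipschitz-margin certification argument that underlies this line of work; it is not the paper's new lp bound. It assumes each logit of the network is L-Lipschitz with respect to the lp norm in which the perturbation is measured, in which case the prediction cannot change within a radius of margin / (2L). The function name and example values are illustrative only.

```python
import numpy as np

def certified_radius(logits, lipschitz_const):
    """Background illustration of the standard Lipschitz-margin certificate
    (not the bound introduced in the paper).

    If every logit is L-Lipschitz w.r.t. the chosen lp norm, a perturbation
    of lp size eps can change each logit by at most L * eps, so the predicted
    class cannot flip for any eps below margin / (2 * L).
    `lipschitz_const` must be measured in the same lp norm as the threat model.
    """
    logits = np.asarray(logits, dtype=float)
    top = logits.max()                                # winning logit
    runner_up = np.partition(logits, -2)[-2]          # second-largest logit
    margin = top - runner_up                          # prediction margin
    return margin / (2.0 * lipschitz_const)           # certified lp radius

# Hypothetical example: 3-class logits with an assumed Lipschitz constant of 5.
print(certified_radius([4.2, 1.1, -0.3], lipschitz_const=5.0))
```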
Original language: English
Title of host publication: 2020 Sensor Signal Processing for Defence Conference (SSPD)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 1-5
Number of pages: 5
ISBN (Electronic): 978-1-7281-3810-7
ISBN (Print): 978-1-7281-3811-4
DOIs
Publication status: Published - 30 Nov 2020
Event: 9th International Conference of the Sensor Signal Processing for Defence - Virtual Conference, United Kingdom
Duration: 15 Sep 2020 - 16 Sep 2020
https://sspd.eng.ed.ac.uk/

Conference

Conference: 9th International Conference of the Sensor Signal Processing for Defence
Abbreviated title: SSPD 2020
Country/Territory: United Kingdom
City: Virtual Conference
Period: 15/09/20 - 16/09/20
Internet address: https://sspd.eng.ed.ac.uk/

Keywords

  • Artificial Neural Network
  • Computer Vision
