Manipulating SGD with Data Ordering Attacks

Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross Anderson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Machine learning is vulnerable to a wide variety of attacks. It is now well understood that by changing the underlying data distribution, an adversary can poison the model trained with it or introduce backdoors. In this paper we present a novel class of training-time attacks that require no changes to the underlying dataset or model architecture, but instead only change the order in which data are supplied to the model. In particular, we find that the attacker can either prevent the model from learning, or poison it to learn behaviours specified by the attacker. Furthermore, we find that even a single adversarially-ordered epoch can be enough to slow down model learning, or even to reset all of the learning progress. Indeed, the attacks presented here are not specific to the model or dataset, but rather target the stochastic nature of modern learning procedures. We extensively evaluate our attacks on computer vision and natural language benchmarks to find that the adversary can disrupt model training and even introduce backdoors.
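The core idea — that the order in which unmodified training data is presented is itself an attack surface — can be illustrated with a toy sketch. This is not the paper's actual algorithm: the function names `sgd_epoch` and `adversarial_order` are hypothetical, and sorting examples by per-example loss is just one plausible ordering policy an adversary in control of the data pipeline might apply.

```python
import numpy as np

def sgd_epoch(w, X, y, order, lr=0.5):
    """Run one epoch of per-example SGD on the logistic loss,
    visiting training examples in the given `order`."""
    for i in order:
        p = 1.0 / (1.0 + np.exp(-(X[i] @ w)))  # sigmoid prediction
        w = w - lr * (p - y[i]) * X[i]          # gradient step on one example
    return w

def adversarial_order(X, y, w):
    """Hypothetical ordering policy: present low-loss ("easy") examples
    first, saving the highest-loss examples for the end of the epoch so
    their updates dominate the final parameters. The dataset itself is
    untouched; only the visiting order changes."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return np.argsort(losses)
```

A benign pipeline would instead draw `order` from a fresh random permutation each epoch; the attack in this sketch only swaps that permutation for a loss-dependent one, which is why it requires no change to the dataset or model architecture.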

Original language: English
Title of host publication: Advances in Neural Information Processing Systems 34 proceedings (NeurIPS 2021)
Editors: M. Ranzato, A. Beygelzimer, K. Nguyen, P. S. Liang, J. W. Vaughan, Y. Dauphin
Publisher: Neural Information Processing Systems
Number of pages: 12
Publication status: Published - 6 Dec 2021
Event: Thirty-fifth Conference on Neural Information Processing Systems - Virtual
Duration: 6 Dec 2021 to 14 Dec 2021

Publication series

Name: Advances in Neural Information Processing Systems
ISSN (Print): 1049-5258

Conference: Thirty-fifth Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS 2021

