Dataset Condensation with Gradient Matching

Bo Zhao, Konda Reddy Mopuri, Hakan Bilen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


As state-of-the-art machine learning methods in many fields rely on ever larger datasets, storing datasets and training models on them becomes significantly more expensive. This paper proposes a training set synthesis technique for data-efficient learning, called Dataset Condensation, that learns to condense a large dataset into a small set of informative synthetic samples for training deep neural networks from scratch. We formulate this goal as a gradient matching problem between the gradients of deep neural network weights that are trained on the original and our synthetic data. We rigorously evaluate its performance on several computer vision benchmarks and demonstrate that it significantly outperforms the state-of-the-art methods. Finally, we explore the use of our method in continual learning and neural architecture search and report promising gains when memory and computation are limited.
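The gradient-matching idea in the abstract can be illustrated on a toy problem. The sketch below, a simplified assumption-laden demo rather than the paper's method, condenses a linear-regression dataset by learning a small synthetic set whose MSE-loss gradients match those of the real data across several model weight settings (standing in for the paper's sampled network initialisations); all names and hyper-parameters are illustrative.

```python
import numpy as np

# Toy setup: a "large" real regression dataset and a small learnable
# synthetic set (the condensed data). Targets of the synthetic set stay fixed.
rng = np.random.default_rng(0)
n, d, m = 200, 5, 10                       # real size, features, synthetic size
w_true = rng.normal(size=d)
X_real = rng.normal(size=(n, d))
y_real = X_real @ w_true + 0.1 * rng.normal(size=n)
X_syn = rng.normal(size=(m, d))
y_syn = rng.normal(size=m)

def mse_grad(X, y, w):
    """Gradient of mean-squared-error loss with respect to model weights w."""
    return (2.0 / len(y)) * X.T @ (X @ w - y)

# Fixed probe weights at which real and synthetic gradients are matched.
probes = rng.normal(size=(20, d))

def match_loss(X_s):
    return np.mean([np.sum((mse_grad(X_s, y_syn, w)
                            - mse_grad(X_real, y_real, w)) ** 2)
                    for w in probes])

loss_init = match_loss(X_syn)
lr = 1e-3
for _ in range(2000):
    grad = np.zeros_like(X_syn)
    for w in probes:
        r = X_syn @ w - y_syn                       # synthetic residuals
        e = (2.0 / m) * X_syn.T @ r - mse_grad(X_real, y_real, w)
        # Analytic gradient of ||e||^2 w.r.t. X_syn for the linear/MSE case:
        # d||e||^2/dX = (4/m) (r e^T + (X e) w^T).
        grad += (4.0 / m) * (np.outer(r, e) + np.outer(X_syn @ e, w))
    X_syn -= lr * grad / len(probes)                # gradient step on the data
loss_final = match_loss(X_syn)
print(f"matching loss: {loss_init:.3f} -> {loss_final:.3f}")
```

The actual paper matches per-class, layer-wise gradients of deep network weights with a cosine-similarity-based distance, sampling network initialisations; this demo replaces all of that with a linear model, a Euclidean gradient distance, and fixed probe weights to keep the core bi-level idea visible.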
Original language: English
Title of host publication: International Conference on Learning Representations (ICLR 2021)
Number of pages: 20
Publication status: E-pub ahead of print - 29 Mar 2021
Event: Ninth International Conference on Learning Representations 2021 - Virtual Conference
Duration: 4 May 2021 – 7 May 2021


Conference: Ninth International Conference on Learning Representations 2021
Abbreviated title: ICLR 2021
City: Virtual Conference

