Generative Ratio Matching Networks

Akash Srivastava, Kai Xu, Michael Gutmann, Charles Sutton

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Deep generative models can learn to generate realistic-looking images, but many of the most effective methods are adversarial and involve a saddlepoint optimization, which requires a careful balancing of training between a generator network and a critic network. Maximum mean discrepancy networks (MMD-nets) avoid this issue by using a kernel as a fixed adversary, but unfortunately, they have not on their own been able to match the generative quality of adversarial training. In this work, we take their insight of using kernels as fixed adversaries further and present a novel method for training deep generative models that does not involve saddlepoint optimization. We call our method generative ratio matching, or GRAM for short. In GRAM, the generator and the critic networks do not play a zero-sum game against each other; instead, each plays against a fixed kernel. Thus GRAM networks are not only as stable to train as MMD-nets, but they also match and beat the generative quality of adversarially trained generative networks.
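The "kernel as a fixed adversary" idea that the abstract attributes to MMD-nets rests on the maximum mean discrepancy: a distance between two sample sets computed under a fixed kernel, with no trained critic in the loop. The sketch below is an illustrative NumPy estimate of the squared MMD under a Gaussian (RBF) kernel; it is not the GRAM algorithm itself, and the function names and the bandwidth `sigma` are choices made here for illustration.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix between the rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy
    # between samples x ~ p and y ~ q under a fixed RBF kernel.
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2.0 * kxy
```

Because the kernel is fixed, a generator can simply minimize this quantity between real and generated samples by gradient descent, with no min-max balancing; the estimate is zero when the two sample sets coincide and grows as the distributions separate.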
Original language: English
Title of host publication: Proceedings of the International Conference on Learning Representations 2020
Pages: 1-18
Number of pages: 18
Publication status: Published - 30 Apr 2020
Event: Eighth International Conference on Learning Representations - Millennium Hall, virtual conference (formerly Addis Ababa, Ethiopia)
Duration: 26 Apr 2020 - 30 Apr 2020
https://iclr.cc/Conferences/2020

Conference

Conference: Eighth International Conference on Learning Representations
Abbreviated title: ICLR 2020
Country: Ethiopia
City: Virtual conference (formerly Addis Ababa)
Period: 26/04/20 - 30/04/20