Learning to Generate Novel Domains for Domain Generalization

Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, Tao Xiang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper focuses on domain generalization (DG), the task of learning, from multiple source domains, a model that generalizes well to unseen domains. A main challenge for DG is that the available source domains often exhibit limited diversity, hampering the model's ability to learn to generalize. We therefore employ a data generator to synthesize data from pseudo-novel domains to augment the source domains. This explicitly increases the diversity of the available training domains and leads to a more generalizable model. To train the generator, we model the distribution divergence between the source and synthesized pseudo-novel domains using optimal transport, and maximize this divergence. To ensure that semantics are preserved in the synthesized data, we further impose cycle-consistency and classification losses on the generator. Our method, L2A-OT (Learning to Augment by Optimal Transport), outperforms current state-of-the-art DG methods on four benchmark datasets.
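
For readers who want a concrete picture of the objective described in the abstract, the sketch below illustrates one way such a generator loss could be assembled in PyTorch. This is not the authors' code: the conditional generator G, the classifier C, the entropy-regularised Sinkhorn routine used as the optimal-transport divergence, and all hyperparameters (lam_cyc, lam_cls, eps, n_iters) are illustrative assumptions, and the paper's exact OT formulation and network conditioning may differ.

```python
# Minimal illustrative sketch of an L2A-OT-style generator objective,
# NOT the authors' implementation. G (conditional generator), C (classifier),
# the Sinkhorn divergence below and all weights are assumptions.
import torch
import torch.nn.functional as F_nn


def sinkhorn_divergence(x, y, eps=0.05, n_iters=50):
    """Entropy-regularised OT cost between two feature batches
    (squared Euclidean ground cost, uniform marginals); a stand-in for
    the optimal-transport distribution divergence in the abstract."""
    cost = torch.cdist(x, y, p=2) ** 2          # pairwise cost matrix
    cost = cost / (cost.mean() + 1e-8)          # normalise for numerical stability
    K = torch.exp(-cost / eps)                  # Gibbs kernel
    a = torch.full((x.size(0),), 1.0 / x.size(0), device=x.device)
    b = torch.full((y.size(0),), 1.0 / y.size(0), device=y.device)
    u, v = torch.ones_like(a), torch.ones_like(b)
    for _ in range(n_iters):                    # Sinkhorn fixed-point iterations
        u = a / (K @ v + 1e-8)
        v = b / (K.t() @ u + 1e-8)
    transport = u.unsqueeze(1) * K * v.unsqueeze(0)
    return (transport * cost).sum()


def l2a_ot_generator_loss(G, C, x_src, y_src, d_src, d_novel,
                          lam_cyc=10.0, lam_cls=1.0):
    """One generator update in the spirit of L2A-OT: maximise the OT
    divergence between source and synthesised domains while preserving
    semantics via cycle-consistency and classification losses."""
    x_novel = G(x_src, d_novel)                 # synthesise pseudo-novel-domain images
    # 1) Distribution divergence to be maximised (hence the minus sign below).
    div = sinkhorn_divergence(x_src.flatten(1), x_novel.flatten(1))
    # 2) Cycle consistency: mapping back to the source domain recovers x_src.
    x_cycle = G(x_novel, d_src)
    cyc = F_nn.l1_loss(x_cycle, x_src)
    # 3) Classification loss: synthesised images keep their class labels.
    cls = F_nn.cross_entropy(C(x_novel), y_src)
    return -div + lam_cyc * cyc + lam_cls * cls
```

Minimising this loss maximises the OT-based divergence term (note the minus sign), while the cycle-consistency and classification terms keep the synthesised images semantically consistent with their source labels.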
Original language: English
Title of host publication: Computer Vision – ECCV 2020
Publisher: Springer, Cham
Pages: 561-578
Number of pages: 18
ISBN (Electronic): 978-3-030-58517-4
ISBN (Print): 978-3-030-58516-7
DOIs
Publication status: Published - 10 Oct 2020
Event: 16th European Conference on Computer Vision - Virtual conference
Duration: 23 Aug 2020 - 28 Aug 2020
https://eccv2020.eu/

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer, Cham
Volume: 12361
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 16th European Conference on Computer Vision
Abbreviated title: ECCV 2020
City: Virtual conference
Period: 23/08/20 - 28/08/20
Internet address: https://eccv2020.eu/
