Object-Centric Representation Learning with Generative Spatial-Temporal Factorization

Nanbo Li, Muhammad Ahmed Raza, Wenbin Hu, Zhaole Sun, Robert B Fisher

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Learning object-centric scene representations is essential for attaining structural understanding and abstraction of complex scenes. Yet, because current approaches to unsupervised object-centric representation learning are built upon either a stationary-observer assumption or a static-scene assumption, they often: i) suffer from single-view spatial ambiguities, or ii) infer object representations from dynamic scenes incorrectly or inaccurately. To address this, we propose the Dynamics-aware Multi-Object Network (DyMON), a method that broadens the scope of multi-view object-centric representation learning to dynamic scenes. We train DyMON on multi-view dynamic-scene data and show that it learns, without supervision, to factorize the entangled effects of observer motion and scene object dynamics from a sequence of observations, and constructs scene object spatial representations suitable for rendering at arbitrary times (querying across time) and from arbitrary viewpoints (querying across space). We also show that the factorized scene representations (w.r.t. objects) support querying about a single object independently in space and time.
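To make the querying idea in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes a hypothetical factorized representation with one latent vector per object (independent of view and time) and a toy decoder whose view- and time-dependent factors stand in for DyMON's learned rendering. All names (FactorizedScene, render) are invented for illustration.

```python
import numpy as np

class FactorizedScene:
    """Hypothetical factorized scene: one latent per object,
    held independently of viewpoint and time."""
    def __init__(self, object_latents):
        self.object_latents = object_latents  # list of (D,) arrays

    def render(self, viewpoint, t):
        # Toy stand-in for a learned decoder: each object's latent is
        # modulated by a view factor and a time factor, then the
        # per-object components are composited by summation.
        components = [z * np.cos(viewpoint) + z * np.sin(t)
                      for z in self.object_latents]
        return np.sum(components, axis=0)

# Two objects with 4-D latents; query the same scene representation
# across space (new viewpoint) and across time (new t) independently.
scene = FactorizedScene([np.ones(4), 2.0 * np.ones(4)])
img_a = scene.render(viewpoint=0.0, t=0.0)  # original view and time
img_b = scene.render(viewpoint=1.0, t=0.0)  # same time, new viewpoint
img_c = scene.render(viewpoint=0.0, t=1.0)  # same viewpoint, later time
```

Because the per-object latents are fixed while only the query arguments change, space and time act on the representation separately, which is the property the abstract describes as querying across space and across time.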
Original language: English
Title of host publication: Proceedings of the Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021)
Publisher: Neural Information Processing Systems
Number of pages: 26
Publication status: Published - 6 Dec 2021
Event: 35th Conference on Neural Information Processing Systems - Virtual
Duration: 6 Dec 2021 – 14 Dec 2021

Publication series

Name: Advances in Neural Information Processing Systems
ISSN (Print): 1049-5258


Conference: 35th Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS 2021

