Vid2Param: Modelling of Dynamics Parameters from Video

Martin Asenov, Michael Burke, Daniel Angelov, Todor Davchev, Kartic Subr, Ram Ramamoorthy

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

Videos provide a rich source of information, but it is generally hard to extract dynamical parameters of interest from them. Inferring these parameters from a video stream would be beneficial for physical reasoning. Robots performing tasks in dynamic environments would benefit greatly from understanding the underlying environment motion, in order to make future predictions and to synthesize effective control policies that use this inductive bias. Online physical reasoning is therefore a fundamental requirement for robust autonomous agents. When the dynamics involve multiple modes (due to contacts or interactions between objects) and sensing must proceed directly from a rich sensory stream such as video, traditional methods for system identification may not be well suited. We propose an approach wherein fast parameter estimation can be achieved directly from video. We integrate a physics-based dynamics model with a recurrent variational autoencoder, introducing an additional loss to enforce desired constraints. The model, which we call Vid2Param, can be trained entirely in simulation, in an end-to-end manner with domain randomization, to perform online system identification and make probabilistic forward predictions of parameters of interest. This enables the resulting model to encode parameters such as position, velocity, restitution, air drag and other physical properties of the system. We illustrate the utility of this in physical experiments wherein a PR2 robot with a velocity-constrained arm must intercept an unknown bouncing ball with partly occluded vision, by estimating the physical parameters of the ball directly from the video trace after it is released.
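To make the simulation-training setup concrete, the following is a minimal sketch (not the authors' code) of the kind of domain-randomized bouncing-ball simulator that could generate training episodes labelled with the physical parameters the abstract mentions (restitution and air drag). All function names, parameter ranges, and the 1-D simplification are illustrative assumptions.

```python
import random

def simulate_ball(y0, v0, restitution, drag, dt=0.02, steps=100, g=9.81):
    """Toy 1-D bouncing-ball trajectory with restitution and linear air drag.

    This is an illustrative stand-in for the physics simulator; the paper's
    actual dynamics model may differ.
    """
    y, v = y0, v0
    traj = []
    for _ in range(steps):
        v -= (g + drag * v) * dt      # gravity plus a linear drag term
        y += v * dt
        if y < 0.0:                   # contact event: reflect with energy loss
            y = -y * restitution
            v = -v * restitution
        traj.append(y)
    return traj

def random_episode():
    """Domain randomization: sample physical parameters per episode, so the
    learned model must infer them from the observed trajectory alone."""
    params = {
        "restitution": random.uniform(0.5, 0.95),  # assumed range
        "drag": random.uniform(0.0, 0.5),          # assumed range
    }
    traj = simulate_ball(y0=1.0, v0=0.0, **params)
    return traj, params
```

A training set would pair many such trajectories (rendered to video in the full pipeline) with their sampled parameters, which the additional supervised loss can then target alongside the variational autoencoder's reconstruction objective.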
Original language: English
Pages (from-to): 414-421
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Issue number: 2
Early online date: 12 Dec 2019
Publication status: Published - 30 Apr 2020

Keywords / Materials (for Non-textual outputs)

  • Visual Learning
  • Sensor-based Control
  • Motion and Path Planning

