Active physical inference via reinforcement learning

Shuaiji Li, Yu Sun, Sijia Liu, Tianyu Wang, Todd Gureckis, Neil R Bramley

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


When encountering unfamiliar physical objects, children and adults often perform structured interrogatory actions such as grasping and prodding, thereby revealing latent physical properties such as masses and textures. However, the processes driving and supporting these curious behaviors remain largely mysterious. In this paper, we develop and train an agent able to actively uncover latent physical properties, such as the mass of objects and the forces acting on them, in a simulated physical “micro-world”. Concretely, we use a simulation-based inference framework to quantify the physical information produced by observing and interacting with the evolving dynamic environment. We use a model-free reinforcement learning algorithm to train an agent to implement general strategies for revealing latent physical properties. We compare the behavior of this agent to human behavior observed in a similar task.
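The abstract's core idea, quantifying the physical information an action yields via simulation-based inference, can be sketched in miniature. The snippet below is an illustrative toy, not the paper's implementation: all names, the candidate-mass grid, the noise level, and the fixed push force are assumptions. An object of unknown mass is pushed with a known force; comparing the noisy observed acceleration against simulated accelerations (a = F/m) updates a discrete posterior over masses, and the entropy reduction of that posterior is the information-gain signal a reinforcement learning agent could be rewarded with.

```python
import math
import random

# Hypothetical micro-world: one object with an unknown mass drawn from a
# small candidate set. Pushing it with a known force produces a noisy
# observed acceleration a = F / m + noise.
MASSES = [1.0, 2.0, 4.0]   # candidate latent masses (illustrative choice)
TRUE_MASS = 2.0
NOISE = 0.1                # std. dev. of observation noise (assumption)

def entropy(p):
    """Shannon entropy of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def likelihood(obs_accel, force, mass):
    """Gaussian likelihood of the observation under simulated physics."""
    predicted = force / mass
    return math.exp(-((obs_accel - predicted) ** 2) / (2 * NOISE ** 2))

def push_and_update(posterior, force, rng):
    """Act (push), observe, update the posterior; reward = information gain."""
    obs = force / TRUE_MASS + rng.gauss(0, NOISE)
    weights = [p * likelihood(obs, force, m) for p, m in zip(posterior, MASSES)]
    total = sum(weights)
    new_posterior = [w / total for w in weights]
    reward = entropy(posterior) - entropy(new_posterior)
    return new_posterior, reward

rng = random.Random(0)
posterior = [1.0 / len(MASSES)] * len(MASSES)
for _ in range(10):
    posterior, reward = push_and_update(posterior, force=4.0, rng=rng)

map_index = max(range(len(MASSES)), key=lambda i: posterior[i])
print(MASSES[map_index])  # posterior concentrates on the true mass, 2.0
```

In the paper's framing, a model-free RL agent would learn *which* interrogatory actions to take by maximizing this kind of information-gain reward; here the action (a fixed push) is hard-coded purely to show the inference step.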
Original language: English
Title of host publication: Proceedings of the 41st Annual Meeting of the Cognitive Science Society
Place of Publication: Montreal
Publisher: Cognitive Science Society
Number of pages: 7
Publication status: Published - 27 Jul 2019


Keywords
  • physical simulation
  • active learning
  • probabilistic inference
  • reinforcement learning


