RIDM: Reinforced Inverse Dynamics Modeling for Learning from a Single Observed Demonstration

Brahma S. Pavse, Faraz Torabi, Josiah Hanna, Garrett Warnell, Peter Stone

Research output: Contribution to conference › Poster › peer-review

Abstract / Description of output

Imitation learning has long been an approach to alleviating the tractability issues that arise in reinforcement learning. However, much of the literature makes several assumptions, such as access to the expert's actions, availability of many expert demonstrations, and injection of task-specific domain knowledge into the learning process. We propose reinforced inverse dynamics modeling (RIDM), a method that combines reinforcement learning and imitation from observation (IfO) to perform imitation from a single expert demonstration, with no access to the expert's actions and with little task-specific domain knowledge. Given only a single sequence of the expert's raw states at each time-step, such as joint angles in a robot control task, we learn an inverse dynamics model to produce the necessary low-level actions, such as torques, to transition from one state to the next such that the reward from the environment is maximized. We demonstrate that RIDM outperforms other techniques, when the same constraints are applied to those methods, on six domains of the MuJoCo simulator and on two robot soccer tasks for two experts from the RoboCup 3D simulation league in the SimSpark simulator.
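The abstract's core loop can be sketched in miniature: given only the expert's observed states, parameterize an inverse dynamics model that maps a (current state, next expert state) pair to an action, execute it in the environment, and optimize the model's parameters to maximize environment reward. The sketch below uses a toy 1-D deterministic environment, a linear inverse dynamics model, and simple random search as the optimizer; all of these are illustrative assumptions, not the paper's actual environments, model class, or optimization algorithm.

```python
import numpy as np

# Toy deterministic environment: a 1-D point whose next state is s' = s + a,
# rewarded for tracking an observed expert state trajectory.
# (Illustrative stand-in; the paper uses MuJoCo and SimSpark domains.)

def rollout(theta, expert_states):
    """Run the inverse-dynamics controller; return total environment reward."""
    s = expert_states[0]
    total_reward = 0.0
    for t in range(len(expert_states) - 1):
        # Inverse dynamics model (assumed linear here): predict the action
        # needed to move from the current state to the next expert state.
        a = theta * (expert_states[t + 1] - s)
        s = s + a                                        # environment step
        total_reward += -abs(expert_states[t + 1] - s)   # tracking reward
    return total_reward

def train_ridm(expert_states, iters=200, sigma=0.3, seed=0):
    """Random-search stand-in for the paper's reward-driven optimizer."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    best = rollout(theta, expert_states)
    for _ in range(iters):
        cand = theta + sigma * rng.standard_normal()
        r = rollout(cand, expert_states)
        if r > best:                 # keep parameters that raise reward
            theta, best = cand, r
    return theta, best

# A single observed demonstration: states only, no expert actions.
expert_states = np.linspace(0.0, 1.0, 11)
theta, reward = train_ridm(expert_states)
```

The key property mirrored here is that the learner never sees expert actions: it recovers them implicitly by fitting an inverse dynamics model whose rollouts score well under the environment's reward.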
Original language: English
Number of pages: 14
Publication status: Published - 15 Jun 2019
Event: ICML 2019: Imitation, Intent and Interaction Workshop - Long Beach Convention Center, Long Beach, United States
Duration: 15 Jun 2019 - 15 Jun 2019


Workshop: ICML 2019: Imitation, Intent and Interaction Workshop
Abbreviated title: I3 2019
Country/Territory: United States
City: Long Beach


