Imitation learning has long been used to alleviate the tractability issues that arise in reinforcement learning. However, most work in this area makes several restrictive assumptions, such as access to the expert's actions, availability of many expert demonstrations, and injection of task-specific domain knowledge into the learning process. We propose reinforced inverse dynamics modeling (RIDM), a method that combines reinforcement learning and imitation from observation (IfO) to imitate a single expert demonstration with no access to the expert's actions and with little task-specific domain knowledge. Given only the expert's raw states at each time step, such as joint angles in a robot control task, we learn an inverse dynamics model that produces the low-level actions, such as torques, needed to transition from one state to the next so that the reward from the environment is maximized. We demonstrate that, under the same constraints, RIDM outperforms other techniques on six domains of the MuJoCo simulator and on two robot soccer tasks with two experts from the RoboCup 3D simulation league in the SimSpark simulator.
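The core idea described in the abstract — fitting an inverse dynamics model to an expert's state-only trajectory so that executing its actions maximizes environment reward — can be illustrated with a minimal sketch. The following is only a toy illustration, not the paper's implementation: it assumes a 1-D point-mass environment, a linear inverse dynamics model, and plain random search as the reward-maximizing optimizer (the paper's optimizer and model class are more capable); all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D point-mass dynamics: the action directly displaces the state.
def step(s, a):
    return s + a

# A single expert demonstration consisting of raw states only
# (no expert actions are available, as in imitation from observation).
expert_states = np.linspace(0.0, 1.0, 11)  # 10 transitions

# Environment reward for a rollout: negative tracking error to the
# expert's next state at each time step.
def rollout_reward(W):
    s, total = expert_states[0], 0.0
    for s_next in expert_states[1:]:
        # Linear inverse dynamics model: action from (current state, target state).
        a = W[0] * s + W[1] * s_next + W[2]
        s = step(s, a)
        total -= abs(s - s_next)
    return total

# Optimize the model parameters to maximize reward via simple random search
# (a stand-in for the reinforcement-learning optimization in the paper).
best_W = np.zeros(3)
best_r = rollout_reward(best_W)
for _ in range(2000):
    cand = best_W + 0.1 * rng.standard_normal(3)
    r = rollout_reward(cand)
    if r > best_r:
        best_W, best_r = cand, r
```

After optimization, `best_W` approximates the true inverse dynamics of the toy environment (action = next state minus current state), and the rollout tracks the expert's state sequence far more closely than the untrained model.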
Number of pages: 14
Publication status: Published - 15 Jun 2019
Event: ICML 2019: Imitation, Intent and Interaction Workshop (I3 2019), Long Beach Convention Center, Long Beach, United States, 15 Jun 2019