Learning natural locomotion behaviors for humanoid robots using human bias

Chuanyu Yang, Kai Yuan, Shuai Heng, Taku Komura, Zhibin Li

Research output: Contribution to journal › Article › peer-review

Abstract

This letter presents a new learning framework that leverages knowledge from imitation learning, deep reinforcement learning, and control theory to achieve human-style locomotion that is natural, dynamic, and robust for humanoids. We propose two novel approaches to introduce human bias: motion capture data and a special Multi-Expert network structure. The Multi-Expert network smoothly blends behavioral features, and an augmented reward design combines task and imitation rewards. The reward design is composable, tunable, and explainable because it builds on fundamental concepts from conventional humanoid control. We rigorously validated and benchmarked the learning framework, which consistently produced robust locomotion behaviors in various test scenarios. Furthermore, we demonstrated the capability of learning robust and versatile policies in the presence of disturbances, such as terrain irregularities and external pushes.
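To illustrate the two ideas named in the abstract, the sketch below shows (a) a softmax-gated blend of expert outputs, the generic mixture-of-experts mechanism behind a "Multi-Expert network structure", and (b) a weighted sum of task and imitation rewards as one way a composable reward can be assembled. This is a minimal illustration under assumed interfaces, not the paper's actual architecture; the function names, shapes, and weights (`w_task`, `w_imit`) are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: produces non-negative gating
    # weights that sum to 1.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def blend_experts(gate_logits, expert_outputs):
    """Smoothly blend per-expert actions via softmax gating.

    gate_logits:    (n_experts,) scores from a gating network (hypothetical).
    expert_outputs: (n_experts, action_dim) actions proposed by each expert.
    Returns a convex combination of the expert actions, so the blended
    action varies smoothly as the gating scores change.
    """
    w = softmax(gate_logits)
    return w @ expert_outputs

def composite_reward(task_reward, imitation_reward, w_task=0.5, w_imit=0.5):
    # Composable reward: a tunable weighted sum of a task term
    # (e.g. velocity tracking) and an imitation term (e.g. pose
    # similarity to motion capture). Weights are illustrative knobs.
    return w_task * task_reward + w_imit * imitation_reward

# Example: equal gating scores average the two experts' actions.
action = blend_experts(np.array([0.0, 0.0]),
                       np.array([[1.0, 0.0],
                                 [0.0, 1.0]]))
```

Because the gating weights are a softmax, the blended action interpolates continuously between experts rather than switching discretely, which is one common reading of "smoothly blend behavioral features".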
Original language: English
Pages (from-to): 2610-2617
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Issue number: 2
Early online date: 10 Feb 2020
Publication status: Published - 30 Apr 2020

Keywords

  • Deep learning in robotics and automation
  • humanoid and bipedal locomotion
  • learning from demonstration
