Learning Autonomous Mobility Using Real Demonstration Data

Jiacheng Gu, Zhibin Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

This work proposes an efficient learning-based framework for learning feedback control policies from human teleoperated demonstrations, achieving obstacle negotiation, staircase traversal, slip control and parcel delivery on a tracked robot. Because of uncertainties in real-world scenarios, e.g. obstacles and slippage, closed-loop feedback control plays an important role in improving robustness and resilience, but such control laws are difficult to program manually for autonomous behaviours. We formulate an architecture based on a long short-term memory (LSTM) neural network that effectively learns reactive control policies from human demonstrations. Using datasets from only a few real demonstrations, our algorithm can directly learn successful policies, including obstacle negotiation, stair climbing, delivery, fall recovery and corrective control of slippage. We propose a decomposition of complex robot actions to reduce the difficulty of learning long-term dependencies. Furthermore, we propose a method to efficiently handle non-optimal demonstrations and to learn new skills, since collecting enough demonstrations can be time-consuming and sometimes very difficult on a real robotic system.
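The abstract describes learning a reactive control policy by imitating teleoperated demonstrations with an LSTM. A minimal behaviour-cloning sketch of that idea is shown below; the use of PyTorch, all dimensions, and the names `LSTMPolicy`, `obs_dim`, and `act_dim` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LSTMPolicy(nn.Module):
    """Maps a sequence of robot observations to control commands,
    keeping recurrent state so the policy can react to history
    (e.g. detecting slippage from recent motion)."""
    def __init__(self, obs_dim=16, act_dim=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs_seq, state=None):
        out, state = self.lstm(obs_seq, state)
        return self.head(out), state

# Synthetic stand-in for demonstration data: observation sequences
# paired with the human operator's commands, shaped (batch, time, dim).
obs = torch.randn(8, 50, 16)
act = torch.randn(8, 50, 4)

policy = LSTMPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(20):  # a few epochs; real demo datasets are small
    pred, _ = policy(obs)
    loss = nn.functional.mse_loss(pred, act)  # imitate the demonstrator
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At deployment time such a policy would be stepped one observation at a time, threading the recurrent `state` between calls so the hidden memory carries the long-term context the paper's action decomposition is designed to shorten.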
Original language: English
Title of host publication: 2021 20th International Conference on Advanced Robotics (ICAR)
Number of pages: 7
ISBN (Electronic): 978-1-6654-3684-7
ISBN (Print): 978-1-6654-3685-4
Publication status: Published - 5 Jan 2022
Event: 20th International Conference on Advanced Robotics - Ljubljana, Slovenia
Duration: 7 Dec 2021 – 10 Dec 2021


Conference: 20th International Conference on Advanced Robotics
Abbreviated title: ICAR 2021


