Multi-expert learning of adaptive legged locomotion

Chuanyu Yang, Kai Yuan, Qiuguo Zhu, Wanming Yu, Zhibin Li

Research output: Contribution to journal › Article › peer-review


Achieving versatile robot locomotion requires motor skills that can adapt to previously unseen situations. We propose a multi-expert learning architecture (MELA) that learns to generate adaptive skills from a group of representative expert skills. During training, MELA is first initialized by a distinct set of pretrained experts, each in a separate deep neural network (DNN). Then, by learning the combination of these DNNs using a gating neural network (GNN), MELA can acquire more specialized experts and transitional skills across various locomotion modes. During runtime, MELA constantly blends multiple DNNs and dynamically synthesizes a new DNN to produce adaptive behaviors in response to changing situations. This approach leverages the advantages of trained expert skills and the fast online synthesis of adaptive policies to generate responsive motor skills during changing tasks. Using one unified MELA framework, we demonstrated successful multiskill locomotion on a real quadruped robot that performed coherent trotting, steering, and fall recovery autonomously, and showed the merit of multi-expert learning by generating behaviors that can adapt to unseen scenarios.
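The core mechanism described above, a gating network that blends the parameters of several expert networks into one synthesized policy network at each step, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the network sizes, the two-layer tanh architecture, and the linear gating network are all assumptions made for brevity.

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass through a small tanh MLP (illustrative architecture)."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)
    return h @ weights[-1] + biases[-1]  # linear output layer

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
obs_dim, act_dim, hid, n_experts = 8, 4, 16, 3

# Hypothetical pretrained expert DNNs; all experts must share one
# architecture so their parameters can be blended layer-wise.
experts = []
for _ in range(n_experts):
    Ws = [rng.normal(size=(obs_dim, hid)), rng.normal(size=(hid, act_dim))]
    bs = [np.zeros(hid), np.zeros(act_dim)]
    experts.append((Ws, bs))

# Hypothetical gating network: maps the observation to blending weights.
Wg = rng.normal(size=(obs_dim, n_experts))

def mela_step(obs):
    """One control step: gate, synthesize a new DNN, then act with it."""
    alpha = softmax(obs @ Wg)  # gating coefficients, sum to 1
    # Blend expert parameters layer-wise into a single synthesized DNN.
    Ws = [sum(a * e[0][l] for a, e in zip(alpha, experts)) for l in range(2)]
    bs = [sum(a * e[1][l] for a, e in zip(alpha, experts)) for l in range(2)]
    return mlp_forward(obs, Ws, bs), alpha

action, alpha = mela_step(rng.normal(size=obs_dim))
```

Because the blending happens in parameter space rather than by averaging expert outputs, the result at each step is itself a single coherent DNN, which is what allows smooth transitions between locomotion modes as the gating coefficients shift.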
Original language: English
Article number: eabb2174
Number of pages: 14
Journal: Science Robotics
Issue number: 49
Publication status: Published - 9 Dec 2020

