Learning Adaptive Grasping From Human Demonstrations

Shuaijun Wang, Wenbin Hu, Lining Sun, Xin Wang, Zhibin Li

Research output: Contribution to journal › Article › peer-review

Abstract

This work studied a learning-based approach that learns grasping policies from teleoperated human demonstrations and achieves adaptive grasping using three different neural network (NN) structures. To transfer human grasping skills effectively, we used multi-sensing states within a sliding time window to learn the state-action mapping. By teleoperating an anthropomorphic robotic hand through human hand tracking, we collected training datasets of representative grasps of various objects, which were used to train grasping policies with the three proposed NN structures. The learned policies can grasp objects of varying sizes, shapes, and stiffness. We benchmarked the grasping performance of all policies, and experimental validation showed significant advantages of using sequential history states over instantaneous feedback. Based on the benchmark, we selected the best-performing NN structure and conducted extensive experiments grasping hundreds of unseen objects with adaptive motions and grasping forces.
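
The abstract does not detail the three NN structures. As a rough illustration of the core idea, a state-action mapping learned from a sliding time window of multi-sensing states, the sketch below shows one plausible structure (an MLP over a flattened window) trained by behavioral cloning on demonstration data. All names and dimensions (WindowedGraspPolicy, state_dim, window, action_dim) and the layer sizes are illustrative assumptions, not the authors' implementation.

# Minimal sketch (assumed structure, not the paper's code): map a sliding
# window of multi-sensing states to hand commands, trained by regressing
# the actions recorded during teleoperated demonstrations.
import torch
import torch.nn as nn

class WindowedGraspPolicy(nn.Module):
    def __init__(self, state_dim=24, window=10, action_dim=12):
        super().__init__()
        # Flatten the time window of sensor states into one input vector.
        self.net = nn.Sequential(
            nn.Linear(state_dim * window, 256),
            nn.ReLU(),
            nn.Linear(256, 128),
            nn.ReLU(),
            nn.Linear(128, action_dim),  # joint position/force commands
        )

    def forward(self, state_window):
        # state_window: (batch, window, state_dim) -> (batch, action_dim)
        return self.net(state_window.flatten(start_dim=1))

policy = WindowedGraspPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
states = torch.randn(64, 10, 24)   # stand-in for demonstration sensor windows
actions = torch.randn(64, 12)      # stand-in for recorded hand commands
loss = nn.functional.mse_loss(policy(states), actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Using a window of past states rather than only the current reading is what the benchmark in the paper credits for the performance gain; the specific window length and sensor set here are placeholders.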
Original language: English
Pages (from-to): 3865-3873
Number of pages: 9
Journal: IEEE/ASME Transactions on Mechatronics
Volume: 27
Issue number: 5
Early online date: 16 Feb 2022
DOIs
Publication status: Published - 1 Oct 2022

Keywords

  • Adaptive grasping
  • robot learning
  • teleoperation
  • human demonstrations
  • Training
  • Actuators
  • Artificial neural networks
  • Grasping
  • Robot sensing systems
  • History
  • Robots
