Edinburgh Research Explorer

Recurrent Deterministic Policy Gradient Method for Bipedal Locomotion on Rough Terrain Challenge

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  • Doo Re Song
  • Chuanyu Yang
  • Christopher McGreavy
  • Zhibin Li

Original language: English
Title of host publication: Proceedings of the 15th International Conference on Control, Automation, Robotics and Vision (ICARCV 2018)
Place of publication: Singapore
Number of pages: 8
State: Accepted/In press - 1 Sep 2018
Event: 15th International Conference on Control, Automation, Robotics and Vision, Singapore
Duration: 18 Nov 2018 - 21 Nov 2018
https://www.icarcv.net/

Conference

Conference: 15th International Conference on Control, Automation, Robotics and Vision
Abbreviated title: ICARCV 2018
Country: Singapore
Period: 18/11/18 - 21/11/18
Internet address: https://www.icarcv.net/

Abstract

This paper presents a deep learning framework that is capable of solving partially observable locomotion tasks, based on our novel interpretation of Recurrent Deterministic Policy Gradient (RDPG). We study the bias of the sampled error measure induced by the partial observability of the environment, and its variance induced by subtrajectory sampling. Four major improvements are introduced in our RDPG-based learning framework: tail-step bootstrap of temporal difference, initialisation of the hidden state using the past subtrajectory, truncation of temporal backpropagation, and injection of external experiences learned by other agents. The proposed learning framework was implemented to solve the BipedalWalker challenge in OpenAI's Gym simulation environment, where only partial state information is available. Our simulation study shows that the autonomous behaviours generated by the RDPG agent are highly adaptive to a variety of obstacles and enable the agent to effectively traverse rugged terrains over long distances with a higher success rate than leading contenders.
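Two of the improvements named in the abstract — initialising the recurrent hidden state from the past subtrajectory and truncating temporal backpropagation — can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the function names, network (a toy tanh recurrence standing in for the actor/critic RNN), and all shapes are hypothetical, and only the sampling/burn-in/truncation structure reflects the idea described in the abstract.

```python
import numpy as np

def sample_subtrajectory(episode, burn_in, unroll, rng):
    """Sample a subtrajectory from a stored episode.

    The first `burn_in` steps are used only to initialise the hidden
    state (no gradients); the next `unroll` steps form the truncated
    backpropagation-through-time window.
    """
    T = len(episode)
    start = rng.integers(0, T - (burn_in + unroll) + 1)
    warmup = episode[start : start + burn_in]
    train = episode[start + burn_in : start + burn_in + unroll]
    return warmup, train

def rnn_step(h, x, W_h, W_x):
    # Toy tanh recurrence standing in for the recurrent actor/critic.
    return np.tanh(h @ W_h + x @ W_x)

# Hypothetical dimensions and data, for illustration only.
rng = np.random.default_rng(0)
obs_dim, hid_dim = 4, 8
W_h = rng.normal(size=(hid_dim, hid_dim)) * 0.1
W_x = rng.normal(size=(obs_dim, hid_dim)) * 0.1
episode = [rng.normal(size=obs_dim) for _ in range(50)]

warmup, train = sample_subtrajectory(episode, burn_in=10, unroll=5, rng=rng)

h = np.zeros(hid_dim)
for x in warmup:      # burn-in: roll the RNN forward, gradients discarded
    h = rnn_step(h, x, W_h, W_x)
h0 = h                # hidden state at the start of the training window
for x in train:       # only these steps would enter truncated BPTT
    h = rnn_step(h, x, W_h, W_x)
```

The burn-in pass gives the training window a hidden state consistent with the episode's history, rather than an arbitrary zero state, while limiting gradient computation to the short `unroll` window keeps the cost of backpropagation through time bounded.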

ID: 76210378