Abstract / Description of output
We consider the problem of direct policy learning in situations where the policies are only observable through their projections into the null space of a set of dynamic, non-linear task constraints. We tackle the issue of deriving consistent data for learning such policies and make two contributions towards its solution. Firstly, we derive the conditions required to exactly reconstruct null-space policies and suggest a learning strategy based on this derivation. Secondly, we consider the case where the null-space policy is conservative and show that such a policy can be learnt more easily and robustly by learning the underlying potential function and using this as our representation of the policy.
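As a rough illustration of the setting described above (not the authors' implementation), the sketch below assumes a policy that is only observed after projection into the null space of a state-varying constraint, i.e. u = N(x) π(x) with N(x) = I - A(x)⁺A(x), and further assumes the policy is conservative, π(x) = -∇φ(x) for a simple quadratic potential φ. The attractor `x_star`, the quadratic form of φ, and all variable names are illustrative assumptions; the point is only to show how projected observations can still constrain a potential-based policy model.

```python
# Minimal sketch, assuming a conservative policy pi(x) = -grad phi(x)
# with phi(x) = 0.5 * ||x - x_star||^2, observed only through null-space
# projections u = N(x) pi(x). All specifics here are illustrative.
import numpy as np

def null_space_projector(A):
    """N = I - A^+ A projects onto the null space of the constraint A(x)."""
    return np.eye(A.shape[1]) - np.linalg.pinv(A) @ A

x_star = np.array([1.0, -0.5, 0.25])   # attractor of the assumed potential

def policy(x):
    """Conservative policy pi(x) = -grad phi(x) = x_star - x."""
    return x_star - x

# Generate observations: at each state a different (random) 1-D constraint
# removes one direction of the policy, so only the null-space component is seen.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 3))
rows, rhs = [], []
for x in X:
    N = null_space_projector(rng.normal(size=(1, 3)))
    u = N @ policy(x)                   # observed (projected) movement
    # Model u = N (m - x), i.e. N m = u + N x: linear in the unknown attractor m.
    rows.append(N)
    rhs.append(u + N @ x)

m_hat, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
print("recovered attractor:", m_hat)    # approaches x_star when the observed
                                        # constraints vary enough across states
```

Because each observation only constrains the policy within the null space of that state's constraint, recovery of the full policy relies on the constraints varying across the data set; with a fixed constraint the stacked system above would be rank-deficient.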
| Original language | English |
|---|---|
| Title of host publication | Workshop on Robotics and Mathematics (ROBOMAT '07), Coimbra, Portugal |
| Number of pages | 6 |
| Publication status | Published - 2007 |