Action Sequencing Using Visual Permutations

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

Humans can easily reason about the sequence of high-level actions needed to complete tasks, but it is particularly difficult to instill this ability in robots trained from relatively few examples. This work considers the task of neural action sequencing conditioned on a single reference visual state. This task is extremely challenging: not only is it subject to the significant combinatorial complexity that arises from large action sets, but it also requires a model that can perform some form of symbol grounding, mapping high-dimensional input data to actions while reasoning about action relationships. This letter takes a permutation perspective and argues that action sequencing benefits from the ability to reason about both permutations and ordering concepts. Empirical analysis shows that neural models trained with latent permutations outperform standard neural architectures on constrained action sequencing tasks. Results also show that action sequencing using visual permutations is an effective mechanism for initialising and speeding up traditional planning techniques, and that it scales successfully to far larger action sets than models considered previously.
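The letter's implementation is not reproduced on this page. As an illustration of the latent-permutation idea the abstract refers to, the sketch below shows the Sinkhorn operator, a standard way to relax discrete permutations into differentiable doubly-stochastic matrices (as in Sinkhorn/Gumbel-Sinkhorn networks). The function name, toy dimensions, and image-conditioned setup in the comments are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sinkhorn(log_alpha, n_iters=20):
    """Approximately project a score matrix onto the set of
    doubly-stochastic matrices by alternating row and column
    normalisation in log space (the Sinkhorn operator)."""
    for _ in range(n_iters):
        # Row normalisation: each row sums to 1 after exponentiation.
        log_alpha = log_alpha - np.logaddexp.reduce(log_alpha, axis=1, keepdims=True)
        # Column normalisation: each column sums to 1.
        log_alpha = log_alpha - np.logaddexp.reduce(log_alpha, axis=0, keepdims=True)
    return np.exp(log_alpha)

# Toy example: scores over orderings of 4 actions, standing in for
# the output of a network conditioned on a reference image
# (hypothetical setup for illustration).
rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 4))
soft_perm = sinkhorn(scores)

print(soft_perm.round(2))
# Rows and columns each sum to approximately 1:
print(soft_perm.sum(axis=0), soft_perm.sum(axis=1))
```

Because the relaxed matrix is differentiable, it can be trained end-to-end with gradient descent; at inference a hard permutation can be recovered from it, for example with the Hungarian algorithm (scipy.optimize.linear_sum_assignment).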
Original language: English
Pages (from-to): 1745-1752
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 6
Issue number: 2
Early online date: 16 Feb 2021
DOIs
Publication status: Published - 1 Apr 2021

Keywords

  • deep learning methods
  • learning from demonstration
  • representation learning
