Learning Programmatically Structured Representations with Perceptor Gradients

Svetlin Penkov, Subramanian Ramamoorthy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

We present the perceptor gradients algorithm – a novel approach to learning symbolic representations based on the idea of decomposing an agent’s policy into i) a perceptor network extracting symbols from raw observation data and ii) a task-encoding program which maps the input symbols to output actions. We show that the proposed algorithm is able to learn representations that can be directly fed into a Linear-Quadratic Regulator (LQR) or a general-purpose A* planner. Our experimental results confirm that the perceptor gradients algorithm is able to efficiently learn transferable symbolic representations, as well as generate new observations according to a semantically meaningful specification.
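The decomposition described in the abstract can be illustrated with a minimal sketch: a perceptor network maps a raw observation to a distribution over symbols, and a fixed task-encoding program maps a sampled symbol to an action. All names, shapes, and the linear perceptor below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical perceptor: a linear layer with softmax producing a
# distribution over 3 discrete symbols from a 4-dim observation.
W = rng.normal(scale=0.1, size=(4, 3))

def perceptor(obs):
    logits = obs @ W
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Task-encoding program: a fixed symbolic rule mapping symbol -> action.
# (In the paper this role is played by e.g. an LQR controller or an A* planner.)
def program(symbol):
    return {0: "left", 1: "stay", 2: "right"}[symbol]

def policy(obs):
    probs = perceptor(obs)
    symbol = rng.choice(3, p=probs)  # stochastic perceptor output
    return program(symbol), probs[symbol]

action, p = policy(np.ones(4))
```

Because only the perceptor is parameterised while the program stays fixed, the log-probability of the sampled symbol is what a policy-gradient update would differentiate through.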
Original language: English
Title of host publication: Proc. International Conference on Learning Representations, 2019
Place of Publication: New Orleans, Louisiana, USA
Number of pages: 16
Publication status: E-pub ahead of print - 9 May 2019
Event: Seventh International Conference on Learning Representations - New Orleans, United States
Duration: 6 May 2019 – 9 May 2019


Conference: Seventh International Conference on Learning Representations
Abbreviated title: ICLR 2019
Country/Territory: United States
City: New Orleans

Keywords / Materials (for Non-textual outputs)

  • representation learning
  • structured representations
  • symbols
  • programs


