Deep Inductive Logic Programming meets Reinforcement Learning

Vaishak Belle, Andreas Bueff

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

One approach to explaining the hierarchical levels of understanding within a machine learning model is the symbolic method of inductive logic programming (ILP), which is data-efficient and capable of learning first-order logic rules that can entail data behaviour. A differentiable extension to ILP, so-called differentiable Neural Logic (dNL) networks, can learn Boolean functions because their neural architecture incorporates symbolic reasoning. We propose an application of dNL in the field of Relational Reinforcement Learning (RRL) to address dynamic continuous environments. This extends previous work on applying dNL-based ILP in RRL settings: our proposed model updates the architecture to enable it to solve problems in continuous RL environments. The goal of this research is to improve upon current ILP methods for use in RRL by incorporating non-linear continuous predicates, allowing RRL agents to reason and make decisions in dynamic and continuous environments.
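To make the dNL idea concrete, the following is a minimal sketch (not the authors' implementation) of one common formulation of a differentiable conjunction neuron, where trainable membership weights select which input atoms take part in a soft AND, plus an illustrative continuous predicate of the kind the abstract describes (a smooth "greater-than" test over a continuous state variable). The function names, weights, and threshold are assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dnl_conjunction(x, w):
    """Differentiable AND over fuzzy truth values x in [0, 1].

    Each input i participates with membership m_i = sigmoid(w_i):
    m_i near 1 includes the atom, m_i near 0 ignores it.
    F_conj(x) = prod_i (1 - m_i * (1 - x_i)).
    """
    m = sigmoid(w)
    return float(np.prod(1.0 - m * (1.0 - x)))

def continuous_predicate(value, threshold, sharpness=10.0):
    """Illustrative smooth predicate gt(value, threshold): a sigmoid
    ramp that approaches a crisp comparison as sharpness grows."""
    return float(sigmoid(sharpness * (value - threshold)))

# Large positive weights select both atoms, so the neuron behaves
# like a crisp AND on (near-)Boolean inputs.
w = np.array([10.0, 10.0])
print(dnl_conjunction(np.array([1.0, 1.0]), w))  # close to 1.0
print(dnl_conjunction(np.array([1.0, 0.0]), w))  # close to 0.0

# A continuous state feature (e.g. velocity) grounded as a fuzzy atom.
print(continuous_predicate(0.8, threshold=0.5))  # close to 1.0
```

Because both the membership weights and the predicate parameters are differentiable, rules built from such neurons can be trained by gradient descent alongside the rest of an RL agent, which is what lets this style of ILP handle continuous environments.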
Original language: English
Title of host publication: Proceedings 39th International Conference on Logic Programming
Publisher: Open Publishing Association
Number of pages: 14
Publication status: Published - 12 Sept 2023
Event: The 39th International Conference on Logic Programming - Imperial College London, London, United Kingdom
Duration: 9 Jul 2023 - 15 Jul 2023
Conference number: 39

Publication series

Name: Electronic Proceedings in Theoretical Computer Science (EPTCS)
Publisher: Open Publishing Association
ISSN (Electronic): 2075-2180


Conference: The 39th International Conference on Logic Programming
Abbreviated title: ICLP 2023
Country/Territory: United Kingdom


