Learning explanatory logical rules in non-linear domains: a neuro-symbolic approach

Andreas Bueff*, Vaishak Belle

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Deep neural networks, despite their capabilities, are constrained by the need for large-scale training data and often fall short in generalisation and interpretability. Inductive logic programming (ILP) offers an intriguing alternative with its data-efficient learning of first-order logic rules. However, ILP grapples with challenges of its own, notably the handling of non-linearity in continuous domains. The rise of neuro-symbolic ILP aims to mitigate these challenges by combining deep learning with relational ILP models, enhancing interpretability and producing logical decision boundaries. In this research, we introduce a neuro-symbolic ILP framework, grounded in differentiable Neural Logic networks and tailored for non-linear rule extraction in mixed discrete-continuous spaces. Our methodology emphasises the extraction of non-linear functions from mixed-domain data. Our preliminary findings showcase the architecture’s capability to identify non-linear functions from continuous data, offering a new perspective in neural-symbolic research and underlining the adaptability of ILP-based frameworks for regression challenges in continuous scenarios.
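To illustrate the kind of building block the framework rests on: differentiable Neural Logic (dNL) networks replace hard Boolean connectives with soft, trainable ones. A minimal sketch of such conjunction and disjunction neurons is given below, assuming the common membership-weight formulation in which a sigmoid-gated weight decides whether each input atom participates in the rule; the exact formulation used in the paper may differ.

```python
import numpy as np

def sigmoid(m):
    return 1.0 / (1.0 + np.exp(-m))

def dnl_conjunction(x, m):
    """Soft AND over truth values x in [0, 1].

    w = sigmoid(m) are trainable membership weights: w_i near 1 means
    input i takes part in the conjunction, w_i near 0 means it is ignored.
    Each factor 1 - w_i * (1 - x_i) is near x_i when included and near 1
    (logically neutral) when excluded.
    """
    w = sigmoid(m)
    return np.prod(1.0 - w * (1.0 - x))

def dnl_disjunction(x, m):
    """Soft OR over truth values x in [0, 1], dual to the conjunction."""
    w = sigmoid(m)
    return 1.0 - np.prod(1.0 - w * x)

# Example: with both membership weights saturated on, the neuron behaves
# like an ordinary AND; with the second weight off, that input is ignored.
m_both_on = np.array([10.0, 10.0])
m_second_off = np.array([10.0, -10.0])
print(dnl_conjunction(np.array([1.0, 0.0]), m_both_on))     # near 0
print(dnl_conjunction(np.array([1.0, 0.0]), m_second_off))  # near 1
```

Because both neurons are smooth in `m`, the membership weights can be trained by gradient descent; thresholding them after training recovers a discrete logical rule, which is what makes architectures of this kind interpretable.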
Original language: English
Number of pages: 34
Journal: Machine Learning
Early online date: 8 Apr 2024
Publication status: E-pub ahead of print, 8 Apr 2024


