Parallel Neurosymbolic Integration with Concordia

Jonathan Feldstein, Modestas Jurcius, Efthymia Tsamoura

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Parallel neurosymbolic architectures have been applied effectively in NLP by distilling knowledge from a logic theory into a deep model. However, prior art faces several limitations, including support for only restricted forms of logic theories and reliance on the assumption of independence between the logic and the deep network. We present Concordia, a framework that overcomes the limitations of prior art. Concordia is agnostic to both the deep network and the logic theory, offering support for a wide range of probabilistic theories. Our framework supports supervised training of both components and unsupervised training of the neural component. Concordia has been successfully applied to tasks beyond NLP and data classification, improving state-of-the-art accuracy on collective activity detection, entity linking and recommendation tasks.
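The distillation idea the abstract refers to can be illustrated with a standard knowledge-distillation objective: the deep network (student) is trained against both the gold labels and the distribution implied by the logic theory (teacher). The sketch below is a minimal, hypothetical illustration of such a combined loss, not Concordia's actual training objective; the weighting scheme and the teacher distribution are assumptions for illustration only.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distillation_loss(student_probs, teacher_probs, label_onehot, alpha=0.5):
    """Convex combination of supervised cross-entropy on the gold label and a
    KL term pulling the student toward the logic teacher's distribution.
    `alpha` (hypothetical hyperparameter) trades off the two signals."""
    ce = -sum(y * math.log(s) for y, s in zip(label_onehot, student_probs) if y > 0)
    return (1 - alpha) * ce + alpha * kl_divergence(teacher_probs, student_probs)

# Toy 3-class example (all numbers illustrative):
student = [0.2, 0.5, 0.3]   # deep network's softmax output
teacher = [0.1, 0.8, 0.1]   # distribution implied by the logic theory
label   = [0.0, 1.0, 0.0]   # gold label
loss = distillation_loss(student, teacher, label)
```

In this toy setting the teacher agrees with the gold label, so the KL term reinforces the supervised signal; when the logic theory and the labels disagree, `alpha` controls which component dominates.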
Original language: English
Title of host publication: Proceedings of the 40th International Conference on Machine Learning
Publisher: PMLR
Pages: 9870-9885
Number of pages: 16
Volume: 202
Publication status: Published - 10 Jul 2023
Event: The Fortieth International Conference on Machine Learning - Honolulu, United States
Duration: 23 Jul 2023 - 29 Jul 2023
Conference number: 40
https://icml.cc/

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
ISSN (Electronic): 2640-3498

Conference

Conference: The Fortieth International Conference on Machine Learning
Abbreviated title: ICML 2023
Country/Territory: United States
City: Honolulu
Period: 23/07/23 - 29/07/23
Internet address: https://icml.cc/
