Edinburgh Research Explorer

Implicitly learning to reason in first-order logic

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Open Access permissions: Open

Documents

https://papers.nips.cc/paper/8599-implicitly-learning-to-reason-in-first-order-logic
Original language: English
Title of host publication: Advances in Neural Information Processing Systems 32 (NeurIPS 2019)
Publisher: Curran Associates Inc
Pages: 3376-3386
Number of pages: 11
Volume: 32
Publication status: Published - 14 Dec 2019
Event: Thirty-third Conference on Neural Information Processing Systems - Vancouver, Canada
Duration: 8 Dec 2019 - 14 Dec 2019
https://neurips.cc/Conferences/2019

Conference

Conference: Thirty-third Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS
Country: Canada
City: Vancouver
Period: 8/12/19 - 14/12/19
Internet address: https://neurips.cc/Conferences/2019

Abstract

We consider the problem of answering queries about formulas of first-order logic based on background knowledge partially represented explicitly as other formulas, and partially represented as examples independently drawn from a fixed probability distribution. PAC semantics, introduced by Valiant, is one rigorous, general proposal for learning to reason in formal languages: although weaker than classical entailment, it allows for a powerful model-theoretic framework for answering queries while requiring minimal assumptions about the form of the distribution in question. To date, however, the most significant limitation of that approach, and more generally of most machine learning approaches with robustness guarantees, is that the logical language is ultimately essentially propositional, with finitely many atoms. Indeed, the theoretical findings on learning relational theories in such generality have been resoundingly negative. This is despite the fact that first-order logic is widely argued to be the most appropriate language for representing human knowledge. In this work, we present a new theoretical approach to robustly learning to reason in first-order logic, and consider universally quantified clauses over a countably infinite domain. Our results exploit symmetries exhibited by constants in the language, and generalize the notion of implicit learnability to show how queries can be computed against (implicitly) learned first-order background knowledge.
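To give a concrete feel for implicit learning under PAC semantics, the following is a minimal, hypothetical sketch in the propositional setting (the paper's contribution is lifting such guarantees to first-order clauses; the function names and the toy distribution below are illustrative assumptions, not the authors' implementation). A query clause is accepted as approximately valid if it is witnessed true in at least a (1 - epsilon) fraction of partially masked examples, without ever constructing an explicit learned theory:

```python
import random

def eval_clause(clause, example):
    """Witnessed evaluation of a disjunctive clause under a partial example.

    clause: list of (variable, polarity) literals, e.g. [("x0", True)].
    example: dict mapping variable -> bool; masked variables are absent.
    Returns True only if some literal is explicitly witnessed true.
    """
    return any(var in example and example[var] == pol for var, pol in clause)

def pac_decide(query, examples, epsilon):
    """Accept the query as (1 - epsilon)-valid under PAC semantics if it is
    witnessed true in at least a (1 - epsilon) fraction of the examples.
    The background knowledge is never made explicit: it is consulted only
    implicitly, through the sampled examples themselves."""
    hits = sum(eval_clause(query, ex) for ex in examples)
    return hits >= (1 - epsilon) * len(examples)

# Toy distribution (an assumption for illustration): x0 is true with
# probability 0.95, and x1 is masked in about half of the examples.
random.seed(0)
examples = []
for _ in range(1000):
    ex = {"x0": random.random() < 0.95}
    if random.random() < 0.5:
        ex["x1"] = random.random() < 0.5
    examples.append(ex)

query = [("x0", True), ("x1", True)]  # the clause (x0 OR x1)
print(pac_decide(query, examples, epsilon=0.1))
```

The witnessed (three-valued) evaluation is what makes the procedure robust to masking: an unobserved variable never counts in the query's favor, so acceptance errs on the side of rejection.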


ID: 115878713