TY - UNPB
T1 - Constrained Training of Neural Networks via Theorem Proving
AU - Chevallier, M.
AU - Whyte, M.
AU - Fleuriot, J.D.
PY - 2022/7/8
Y1 - 2022/7/8
N2 - We introduce a theorem proving approach to the specification and generation of temporal logical constraints for training neural networks. We formalise a deep embedding of linear temporal logic over finite traces (LTLf) and an associated evaluation function characterising its semantics within the higher-order logic of the Isabelle theorem prover. We then formalise a loss function L that we formally prove to be sound and differentiable, with its derivative given by a function dL. We subsequently use Isabelle's automatic code generation mechanism to produce OCaml versions of LTLf, L and dL, which we integrate with PyTorch via OCaml bindings for Python. We show that, when used for training in an existing deep learning framework for dynamic movement, our approach produces the expected results for common movement specification patterns such as obstacle avoidance and patrolling. The distinctive benefit of our approach is its fully rigorous method for constrained training, which eliminates many of the risks inherent in ad hoc implementations of logical aspects directly in an "unsafe" programming language such as Python.
KW - linear temporal logic
KW - neural networks
KW - theorem proving
KW - Isabelle/HOL
UR - http://www.scopus.com/inward/record.url?eid=2-s2.0-85134643371&partnerID=MN8TOARS
U2 - 10.48550/arXiv.2207.03880
DO - 10.48550/arXiv.2207.03880
M3 - Preprint
BT - Constrained Training of Neural Networks via Theorem Proving
PB - arXiv
ER -