TY - GEN
T1 - Semantic objective functions
T2 - 17th International Conference on Agents and Artificial Intelligence
AU - Mendez-Lucero, Miguel Angel
AU - Gallardo, Enrique Bojorquez
AU - Belle, Vaishak
N1 - Conference code: 17
PY - 2025/2/25
Y1 - 2025/2/25
N2 - Issues of safety, explainability, and efficiency are of increasing concern in learning systems deployed with hard and soft constraints. Loss-function-based techniques have shown promising results in this area by embedding logical constraints during neural network training. Through an integration of logic and information geometry, we provide a construction and theoretical framework for these tasks that generalizes many existing approaches. We propose a loss-based method that embeds knowledge (enforces logical constraints) into a machine learning model that outputs probability distributions. This is done by constructing a distribution from the logical formula and defining a loss function as a linear combination of the original loss function with the Fisher-Rao distance or Kullback-Leibler divergence to the constraint distribution. The construction is primarily for logical constraints in the form of propositional formulas (Boolean variables), but it can be extended to formulas of a first-order language with finitely many variables over a model with a compact domain (categorical and continuous variables), and to other statistical models that are to be trained with semantic information. We evaluate our method on a variety of learning tasks, including classification tasks with logic constraints, transferring knowledge from logic formulas, and knowledge distillation.
AB - Issues of safety, explainability, and efficiency are of increasing concern in learning systems deployed with hard and soft constraints. Loss-function-based techniques have shown promising results in this area by embedding logical constraints during neural network training. Through an integration of logic and information geometry, we provide a construction and theoretical framework for these tasks that generalizes many existing approaches. We propose a loss-based method that embeds knowledge (enforces logical constraints) into a machine learning model that outputs probability distributions. This is done by constructing a distribution from the logical formula and defining a loss function as a linear combination of the original loss function with the Fisher-Rao distance or Kullback-Leibler divergence to the constraint distribution. The construction is primarily for logical constraints in the form of propositional formulas (Boolean variables), but it can be extended to formulas of a first-order language with finitely many variables over a model with a compact domain (categorical and continuous variables), and to other statistical models that are to be trained with semantic information. We evaluate our method on a variety of learning tasks, including classification tasks with logic constraints, transferring knowledge from logic formulas, and knowledge distillation.
KW - semantic objective functions
KW - probability distributions
KW - logic and deep learning
KW - semantic regularization
KW - knowledge distillation
KW - constraint learning
KW - applied information geometry
U2 - 10.5220/0013229200003890
DO - 10.5220/0013229200003890
M3 - Conference contribution
VL - 3
T3 - ICAART
SP - 909
EP - 917
BT - Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 3
A2 - Rocha, Ana Paula
A2 - Steels, Luc
A2 - van den Herik, H. Jaap
PB - SCITEPRESS
Y2 - 23 February 2025 through 25 February 2025
ER -