Abstract
Robust learning in expressive languages with real-world data remains a challenging task. Numerous conventional methods appeal to heuristics without any guarantees of robustness. While probably approximately correct (PAC) semantics offers strong guarantees, learning explicit representations is not tractable, even in propositional logic. However, recent work on so-called “implicit” learning has shown tremendous promise in terms of obtaining polynomial-time results for fragments of first-order logic. In this work, we extend implicit learning in PAC semantics to handle noisy data in the form of intervals and threshold uncertainty in the language of linear arithmetic. We prove that our extended framework preserves the existing polynomial-time complexity guarantees. Furthermore, we provide the first empirical investigation of this hitherto purely theoretical framework. Using benchmark problems, we show that our implicit approach to learning optimal linear programming objective constraints significantly outperforms an explicit approach in practice.
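To make the flavour of implicit learning with interval-valued noise concrete, here is a minimal, hypothetical sketch in Python; it is not the authors' implementation, and the function names, data layout, and acceptance rule are illustrative assumptions. It tests a linear threshold query against noisy samples, where each sample reports an interval per variable: a sample supports the query only if the query holds at every point of the sample's interval box, and the query is accepted when at least a (1 − ε) fraction of samples support it, without ever constructing an explicit learned theory.

```python
# Hypothetical sketch (not the paper's code): PAC-style validation of a linear
# threshold query  c . x >= b  against noisy interval observations.
from typing import List, Tuple

Interval = Tuple[float, float]  # (lower, upper) bound for one variable


def worst_case_value(c: List[float], box: List[Interval]) -> float:
    """Minimum of c . x over the interval box: take the lower endpoint where
    the coefficient is non-negative, the upper endpoint otherwise."""
    return sum(ci * (lo if ci >= 0 else hi) for ci, (lo, hi) in zip(c, box))


def validates(c: List[float], b: float,
              samples: List[List[Interval]], epsilon: float) -> bool:
    """Accept the query c . x >= b if it is entailed by at least a
    (1 - epsilon) fraction of the noisy interval samples."""
    supported = sum(1 for box in samples if worst_case_value(c, box) >= b)
    return supported >= (1.0 - epsilon) * len(samples)


if __name__ == "__main__":
    # Two noisy observations of (x1, x2); each variable reported as an interval.
    samples = [[(1.8, 2.2), (0.9, 1.1)],
               [(2.4, 2.6), (1.4, 1.6)]]
    # Query: x1 + x2 >= 2.5, tolerating up to half the samples failing.
    print(validates([1.0, 1.0], 2.5, samples, epsilon=0.5))
```

Resolving the interval uncertainty coordinate-wise keeps the per-sample check linear in the number of variables, which is in the spirit of the polynomial-time guarantees the abstract refers to; the actual decision procedure in the paper is more general.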
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI-21) |
| Publisher | IJCAI Inc |
| Pages | 1410-1417 |
| Number of pages | 8 |
| ISBN (Electronic) | 978-0-9992411-9-6 |
| Publication status | Published - 19 Aug 2021 |
| Event | 30th International Joint Conference on Artificial Intelligence, Montreal, Canada. Duration: 19 Aug 2021 → 26 Aug 2021. https://ijcai-21.org/ |
Conference

| Conference | 30th International Joint Conference on Artificial Intelligence |
|---|---|
| Abbreviated title | IJCAI 2021 |
| Country/Territory | Canada |
| City | Montreal |
| Period | 19/08/21 → 26/08/21 |
| Internet address | https://ijcai-21.org/ |
Keywords
- Constraints and SAT
- Constraints and Data Mining
- Constraints and Machine Learning