Toward out-of-distribution generalization through inductive biases

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

State-of-the-art Machine Learning systems can process and analyze large amounts of data, but they still struggle to generalize to out-of-distribution scenarios. In Judea Pearl's words, "Data are profoundly dumb" (Pearl & Mackenzie 2018); possessing a model of the world, a representation through which to frame reality, is a necessary requirement for discriminating between relevant and irrelevant information and for dealing with unknown scenarios. The aim of this paper is to address the crucial challenge of out-of-distribution generalization in automated systems by developing an understanding of how human agents build models to act in a dynamic environment. The steps needed to reach this goal are described by Pearl through the metaphor of the Ladder of Causation. In this paper, I argue for the relevance of inductive biases in enabling an agent to reach the second rung of the Ladder: that of actively interacting with the environment.
Original language: English
Title of host publication: Philosophy and Theory of Artificial Intelligence 2021
Editors: Vincent C. Müller
Publisher: Springer
Pages: 57-66
Number of pages: 10
ISBN (Print): 9783031091520
DOIs
Publication status: Published - 15 Nov 2022

Publication series

Name: Studies in Applied Philosophy, Epistemology and Rational Ethics
Publisher: Springer
ISSN (Print): 2192-6255
ISSN (Electronic): 2192-6263

Keywords

  • inductive biases
  • generalisation
  • decision making
  • causality
  • hybrid AI
