Human-like Function Learning and Transfer

Pablo Leon Villagra, Christopher Lucas

Research output: Contribution to conference › Abstract › peer-review

Abstract

Function learning (or regression) problems are ubiquitous in human experience and machine learning. Humans can generalise in diverse ways that respect the abstract structure of a problem and can use knowledge in one context to inform decisions in another. Knowledge transfer is common in applied statistics, as when a practitioner recognises that certain kinds of regression problems involve particular parametric relationships. It is also at the heart of scientific progress, e.g., when analogies lead to new hypotheses and discoveries [5]. In some situations, data are plentiful and transfer of knowledge is relatively unimportant, but when data are sparse, having appropriate prior knowledge is essential.
In this work, we explore human-like generalisation in regression problems, using psychological experiments and probabilistic models. Specifically:

- We present evidence that humans can learn and generalise from relationships in ways that reflect the compositional structure of those relationships.
- These learned relationships are re-usable: they shape subsequent inferences and lead to structured extrapolations in the face of extremely sparse data.
- We describe a model that explains qualitative features of human judgements in cases where previous models fail, and re-uses compositional representations to extrapolate from sparse data (see the illustrative sketch below).
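
As an illustration of the kind of compositional account the last point refers to, here is a minimal Python sketch of function learning with a compositional Gaussian-process kernel (a sum of a linear and a periodic component). The abstract does not specify the model, so the kernels, their composition, and the toy data below are purely illustrative assumptions, not the authors' implementation.

# Illustrative sketch only (not the authors' model): compositional function
# learning formalised as Gaussian-process regression with a kernel built by
# composing simple base kernels, so that structure inferred on one task can
# serve as a prior for extrapolation from very sparse data on another.
import numpy as np

def linear_kernel(x1, x2, var=1.0):
    # Linear base kernel: captures a global trend.
    return var * x1[:, None] * x2[None, :]

def periodic_kernel(x1, x2, period=2.0, length=1.0, var=1.0):
    # Periodic base kernel: captures repeating structure.
    d = np.abs(x1[:, None] - x2[None, :])
    return var * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / length ** 2)

def composed_kernel(x1, x2):
    # Hypothetical composition (linear + periodic); in a compositional account,
    # this structure could be inferred once and re-used in a related task.
    return linear_kernel(x1, x2) + periodic_kernel(x1, x2)

def gp_posterior_mean(x_train, y_train, x_test, kernel, noise=1e-2):
    # Standard Gaussian-process posterior mean, used here to extrapolate.
    K = kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = kernel(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

# Extremely sparse data: three observations, extrapolated well beyond them.
x_obs = np.array([0.0, 0.5, 1.5])
y_obs = x_obs + np.sin(np.pi * x_obs)        # toy linear trend + periodic part
x_new = np.linspace(0.0, 6.0, 7)
print(gp_posterior_mean(x_obs, y_obs, x_new, composed_kernel))

With only three observations, the composed kernel lets the posterior mean continue both the trend and the oscillation beyond the observed range, which is the kind of structured extrapolation from sparse data described above.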
Original language: English
Number of pages: 3
Publication status: Accepted/In press - 10 Aug 2016
Event: Machine Intelligence 20: Human-like Computing, United Kingdom
Duration: 23 Oct 2016 - 25 Oct 2016

Conference

Conference: Machine Intelligence 20: Human-like Computing
Country/Territory: United Kingdom
Period: 23/10/16 - 25/10/16
