Abstract
Function learning (or regression) problems are ubiquitous in human experience and machine learning. Humans can generalise in diverse ways that respect
the abstract structure of a problem and can use knowledge in one context to
inform decisions in another. Knowledge transfer is common in applied statistics,
as when a practitioner recognises that certain kinds of regression problems involve particular parametric relationships. It is also at the heart of scientific progress, e.g.,
when analogies lead to new hypotheses and discoveries [5]. In some situations,
data are plentiful and transfer of knowledge is relatively unimportant, but when
data are sparse, having appropriate prior knowledge is essential.
In this work, we explore human-like generalisation in regression problems,
using psychological experiments and probabilistic models. Specifically:
- We present evidence that humans can learn and generalise from relationships
in ways that reflect the compositional structure of those relationships.
- These learned relationships are re-usable: they shape subsequent inferences
and lead to structured extrapolations in the face of extremely sparse data.
- We describe a model that explains qualitative features of human judgements
in cases where previous models fail, and re-uses compositional representations
to extrapolate from sparse data.
| Original language | English |
| --- | --- |
| Number of pages | 3 |
| Publication status | Accepted/In press - 10 Aug 2016 |
| Event | Machine Intelligence 20: Human-like Computing, United Kingdom. Duration: 23 Oct 2016 → 25 Oct 2016 |
Conference

| Conference | Machine Intelligence 20: Human-like Computing |
| --- | --- |
| Country/Territory | United Kingdom |
| Period | 23/10/16 → 25/10/16 |