Superspace extrapolation reveals inductive biases in function learning

Christopher G. Lucas, Douglas Sterling, Charles Kemp

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We introduce a new approach for exploring how humans learn and represent functional relationships based on limited observations. We focus on a problem called superspace extrapolation, where learners observe training examples drawn from an n-dimensional space and must extrapolate to an n+1-dimensional superspace of the training examples. Many existing psychological models predict that superspace extrapolation should be fundamentally underdetermined, but we show that humans are able to extrapolate both linear and non-linear functions under these conditions. We also show that a Bayesian model can account for our results given a hypothesis space that includes families of simple functional relationships.
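The abstract describes a Bayesian learner that compares families of simple functional relationships against observed data. As a toy illustration of that idea (not the paper's actual model), the sketch below scores a linear and a quadratic family on 1-D observations using maximum-likelihood fits under Gaussian noise and normalises the scores into a posterior over families; the specific families, noise level, and all names are illustrative assumptions.

```python
import numpy as np

# Toy sketch: posterior over two hypothesised function families
# (linear vs. quadratic). Uses max-likelihood fits within each family
# rather than a full marginal likelihood -- an illustrative simplification.

rng = np.random.default_rng(0)

# 1-D "training" observations drawn from a quadratic relationship.
x = np.linspace(-1, 1, 20)
y = 2.0 * x**2 + rng.normal(scale=0.05, size=x.size)

def log_likelihood(design, y, sigma=0.05):
    """Least-squares fit in the given basis, scored under Gaussian noise."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ coef
    n = y.size
    return (-0.5 * np.sum(resid**2) / sigma**2
            - n * np.log(sigma * np.sqrt(2 * np.pi)))

# Each hypothesis family is expressed as a design matrix of basis functions.
linear = np.column_stack([np.ones_like(x), x])
quadratic = np.column_stack([np.ones_like(x), x, x**2])

log_ls = np.array([log_likelihood(linear, y), log_likelihood(quadratic, y)])
# Equal priors over families; posterior by normalising the likelihoods.
post = np.exp(log_ls - log_ls.max())
post /= post.sum()
print(dict(zip(["linear", "quadratic"], post.round(3))))
```

With data generated by a quadratic function, the posterior concentrates on the quadratic family, mirroring the paper's point that a learner with a structured hypothesis space can extrapolate confidently from limited observations.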
Original language: English
Title of host publication: Proceedings of the 34th Annual Meeting of the Cognitive Science Society, CogSci 2012, Sapporo, Japan, August 1-4, 2012
Pages: 713-718
Number of pages: 6
Publication status: Published - 2012
