Multi-task Gaussian Process Prediction

Edwin V. Bonilla, Kian Ming A. Chai, Christopher K. I. Williams

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on input-dependent features and a “free-form” covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task only depend on its target values and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications: a compiler performance prediction problem and an exam score prediction task. Additionally, we make use of GP approximations and properties of our model in order to provide scalability to large data sets.
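The covariance structure the abstract describes (a shared input covariance combined with a "free-form" task covariance) can be sketched numerically as a Kronecker product over a block design. This is a minimal illustrative sketch, not the paper's code; the function names (`rbf`, `multitask_cov`) and the toy task matrix are assumptions introduced here.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    """Squared-exponential covariance on inputs, shared across tasks."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def multitask_cov(X, Kf, lengthscale=1.0):
    """Covariance over all (task, input) pairs under a block design.

    cov(f_l(x), f_k(x')) = Kf[l, k] * kx(x, x'), so the full matrix is
    the Kronecker product of the task covariance Kf with the input
    covariance Kx.
    """
    Kx = rbf(X, X, lengthscale)
    return np.kron(Kf, Kx)

# Toy block design: every task is observed at every input location.
X = np.linspace(0.0, 1.0, 5)[:, None]
Kf = np.array([[1.0, 0.8],
               [0.8, 1.0]])  # two strongly correlated tasks (illustrative)
K = multitask_cov(X, Kf)
print(K.shape)  # (M*N, M*N) for M=2 tasks and N=5 inputs
```

Because `K` factors as a Kronecker product, its Cholesky and inverse can be computed from the small factors `Kf` and `Kx`, which is one route to the scalability the abstract mentions.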
Original language: English
Title of host publication: Advances in Neural Information Processing Systems 20
Publisher: NIPS Foundation
Pages: 153-160
Number of pages: 8
Publication status: Published - 2008
