When and Where to Transfer for Bayes Net Parameter Learning

Yun Zhou, Timothy Hospedales, Norman Fenton

Research output: Contribution to journal › Article › peer-review

Abstract

Learning Bayesian networks from scarce data is a major challenge in real-world applications where data are hard to acquire. Transfer learning techniques attempt to address this by leveraging data from different but related problems. For example, it may be possible to exploit medical diagnosis data from a different country. A challenge with this approach is that relatedness to the target is heterogeneous, both within and across source networks. In this paper we introduce the Bayesian network parameter transfer learning (BNPTL) algorithm to reason about both network and fragment (sub-graph) relatedness. BNPTL addresses (i) how to find the most relevant source network and network fragments to transfer, and (ii) how to fuse source and target parameters in a robust way. In addition to improving target task performance, explicit reasoning allows us to diagnose network and fragment relatedness across Bayesian networks, even when latent variables are present or the state spaces are heterogeneous. This is important in applications where relatedness itself is an output of interest. Experimental results demonstrate the superiority of BNPTL over single-task learning and other state-of-the-art parameter transfer methods across a range of data scarcity and source relevance levels. Moreover, we demonstrate successful application to real-world medical case studies.
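To make the general idea of fusing source and target parameters concrete, the sketch below shows a simple weighted combination of a conditional probability table (CPT) learned from scarce target data with a CPT taken from a related source network. This is only an illustration of the generic fusion setting under assumed names and an assumed weighting rule; it is not the BNPTL algorithm described in the paper.

```python
import numpy as np

def mle_cpt(counts, alpha=1.0):
    """Estimate a CPT from observed counts, with a small Dirichlet
    prior (alpha) added to every cell to avoid zero probabilities."""
    counts = np.asarray(counts, dtype=float) + alpha
    return counts / counts.sum(axis=-1, keepdims=True)

def fuse_cpts(target_cpt, source_cpt, target_n, source_weight=0.5):
    """Linear-pool fusion of a scarce-data target CPT with a related
    source CPT. The source contribution decays as the target sample
    size grows. This weighting rule is an illustrative assumption,
    not the fusion scheme used by BNPTL."""
    w = source_weight / (1.0 + target_n / 10.0)
    return (1.0 - w) * target_cpt + w * source_cpt

# Example: CPT for a binary child node with one binary parent.
target_counts = np.array([[3, 1],    # parent = 0: counts of child = 0/1
                          [1, 2]])   # parent = 1
source_cpt = np.array([[0.8, 0.2],
                       [0.3, 0.7]])

target_cpt = mle_cpt(target_counts)
fused = fuse_cpts(target_cpt, source_cpt, target_n=target_counts.sum())
print(fused)  # rows remain valid probability distributions
```

Because the fusion is a convex combination of two row-normalised tables, each row of the fused CPT still sums to one; the open question that BNPTL addresses is which source network and which fragments to draw the source CPT from, and how heavily to weight them.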
Original language: English
Pages (from-to): 361-373
Number of pages: 13
Journal: Expert Systems with Applications
Volume: 55
DOIs
Publication status: Published - 15 Aug 2016
