Edinburgh Research Explorer

Deep Multi-Class Segmentation Without Ground-Truth Labels

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Open Access permissions: Open

Original language: English
Title of host publication: Medical Imaging with Deep Learning
Subtitle of host publication: Amsterdam
Place of Publication: Amsterdam
Publication status: Published - Jul 2018

Abstract

In this paper we demonstrate that, through the use of adversarial training and additional unsupervised costs, it is possible to train a multi-class anatomical segmentation algorithm without any ground-truth labels for the data set to be segmented. Specifically, using labels from a different data set of the same anatomy (although potentially in a different modality), we train a model through adversarial learning to synthesise realistic multi-channel label masks from input cardiac images in both CT and MRI. However, as is to be expected, generating realistic mask images is not, on its own, sufficient for the segmentation task: the model can use the input image as a source of noise and synthesise highly realistic segmentation masks that do not necessarily correspond spatially to the input. To overcome this, we introduce additional unsupervised costs, and demonstrate that these provide sufficient further guidance to produce good segmentation results. We test our proposed method on both CT and MR data from the multi-modal whole heart segmentation challenge (MM-WHS) [1], and show the effect of our unsupervised costs on improving the segmentation results, in comparison to a variant without them.
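To make the training setup described in the abstract concrete, below is a minimal PyTorch sketch of adversarial mask synthesis combined with an auxiliary unsupervised cost. Everything here is an illustrative assumption rather than the paper's actual method: the toy network architectures, the number of classes K, the weighting lam, and in particular the choice of an image-reconstruction term as the unsupervised cost (one plausible way to force the mask to correspond spatially to the input) are all placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 4  # number of anatomical label channels (hypothetical)

class Segmenter(nn.Module):
    # Maps a 1-channel image to K soft label-mask channels.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, K, 3, padding=1))
    def forward(self, x):
        return torch.softmax(self.net(x), dim=1)

class Discriminator(nn.Module):
    # Scores mask realism; trained on real masks from a *different*
    # labelled data set of the same anatomy.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(K, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
    def forward(self, m):
        return self.net(m)

class Reconstructor(nn.Module):
    # Decodes the input image back from the predicted mask; its error
    # serves as the unsupervised cost tying masks to the input (assumed form).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(K, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, m):
        return self.net(m)

seg, disc, rec = Segmenter(), Discriminator(), Reconstructor()
opt_g = torch.optim.Adam(list(seg.parameters()) + list(rec.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(image, real_mask, lam=10.0):
    """image: unlabelled scan (B,1,H,W); real_mask: one-hot mask (B,K,H,W)
    drawn from the separate labelled data set."""
    fake_mask = seg(image)

    # Discriminator step: real masks vs. synthesised masks.
    d_real = disc(real_mask)
    d_fake = disc(fake_mask.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: fool the discriminator AND reconstruct the input,
    # so realistic masks alone are not enough; they must align spatially.
    d_fake = disc(fake_mask)
    loss_adv = bce(d_fake, torch.ones_like(d_fake))
    loss_rec = F.l1_loss(rec(fake_mask), image)
    loss_g = loss_adv + lam * loss_rec
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Toy usage with random tensors standing in for CT/MR images and masks:
img = torch.rand(2, 1, 64, 64)
mask = F.one_hot(torch.randint(0, K, (2, 64, 64)), K).permute(0, 3, 1, 2).float()
print(train_step(img, mask))
```

Without the reconstruction term (lam = 0), the sketch degenerates to pure adversarial mask synthesis, which, as the abstract notes, can produce realistic masks that ignore the input; the auxiliary cost is what supplies the missing spatial guidance.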
