Compositionally Equivariant Representation Learning

Xiao Liu, Pedro Sanchez, Spyridon Thermos, Alison Q. O'Neil, Sotirios A. Tsaftaris

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

Deep learning models often need sufficient supervision (i.e. labelled data) to be trained effectively. By contrast, humans can swiftly learn to identify important anatomy in medical images such as MRI and CT scans with minimal guidance. This recognition capability generalises easily to new images from different medical facilities and to new tasks in different settings. Such rapid and generalisable learning is largely attributed to the compositional structure of image patterns in the human brain, which is not well represented in current medical models. In this paper, we study the use of compositionality for learning more interpretable and generalisable representations for medical image segmentation. Overall, we propose that the underlying generative factors used to generate the medical images satisfy a compositional equivariance property, where each factor is compositional (e.g. corresponds to a structure of the human anatomy) and also equivariant to the task. Hence, a good representation that approximates the ground-truth factors well has to be compositionally equivariant. By modelling the compositional representations with learnable von Mises-Fisher (vMF) kernels, we explore how different design and learning biases can be used to enforce the representations to be more compositionally equivariant under un-, weakly- and semi-supervised settings. Extensive results show that our methods achieve the best performance over several strong baselines on the task of semi-supervised domain-generalised medical image segmentation. Code will be made publicly available upon acceptance at https://github.com/vios-s.
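The abstract's mention of modelling compositional representations with learnable von Mises-Fisher (vMF) kernels can be illustrated with a minimal PyTorch sketch: pixel features are unit-normalised and softly assigned to learnable kernel directions via the vMF likelihood, which is proportional to exp(kappa times the cosine similarity). This is an illustrative sketch rather than the authors' released code (see https://github.com/vios-s); the class name VMFKernels, the fixed concentration kappa, and the default dimensions are assumptions made here for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VMFKernels(nn.Module):
    """Soft assignment of pixel features to learnable vMF kernel directions (sketch)."""

    def __init__(self, num_kernels: int = 12, feat_dim: int = 64, kappa: float = 30.0):
        super().__init__()
        # One learnable mean direction per compositional component (hypothetical defaults).
        self.mu = nn.Parameter(torch.randn(num_kernels, feat_dim))
        self.kappa = kappa  # fixed concentration parameter (an assumption in this sketch)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) feature map from any encoder.
        z = F.normalize(feats, dim=1)       # unit-normalise each pixel's feature vector
        mu = F.normalize(self.mu, dim=1)    # unit-normalise kernel mean directions
        # Cosine similarity between each pixel feature and each kernel direction.
        cos = torch.einsum('bchw,kc->bkhw', z, mu)
        # The vMF log-likelihood is proportional to kappa * cosine; a softmax over
        # kernels gives per-pixel soft assignments to compositional components.
        return torch.softmax(self.kappa * cos, dim=1)   # (B, K, H, W) activation maps
```

In such a design, the resulting per-kernel activation maps could then be fed to a lightweight segmentation head, so that each map roughly corresponds to one anatomical component; how the kernels are supervised differs across the un-, weakly- and semi-supervised settings discussed in the paper.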
Original language: English
Pages (from-to): 2169-2179
Journal: IEEE Transactions on Medical Imaging
Volume: 43
Issue number: 6
Early online date: 26 Jan 2024
Publication status: Published - 1 Jun 2024

Keywords

  • Compositional equivariance
  • Compositionality
  • Data models
  • Domain generalisation
  • Heart
  • Image segmentation
  • Kernel
  • Medical diagnostic imaging
  • Representation learning
  • Semi-supervised
  • Task analysis
  • Training
  • Weakly supervised

Related research output
  • vMFNet: Compositionality Meets Domain-generalised Segmentation

    Liu, X., Thermos, S., Sanchez, P., O’Neil, A. & Tsaftaris, S. A., 17 Sept 2022, (E-pub ahead of print) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022: 25th International Conference, Singapore, September 18–22, 2022, Proceedings, Part VII. Springer, Vol. 13437, p. 704–714 (Lecture Notes in Computer Science; vol. 13437).

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Open Access
