Image modeling with position-encoding dynamic trees

AJ Storkey*, CKI Williams

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper describes the Position-Encoding Dynamic Tree (PEDT). The PEDT is a probabilistic model for images that improves on the dynamic tree by allowing the positions of objects to play a part in the model. This increases the flexibility of the model over the dynamic tree and allows the positions of objects to be located and manipulated. This paper motivates and defines this form of probabilistic model using the belief network formalism. A structured variational approach for inference and learning in the PEDT is developed, and the resulting variational updates are obtained, along with additional implementation considerations that ensure the computational cost scales linearly in the number of nodes of the belief network. The PEDT model is demonstrated and compared with the dynamic tree and fixed tree. The structured variational learning method is compared with mean field approaches.
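The abstract mentions mean field approaches as a baseline for the structured variational method. As a rough, hypothetical illustration of the basic idea behind such approaches (not the paper's structured PEDT updates, and for a toy two-variable joint rather than a tree-structured belief network), a mean-field fixed-point iteration might look like:

```python
import math

# Illustrative sketch only: mean-field variational inference on a toy joint
# p(x, y) over two binary variables. Each factor is updated via the standard
# fixed point q_x(x) ∝ exp(E_{q_y}[log p(x, y)]), and symmetrically for q_y.
# The joint below is an arbitrary example, not data from the paper.

p = [[0.30, 0.20],   # p(x=0, y=0), p(x=0, y=1)
     [0.10, 0.40]]   # p(x=1, y=0), p(x=1, y=1)

def mean_field(p, iters=100):
    qx, qy = [0.5, 0.5], [0.5, 0.5]          # initial factorized distributions
    logp = [[math.log(v) for v in row] for row in p]
    for _ in range(iters):
        # Update q_x holding q_y fixed, then renormalize.
        qx = [math.exp(sum(qy[y] * logp[x][y] for y in range(2)))
              for x in range(2)]
        s = sum(qx)
        qx = [v / s for v in qx]
        # Update q_y holding q_x fixed, then renormalize.
        qy = [math.exp(sum(qx[x] * logp[x][y] for x in range(2)))
              for y in range(2)]
        s = sum(qy)
        qy = [v / s for v in qy]
    return qx, qy

qx, qy = mean_field(p)
```

Each update is closed-form and costs time proportional to the number of variable states, which is the same flavor of per-node cost that lets the paper's updates scale linearly in the number of belief-network nodes.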

Original language: English
Pages (from-to): 859-871
Number of pages: 13
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 7
Publication status: Published - Jul 2003

Keywords

  • dynamic trees
  • variational inference
  • belief networks
  • Bayesian networks
  • image segmentation
  • structured image models
  • tree structured networks

