Root Gap Correction with a Deep Inpainting Model

Hao Chen, Mario Giuffrida, Peter Doerner, Sotirios Tsaftaris

Research output: Contribution to conference › Paper › peer-review

Abstract / Description of output

Imaging roots of growing plants in a non-invasive and affordable fashion has been a long-standing problem in image-assisted plant breeding and phenotyping. One of the most affordable and widespread approaches is the use of mesocosms, where plants are grown in soil against a glass surface that permits root visualization and imaging. However, due to the soil and the fact that the image is a 2D projection of a 3D root, parts of the root are occluded. As a result, even under perfect root segmentation, the resulting images contain several gaps that may hinder the extraction of fine-grained root system architecture traits.

We propose an effective deep neural network to recover gaps between disconnected root segments. We train a fully supervised encoder-decoder deep CNN that, given an image containing gaps as input, generates an inpainted version that recovers the missing parts. Since ground truth is lacking in real data, we train and evaluate our approach on synthetic root images that we artificially perturb by introducing gaps. We show that our network can reduce root gaps in both dicot and monocot cases. We also show promising exemplary results on real data from chickpea root architectures.
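The synthetic-perturbation step described above can be sketched as follows. This is a minimal illustration with NumPy, assuming binary root masks and simple square occlusions; the function name and gap parameters are hypothetical and not the authors' exact procedure.

```python
import numpy as np

def perturb_with_gaps(mask, n_gaps=5, gap_size=4, rng=None):
    """Zero out random square patches of a binary root mask to
    simulate soil occlusions (a sketch, not the paper's exact scheme)."""
    rng = np.random.default_rng(rng)
    out = mask.copy()
    h, w = out.shape
    for _ in range(n_gaps):
        y = rng.integers(0, max(1, h - gap_size))
        x = rng.integers(0, max(1, w - gap_size))
        out[y:y + gap_size, x:x + gap_size] = 0
    return out

# Example: a synthetic vertical "root" strip, then perturbed with gaps.
mask = np.zeros((32, 32), dtype=np.uint8)
mask[:, 15:17] = 1
gapped = perturb_with_gaps(mask, n_gaps=3, gap_size=4, rng=0)
```

Training pairs would then consist of `gapped` as the network input and `mask` as the inpainting target.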
Original language: English
Publication status: Published - 6 Sept 2018
Event: 29th British Machine Vision Conference (BMVC) - Northumbria University, Newcastle upon Tyne, United Kingdom
Duration: 3 Sept 2018 - 6 Sept 2018


Conference: 29th British Machine Vision Conference (BMVC)
Abbreviated title: BMVC 2018
Country/Territory: United Kingdom
City: Newcastle upon Tyne


