Robust Person Re-identification by Modelling Feature Uncertainty

Tianyuan Yu, Da Li, Yongxin Yang, Timothy Hospedales, Tao Xiang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

We aim to learn deep person re-identification (ReID) models that are robust against noisy training data. Two types of noise are prevalent in practice: (1) label noise caused by human annotator errors and (2) data outliers caused by person detector errors or occlusion. Both types of noise pose serious problems for training ReID models, yet have been largely ignored so far. In this paper, we propose a novel deep network termed DistributionNet for robust ReID. Instead of representing each person image as a feature vector, DistributionNet models it as a Gaussian distribution with its variance representing the uncertainty of the extracted features. A carefully designed loss is formulated in DistributionNet to unevenly allocate uncertainty across training samples. Consequently, noisy samples are assigned large variance/uncertainty, which effectively alleviates their negative impacts on model fitting. Extensive experiments demonstrate that our model is more effective than alternative noise-robust deep models. The source code is available at:
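The core idea above can be sketched in a few lines: each image is embedded as a diagonal Gaussian (mean plus per-dimension variance), a classification loss is computed on a reparameterised sample from that Gaussian, and a regulariser keeps the average uncertainty from collapsing so that extra variance can be allocated to noisy samples. The following numpy sketch is illustrative only, assuming a hinge-on-mean-entropy regulariser with made-up margin and weight values; the paper's exact loss and network are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_entropy(log_var):
    # Entropy of a diagonal Gaussian: 0.5 * (d * log(2*pi*e) + sum(log_var))
    d = log_var.shape[-1]
    return 0.5 * (d * np.log(2 * np.pi * np.e) + log_var.sum(axis=-1))

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def distribution_loss(mu, log_var, W, labels, margin=2.0, lam=0.1):
    """Illustrative loss: ID classification on a sampled feature plus an
    uncertainty regulariser. `margin` and `lam` are assumed values."""
    # Reparameterised sample: z = mu + sigma * eps
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps
    # Cross-entropy on the sampled feature (identity classification)
    probs = softmax(z @ W)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    # Hinge on mean entropy: keeps average uncertainty above a floor, so
    # variance can be spent unevenly on hard or noisy training samples
    reg = max(0.0, margin - gaussian_entropy(log_var).mean())
    return ce + lam * reg

# Toy usage: 4 images, 8-d features, 3 identities
mu = rng.standard_normal((4, 8))
log_var = np.zeros((4, 8))          # unit variance per dimension
W = rng.standard_normal((8, 3))
labels = np.array([0, 1, 2, 0])
loss = distribution_loss(mu, log_var, W, labels)
```

Because the variance enters through sampling, a noisy image can lower its contribution to the classification loss by inflating its own variance, which is the mechanism the abstract describes.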
Original language: English
Title of host publication: 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 10
ISBN (Electronic): 978-1-7281-4803-8
ISBN (Print): 978-1-7281-4804-5
Publication status: Published - 27 Feb 2020
Event: International Conference on Computer Vision 2019 - Seoul, Korea, Republic of
Duration: 27 Oct 2019 – 2 Nov 2019

Publication series

Publisher: Institute of Electrical and Electronics Engineers (IEEE)
ISSN (Print): 1550-5499
ISSN (Electronic): 2380-7504


Conference: International Conference on Computer Vision 2019
Abbreviated title: ICCV 2019
Country/Territory: Korea, Republic of

