Improving Object Detector Algorithms using Uncertainty and Reliability

Calum Blair, John Thompson, Neil Robertson

Research output: Contribution to journal › Article

Abstract

Object detection and classification algorithms are normally evaluated
on the basis of accuracy: how many misclassifications does each produce
on a large dataset? The classification confidence scores generated by
algorithms are somewhat arbitrary and difficult to compare or depend on.
This reduces trust by end users or by later-stage decision-making or
reasoning algorithms, which must decide which detections are likely to
be incorrect and allocate resources on that basis. Here we assess
classifier reliability and uncertainty (entropy) in three scenarios:
object detection and categorisation in visual, synthetic aperture sonar
and synthetic aperture radar (SAS, SAR) imagery. Techniques for obtaining
probabilistic classifications from score-based decision algorithms such
as support vector machines (SVMs) are compared with classifiers which
produce probabilistic results as standard (Gaussian process classifiers,
GPCs). AdaBoost-based classifiers are shown to be both accurate and
reliable for the vision modality. In the SAS and SAR cases, where these
methods perform poorly, SVM-based classifiers outperform other options,
including GPCs.
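The two ingredients the abstract combines — mapping raw SVM decision scores to probabilities, then measuring prediction uncertainty as entropy — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the sigmoid here follows the standard Platt-scaling form, and the parameter values `A` and `B` are illustrative placeholders (in practice they would be fitted on held-out data).

```python
import math

def platt_probability(score, A=-1.0, B=0.0):
    """Map a raw SVM decision score to a probability via a Platt-style
    sigmoid P(y=1 | score) = 1 / (1 + exp(A*score + B)).
    A and B are illustrative defaults, not fitted values."""
    return 1.0 / (1.0 + math.exp(A * score + B))

def binary_entropy(p):
    """Shannon entropy (in bits) of a binary prediction probability:
    high entropy means an uncertain prediction, low entropy a
    confident one."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

# A detection far from the decision boundary vs. a borderline one.
p_confident = platt_probability(3.0)   # large positive margin
p_borderline = platt_probability(0.1)  # score near the boundary

print(binary_entropy(p_confident))   # low entropy
print(binary_entropy(p_borderline))  # close to 1 bit
```

Downstream reasoning can then rank detections by entropy rather than by raw score, which is the kind of comparison across classifiers (SVM, GPC, AdaBoost) that the paper evaluates.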
Original language: English
Number of pages: 29
Journal: Image and Vision Computing
Publication status: Unpublished - 10 Jun 2017
