Context-Based Object Recognition: Indoor Versus Outdoor Environments

Ali Alameer, Patrick Degenaar, Kianoush Nazarpour

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Object recognition is a challenging problem in high-level vision. Models that perform well in the outdoor domain perform poorly in the indoor domain, and vice versa. This is due to dramatic discrepancies in the global properties of each environment, for instance, backgrounds and lighting conditions. Here, we show that inferring the environment before or during the recognition process can dramatically enhance recognition performance. We used a combination of deep and shallow models for object and scene recognition, respectively. We also used three novel topologies that provide a trade-off between classification accuracy and decision sensitivity. We achieved a classification accuracy of 97.91%, outperforming a single GoogLeNet by 13%. In another experiment, we achieved an accuracy of 95% in categorising indoor and outdoor scenes by inference.
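The pipeline described in the abstract — infer the environment first, then hand the image to a recogniser specialised for that environment — can be sketched as a simple dispatch step. This is a minimal illustrative sketch, not the authors' implementation: the classifier names, the toy image representation, and the label values are all assumptions for demonstration (the paper itself combines a shallow scene model with deep object models such as GoogLeNet).

```python
def recognise(image, scene_classifier, indoor_model, outdoor_model):
    """Context-first recognition: infer the environment, then dispatch
    the image to the object model specialised for that environment."""
    environment = scene_classifier(image)  # 'indoor' or 'outdoor'
    model = indoor_model if environment == 'indoor' else outdoor_model
    return environment, model(image)


# Toy stand-ins for the real classifiers (hypothetical; for illustration only).
scene = lambda img: 'indoor' if img['has_ceiling'] else 'outdoor'
indoor = lambda img: 'chair'   # placeholder indoor-domain recogniser
outdoor = lambda img: 'car'    # placeholder outdoor-domain recogniser

env, label = recognise({'has_ceiling': True}, scene, indoor, outdoor)
```

The gating itself is what matters here: once the environment is known, each object model only ever sees images from the domain it was trained on, which is the source of the reported accuracy gain.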
Original language: English
Title of host publication: Advances in Computer Vision
Editors: Kohei Arai, Supriya Kapoor
Place of Publication: Cham
Publisher: Springer International Publishing AG
Pages: 473-490
Number of pages: 18
ISBN (Electronic): 978-3-030-17798-0
ISBN (Print): 978-3-030-17797-3
DOIs
Publication status: Published - 24 Apr 2019
Event: Computer Vision Conference 2019 - Las Vegas, United States
Duration: 25 Apr 2019 - 26 Apr 2019
https://saiconference.com/Conferences/CVC2019

Publication series

Name: Advances in Intelligent Systems and Computing
Publisher: Springer, Cham
Volume: 944
ISSN (Print): 2194-5357
ISSN (Electronic): 2194-5365

Conference

Conference: Computer Vision Conference 2019
Abbreviated title: CVC 2019
Country/Territory: United States
City: Las Vegas
Period: 25/04/19 - 26/04/19
Internet address: https://saiconference.com/Conferences/CVC2019
