OpenSceneVLAD: Appearance Invariant, Open Set Scene Classification

William H. B. Smith, Michael Milford, Klaus D. McDonald-Maier, Shoaib Ehsan, Robert B Fisher

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Scene classification is a well-established area of computer vision research that aims to classify a scene image into pre-defined categories such as playground, beach and airport. Recent work has focused on increasing the variety of pre-defined categories, but has so far failed to consider two major challenges: changes in scene appearance due to lighting, and open set classification (the ability to classify unknown scene data as not belonging to the trained classes). Our first contribution, SceneVLAD, fuses scene classification and visual place recognition CNNs for appearance-invariant scene classification that outperforms state-of-the-art scene classification by a mean F1 score of up to 0.1. Our second contribution, OpenSceneVLAD, extends the first to an open set classification scenario using intra-class splitting, achieving a mean increase in F1 scores of up to 0.06 compared to using the state-of-the-art OpenMax layer. We achieve these results on three scene class datasets extracted from large-scale outdoor visual localisation datasets, one of which we collected ourselves.
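The abstract describes fusing descriptors from a scene-classification CNN and a visual place recognition CNN. A minimal sketch of one common way to combine two such embeddings — L2-normalise each and concatenate — is shown below; the function name, the descriptor dimensions, and the fusion scheme itself are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def fuse_descriptors(scene_feat: np.ndarray, place_feat: np.ndarray) -> np.ndarray:
    """Fuse a scene-classification descriptor with a place-recognition
    (e.g. NetVLAD-style) descriptor by L2-normalising each and
    concatenating. Hypothetical sketch, not the paper's exact pipeline."""
    s = scene_feat / np.linalg.norm(scene_feat)   # unit-length scene descriptor
    p = place_feat / np.linalg.norm(place_feat)   # unit-length place descriptor
    fused = np.concatenate([s, p])                # joint descriptor
    return fused / np.linalg.norm(fused)          # re-normalise for cosine comparisons

# Example: a 365-D scene feature fused with a 4096-D VLAD-style descriptor
# (both dimensions are placeholders for illustration)
rng = np.random.default_rng(0)
fused = fuse_descriptors(rng.standard_normal(365), rng.standard_normal(4096))
print(fused.shape)  # (4461,)
```

A classifier (closed or open set) would then operate on the fused vector rather than on either descriptor alone.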
Original language: English
Title of host publication: Proceedings of the International Conference on Robotics and Automation (ICRA 2022)
Number of pages: 7
ISBN (Electronic): 978-1-7281-9681-7
ISBN (Print): 978-1-7281-9682-4
Publication status: Published - 12 Jul 2022
Event: 2022 IEEE International Conference on Robotics and Automation - Philadelphia, United States
Duration: 23 May 2022 - 27 May 2022


Conference: 2022 IEEE International Conference on Robotics and Automation
Abbreviated title: ICRA 2022
Country/Territory: United States


