Edinburgh Research Explorer

COCO-Stuff: Thing and Stuff Classes in Context

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Open Access permissions: Open

Documents

https://ieeexplore.ieee.org/document/8578230
Original language: English
Title of host publication: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Number of pages: 10
ISBN (Electronic): 978-1-5386-6420-9
Publication status: Published - 17 Dec 2018
Event: Computer Vision and Pattern Recognition 2018 - Salt Lake City, United States
Duration: 18 Jun 2018 - 22 Jun 2018
http://cvpr2018.thecvf.com/

Publication series

ISSN (Electronic): 2575-7075

Conference

Conference: Computer Vision and Pattern Recognition 2018
Abbreviated title: CVPR 2018
Country: United States
City: Salt Lake City
Period: 18/06/18 - 22/06/18
Internet address: http://cvpr2018.thecvf.com/

Abstract

Semantic classes can be either things (objects with a well-defined shape, e.g. car, person) or stuff (amorphous background regions, e.g. grass, sky). While many classification and detection works focus on thing classes, less attention has been given to stuff classes. Nonetheless, stuff classes are important, as they allow us to explain important aspects of an image, including (1) scene type; (2) which thing classes are likely to be present and their location (through contextual reasoning); and (3) physical attributes, material types and geometric properties of the scene. To understand stuff and things in context we introduce COCO-Stuff [1], which augments all 164K images of the COCO 2017 dataset with pixel-wise annotations for 91 stuff classes. We introduce an efficient stuff annotation protocol based on superpixels, which leverages the original thing annotations. We quantify the speed versus quality trade-off of our protocol and explore the relation between annotation time and boundary complexity. Furthermore, we use COCO-Stuff to analyze: (a) the importance of stuff and thing classes in terms of their surface cover and how frequently they are mentioned in image captions; (b) the spatial relations between stuff and things, highlighting the rich contextual relations that make our dataset unique; (c) the performance of a modern semantic segmentation method on stuff and thing classes, and whether stuff is easier to segment than things.

[1] http://calvin.inf.ed.ac.uk/datasets/coco-stuff
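The surface-cover analysis mentioned in the abstract can be sketched as below: given a pixel-wise label map, compute the fraction of the image each class covers. This is a minimal illustration with a synthetic array, not the paper's actual pipeline; the class indices and the thing/stuff assignment in the toy example are hypothetical, not the official COCO-Stuff label ids.

```python
import numpy as np

def class_coverage(label_map):
    """Return {class_id: fraction of pixels} for a 2-D integer label map."""
    labels, counts = np.unique(label_map, return_counts=True)
    total = label_map.size
    return {int(lbl): cnt / total for lbl, cnt in zip(labels, counts)}

# Toy 4x4 label map: class 0 (e.g. a stuff class like "sky") and
# class 1 (e.g. a thing class like "car"); ids are illustrative only.
toy = np.array([[0, 0, 0, 0],
                [0, 0, 0, 0],
                [0, 1, 1, 0],
                [0, 1, 1, 0]])
cov = class_coverage(toy)
print(cov)  # {0: 0.75, 1: 0.25}
```

Averaging such per-image fractions over a dataset, grouped into stuff and thing classes, yields the kind of surface-cover statistic the abstract refers to.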



ID: 64278720