Do semantic parts emerge in Convolutional Neural Networks?

Abel Gonzalez-Garcia, Davide Modolo, Vittorio Ferrari

Research output: Contribution to journal › Article › peer-review


Semantic object parts can be useful for several visual recognition tasks. Lately, these tasks have been addressed using Convolutional Neural Networks (CNN), achieving outstanding results. In this work we study whether CNNs learn semantic parts in their internal representation. We investigate the responses of convolutional filters and try to associate their stimuli with semantic parts. We perform two extensive quantitative analyses. First, we use ground-truth part bounding-boxes from the PASCAL-Part dataset to determine how many of those semantic parts emerge in the CNN. We explore this emergence for different layers, network depths, and supervision levels. Second, we collect human judgements in order to study what fraction of all filters systematically fire on any semantic part, even if not annotated in PASCAL-Part. Moreover, we explore several connections between discriminative power and semantics. We find out which are the most discriminative filters for object recognition, and analyze whether they respond to semantic parts or to other image patches. We also investigate the other direction: we determine which semantic parts are the most discriminative and whether they correspond to those parts emerging in the network. This enables us to gain an even deeper understanding of the role of semantic parts in the network.
Original language: English
Pages (from-to): 476-494
Number of pages: 18
Journal: International Journal of Computer Vision
Early online date: 17 Oct 2017
Publication status: E-pub ahead of print - 17 Oct 2017

