Capturing Temporal Information in a Single Frame: Channel Sampling Strategies for Action Recognition

Kiyoon Kim, Shreyank Narayana Gowda, Oisin Mac Aodha, Laura Sevilla-Lara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We address the problem of capturing temporal information for video classification in 2D networks, without increasing computational cost. Existing approaches focus on modifying the architecture of 2D networks (e.g. adding filters in the temporal dimension to turn them into 3D networks, or using optical flow), which increases computational cost. Instead, we propose a novel sampling strategy in which we re-order the channels of the input video to capture short-term frame-to-frame changes. We observe that, without bells and whistles, the proposed sampling strategy improves performance on multiple architectures (e.g. TSN, TRN, and TSM) and datasets (CATER, Something-Something-V1 and V2) by up to 24% over the baseline of using the standard video input. In addition, our sampling strategies do not require training from scratch and do not increase the computational cost of training and testing. Given the generality of the results and the flexibility of the approach, we hope this can be widely useful to the video understanding community.
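The abstract does not spell out the exact re-ordering rule, so the sketch below is only an illustration of the general idea, not the paper's method: the function name channel_sample and the per-channel frame offsets are assumptions. Each output image takes its three colour channels from neighbouring input frames, so a single image fed to a 2D network already encodes short-term motion, while the tensor shape, and hence the compute cost, is unchanged.

import numpy as np

def channel_sample(video: np.ndarray, offsets=(0, 1, 2)) -> np.ndarray:
    """Illustrative channel re-ordering for a video of shape (T, H, W, 3).

    For output frame t, channel c is copied from input frame t + offsets[c]
    (clamped to the clip boundary), so one RGB image mixes colour channels
    from neighbouring frames. Output shape equals input shape, so any 2D
    network can consume it without architectural changes.
    """
    T = video.shape[0]
    out = np.empty_like(video)
    for t in range(T):
        for c, off in enumerate(offsets):
            src = min(max(t + off, 0), T - 1)  # clamp at clip boundaries
            out[t, ..., c] = video[src, ..., c]
    return out

# Example: an 8-frame 224x224 RGB clip keeps its shape after sampling.
clip = np.random.rand(8, 224, 224, 3).astype(np.float32)
assert channel_sample(clip).shape == clip.shape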
Original language: English
Title of host publication: Proceedings of The 33rd British Machine Vision Conference (BMVC 2022)
Publisher: BMVA Press
Number of pages: 9
Publication status: Published - 25 Nov 2022
Event: The 33rd British Machine Vision Conference, 2022 - London, United Kingdom
Duration: 21 Nov 2022 – 24 Nov 2022
Conference number: 33
https://www.bmvc2022.org/

Conference

Conference: The 33rd British Machine Vision Conference, 2022
Abbreviated title: BMVC 2022
Country/Territory: United Kingdom
City: London
Period: 21/11/22 – 24/11/22
Internet address: https://www.bmvc2022.org/
