Automatic annotation of tennis games: An integration of audio, vision, and learning

Fei Yan, Josef Kittler, David Windridge, William Christmas, Krystian Mikolajczyk, Stephen Cox, Qiang Huang

Research output: Contribution to journal › Article › peer-review

Abstract

Fully automatic annotation of tennis games from broadcast video is a task with great potential but enormous challenges. In this paper we describe our approach to this task, which integrates computer vision, machine listening, and machine learning. At the low-level processing stage, we improve upon our previously proposed state-of-the-art tennis ball tracking algorithm and employ audio signal processing techniques to detect key events and construct features for classifying them. At the high-level analysis stage, we model event classification as a sequence labelling problem and investigate four machine learning techniques using simulated event sequences. Finally, we evaluate the proposed approach on three real-world tennis games and discuss the interplay between audio, vision, and learning. To the best of our knowledge, ours is the only system that can annotate tennis games at such a detailed level.
Original language: English
Pages (from-to): 896-903
Number of pages: 8
Journal: Image and Vision Computing
Issue number: 11
Publication status: Published - Nov 2014

Keywords / Materials (for Non-textual outputs)

  • Hidden Markov model
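The abstract describes modelling event classification as a sequence labelling problem, and the keyword above names the hidden Markov model. As an illustration only, the sketch below shows Viterbi decoding of a toy HMM over tennis-like audio events; the states ("rally"/"non_rally"), observations, and all probabilities are invented for the example and are not taken from the paper.

```python
# Hypothetical sketch: Viterbi decoding of a toy HMM over tennis-like
# event observations. States, observations, and probabilities are
# illustrative inventions, not values from the paper.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for `obs`."""
    # V[t][s] = (best probability of reaching state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(V[-1], key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ["rally", "non_rally"]
observations = ["hit", "hit", "silence", "applause"]
start_p = {"rally": 0.6, "non_rally": 0.4}
trans_p = {"rally": {"rally": 0.7, "non_rally": 0.3},
           "non_rally": {"rally": 0.3, "non_rally": 0.7}}
emit_p = {"rally": {"hit": 0.6, "silence": 0.3, "applause": 0.1},
          "non_rally": {"hit": 0.1, "silence": 0.4, "applause": 0.5}}

print(viterbi(observations, states, start_p, trans_p, emit_p))
# → ['rally', 'rally', 'non_rally', 'non_rally']
```

The decoder labels each observation with a hidden state by maximising the joint probability over whole sequences, which is the essence of treating event classification as sequence labelling rather than classifying each event in isolation.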


