Abstract
Situational awareness is achieved naturally by the human senses of sight and hearing in combination. Automatic scene understanding aims to replicate this human ability using microphones and cameras in cooperation. In this paper, audio and video signals are fused and integrated at different levels of semantic abstraction. We detect and track a speaker who is relatively unconstrained, i.e., free to move indoors within an area larger than that covered by comparable reported work, which is usually limited to round-table meetings. The system is relatively simple, consisting of just four microphone pairs and a single camera. Results show that the overall multimodal tracker is more reliable than single-modality systems, tolerating large occlusions and cross-talk. System evaluation is performed on both single- and multi-modality tracking. The performance improvement given by audio-video integration and fusion is quantified in terms of tracking precision and accuracy, as well as speaker diarisation error rate and precision-recall (recognition). Improvements over the closest comparable works are: a 56% reduction in sound source localisation computational cost relative to an audio-only system, an 8% improvement in speaker diarisation error rate over an audio-only speaker recognition unit, and a 36% improvement in the precision-recall metric over an audio-video dominant speaker recognition method.
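The abstract does not name the localisation algorithm; a common technique for microphone-pair sound source localisation is GCC-PHAT time-delay estimation. The sketch below is a minimal illustration of that idea, assuming NumPy; the function `gcc_phat` and its parameters are illustrative, not taken from the paper.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay of arrival (TDOA) between two
    microphone signals with GCC-PHAT cross-correlation.
    (Illustrative sketch, not the paper's implementation.)"""
    n = len(sig) + len(ref)             # zero-pad to avoid circular wrap-around
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15              # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:             # limit the search to physically plausible lags
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))  # centre lag 0
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                   # delay in seconds

# Toy usage: the same signal delayed by 12 samples between two microphones.
fs = 16000
clean = np.random.randn(fs)             # surrogate speech frame
mic1 = clean
mic2 = np.concatenate((np.zeros(12), clean[:-12]))
tau = gcc_phat(mic2, mic1, fs, max_tau=0.001)
print(round(tau * fs))                  # ≈ 12
```

With several microphone pairs, the resulting delay estimates can be intersected (e.g., by least squares) to triangulate a speaker position, which could then be fused with the camera-based track as the abstract describes.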
Original language | English |
---|---|
Pages (from-to) | 137–149 |
Journal | Signal Processing |
Volume | 129 |
Early online date | 4 Jun 2016 |
DOIs | |
Publication status | Published - Dec 2016 |
Projects
- Signal Processing in the Networked Battlespace (Finished)
  Mulgrew, B., Davies, M., Hopgood, J. & Thompson, J.
  1/04/13 → 30/06/18
  Project: Research
Profiles
- James Hopgood
  School of Engineering - Personal Chair of Statistical Signal Processing
  Acoustics and Audio Group
  Person: Academic: Research Active