Privacy-preserving Facial Emotion Classification with Visual Micro-Doppler Signatures for Hearing Aid Applications

Usman Anwar, Yinhuan Dong, Tughrul Arslan, Kia Dashtipour, Mandar Gogate, Amir Hussain, Qammer H. Abbasi, Muhammad Ali Imran, Tom Russ, Peter Lomax

Research output: Contribution to journal › Article › peer-review

Abstract

Facial expressions are a crucial aspect of nonverbal communication and often reflect underlying emotional states. Researchers frequently use facial emotion detection as a tool to gain insight into cognitive processes, emotional states and cognitive load. Conventional camera-based methods for sensing human emotions are privacy-intrusive, lack adaptability, and are sensitive to variability. These technologies generalize poorly and may not adapt well to variations in ambient lighting, facial landmark localization, facial occlusions and emotion intensity. Radio Frequency (RF) sensing offers promising avenues for improvement with contactless, non-invasive, privacy-preserving and reliable radar-based measurements. The proposed framework uses deep-learning techniques to classify facial micro-Doppler signatures generated from an ultra-wideband (UWB) radar. The method relies on continuous multi-level feature learning from radar time-frequency Doppler measurements. Spatiotemporal facial features are extracted from the radar data to train deep learning models. The proposed system achieves a multiclass classification accuracy of 77% on continuously streamed data covering six classes: anger, disgust, fear, happiness, neutral and sadness. The system can enable next-generation multi-modal hearing aids with emotion-aware listening-effort and cognitive-load detection. This is particularly useful for translating emotion-assisted estimates of cognitive effort into real-time speech enhancement and a personalized auditory experience.
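As a rough illustration of the pipeline the abstract describes (micro-Doppler time-frequency maps derived from UWB radar measurements, classified by a pretrained CNN backbone such as the ResNet50 listed in the keywords), the sketch below is a minimal, hypothetical Python example. The STFT parameters, array shapes and classifier head are assumptions for illustration, not the authors' exact implementation.

```python
# Hypothetical sketch: build a micro-Doppler spectrogram from a UWB radar
# slow-time signal, then classify it into six emotion classes with a
# fine-tuned ResNet50. All parameters below are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft
from torchvision.models import resnet50, ResNet50_Weights

EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness"]

def doppler_spectrogram(slow_time: np.ndarray, fs: float = 200.0) -> np.ndarray:
    """Time-frequency (micro-Doppler) map from one range bin's slow-time signal."""
    _, _, Z = stft(slow_time, fs=fs, nperseg=64, noverlap=48)
    mag = 20 * np.log10(np.abs(Z) + 1e-6)            # log magnitude in dB
    mag = (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)
    return mag.astype(np.float32)                     # (freq, time), scaled to [0, 1]

def build_classifier(num_classes: int = len(EMOTIONS)) -> nn.Module:
    """ImageNet-pretrained ResNet50 with a new six-way output head."""
    model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

if __name__ == "__main__":
    # Synthetic stand-in for one range bin's complex slow-time samples.
    sig = np.random.randn(1024) + 1j * np.random.randn(1024)
    spec = doppler_spectrogram(sig)

    # Replicate the single-channel spectrogram to 3 channels and resize to
    # the 224x224 input expected by the pretrained backbone.
    x = torch.from_numpy(spec)[None, None]            # (1, 1, F, T)
    x = nn.functional.interpolate(x, size=(224, 224), mode="bilinear")
    x = x.repeat(1, 3, 1, 1)

    model = build_classifier().eval()
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    print(dict(zip(EMOTIONS, probs.squeeze().tolist())))
```

In practice the backbone would be fine-tuned on labelled spectrograms (the keywords also list VGG16, VGG19 and SqueezeNet as candidate architectures); the untrained head here only demonstrates the data flow.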
Original language: English
Article number: 8002310
Journal: IEEE Transactions on Instrumentation and Measurement
Volume: 74
Early online date: 6 Mar 2025
DOIs
Publication status: Published - 2025

Keywords

  • Cognitive load
  • ResNet50
  • VGG16
  • VGG19
  • radio frequency (RF) sensing
  • SqueezeNet
  • ultra-wideband (UWB) radar
