Abstract
Facial expressions are a crucial aspect of nonverbal communication and often reflect underlying emotional states. Researchers frequently use facial emotion detection to gain insights into cognitive processes, emotional states, and cognitive load. Conventional camera-based methods for sensing human emotions are privacy-intrusive, lack adaptability, and are sensitive to variability: they generalize poorly and may not cope well with variations in ambient lighting, facial landmark localization, facial occlusions, and emotion intensity. Radio frequency (RF) sensing offers a promising alternative, providing contactless, non-invasive, privacy-preserving, and reliable radar-based measurements. The proposed framework uses deep-learning techniques to classify facial micro-Doppler signatures generated from an ultra-wideband (UWB) radar. The method relies on continuous multi-level feature learning from radar time-frequency Doppler measurements: spatiotemporal facial features are extracted from the radar data and used to train deep-learning models. The proposed system achieves a multiclass classification accuracy of 77% on continuously streamed data covering the basic emotions of anger, disgust, fear, happiness, neutral, and sadness. The system can enable next-generation multi-modal hearing aids with emotion-aware listening-effort and cognitive-load detection, which is particularly useful for translating emotion-assisted cognitive effort into real-time speech enhancement and a personalized auditory experience.
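The abstract omits implementation details, but the pipeline it describes (time-frequency Doppler maps fed to pretrained CNNs such as the ResNet50, VGG, and SqueezeNet models named in the keywords) can be sketched. The snippet below is a minimal illustration and not the authors' code: the function names, the STFT parameters, the `fs_slow` pulse-repetition rate, and the six-class output head are all assumptions made for the example.

```python
# Minimal sketch: UWB radar slow-time returns -> micro-Doppler spectrogram
# (STFT) -> fine-tunable ResNet50 classifier. All parameter choices here
# (nperseg, noverlap, fs_slow, input size) are illustrative assumptions.

import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft
from torchvision.models import resnet50, ResNet50_Weights

EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness"]

def micro_doppler_spectrogram(radar_iq: np.ndarray, fs_slow: float) -> np.ndarray:
    """STFT along slow time of one range bin -> normalized log-magnitude Doppler map."""
    _, _, Z = stft(radar_iq, fs=fs_slow, nperseg=128, noverlap=96,
                   return_onesided=False)  # two-sided: complex I/Q input
    spec = 20 * np.log10(np.abs(np.fft.fftshift(Z, axes=0)) + 1e-6)
    # Normalize to [0, 1] so the map can feed an ImageNet-pretrained CNN.
    spec = (spec - spec.min()) / (spec.max() - spec.min() + 1e-12)
    return spec.astype(np.float32)

def build_classifier(num_classes: int = len(EMOTIONS)) -> nn.Module:
    """ResNet50 pretrained on ImageNet, with a new emotion-classification head."""
    model = resnet50(weights=ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

if __name__ == "__main__":
    # Synthetic slow-time samples for one range bin (assumed 1 s at 1 kHz PRF).
    iq = np.random.randn(1000) + 1j * np.random.randn(1000)
    spec = micro_doppler_spectrogram(iq, fs_slow=1000.0)

    # Replicate the single-channel map to 3 channels and resize for the CNN.
    x = torch.from_numpy(spec).unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)
    x = nn.functional.interpolate(x, size=(224, 224), mode="bilinear")

    model = build_classifier().eval()
    with torch.no_grad():
        logits = model(x)
    print(EMOTIONS[logits.argmax(dim=1).item()])
```

In practice the head would be fine-tuned on labeled spectrograms per emotion class; the same spectrogram input could be swapped into VGG16, VGG19, or SqueezeNet backbones for the kind of model comparison the keywords suggest.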
| Field | Value |
|---|---|
| Original language | English |
| Article number | 8002310 |
| Journal | IEEE Transactions on Instrumentation and Measurement |
| Volume | 74 |
| Early online date | 6 Mar 2025 |
| DOIs | |
| Publication status | Published - 2025 |
Keywords
- Cognitive load
- ResNet50
- VGG16
- VGG19
- radio frequency (RF) sensing
- SqueezeNet
- ultra-wideband (UWB) radar
Projects
- 1 Finished
COG-MHEAR: Towards cognitively-inspired 5G-IoT enabled, multi-modal Hearing Aids
Ratnarajah, T. (Principal Investigator) & Arslan, T. (Co-investigator)
1/03/21 → 28/02/25
Project: Research