Description
Experimental Setup:
Radar data was collected using the Novelda XeThru X4M03 UWB impulse radar.
Subjects performed expressions corresponding to six basic emotions in randomized sequences under ambient room conditions. The radar was placed at 0.5 m from the subject, capturing reflected RF signals, which were processed into micro-Doppler signatures.
Data Format and Organization:
File Format: MATLAB (.mat) files
Total Samples: 600
Emotion Classes:
- Anger
- Disgust
- Fear
- Happy
- Neutral
- Sadness
Each MAT file contains:
- Raw radar time-series
- Preprocessed STFT spectrograms
- Emotion label
- Sample duration and timestamp
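As a hedged illustration of how one such file might be read in Python: the variable names below ("raw_signal", "label", "duration_s") are assumptions, not documented field names, and a synthetic stand-in file is created so the sketch runs without the dataset.

```python
import numpy as np
from scipy.io import savemat, loadmat

# Build a synthetic stand-in for one sample (3 s at an assumed
# sampling rate of 500 Hz) so this sketch is self-contained.
fs = 500
sample = {
    "raw_signal": np.random.randn(3 * fs),  # raw radar time-series
    "label": "Happy",                       # emotion label
    "duration_s": 3.0,                      # sample duration
}
savemat("sample_demo.mat", sample)

data = loadmat("sample_demo.mat")
raw = data["raw_signal"].squeeze()          # loadmat returns 2-D arrays
label = str(data["label"][0])
print(raw.shape, label)
```

In practice the actual field names should be checked with `data.keys()` after loading one file from the dataset.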
Feature Extraction and Processing:
1. Preprocessing:
Clutter removal via Moving Target Indication (MTI)
Butterworth filtering
Mapping to range-time intensity matrices
2. Spectrogram Generation:
STFT with window size = 256, overlap = 75%, FFT padding factor = 16
3. Spectrograms → Input for DL Models:
Converted into grayscale spectrogram images
Used as input for modified pre-trained CNNs (e.g., VGG, ResNet, SqueezeNet)
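The three processing steps above can be sketched with NumPy/SciPy. The sampling rate (500 Hz), the Butterworth order and cutoff, and the simple mean-subtraction form of MTI are assumptions made for illustration; only the STFT parameters (window 256, 75% overlap, 16× FFT padding) come from the description.

```python
import numpy as np
from scipy.signal import butter, filtfilt, stft

fs = 500                                 # assumed sampling rate
x = np.random.randn(3 * fs)              # stand-in for a 3 s radar stream

# 1. Clutter removal (MTI): subtract the slow-time mean to suppress
#    static reflections, then apply a high-pass Butterworth filter.
x = x - x.mean()
b, a = butter(4, 5, btype="highpass", fs=fs)  # assumed order/cutoff
x = filtfilt(b, a, x)

# 2. Spectrogram: STFT with window 256, 75% overlap (noverlap = 192),
#    FFT padding factor 16 (nfft = 256 * 16 = 4096).
f, t, Zxx = stft(x, fs=fs, nperseg=256, noverlap=192, nfft=256 * 16)
S = 20 * np.log10(np.abs(Zxx) + 1e-12)   # magnitude in dB

# 3. Grayscale image for the CNNs: normalize to 8-bit intensity.
img = ((S - S.min()) / (S.max() - S.min()) * 255).astype(np.uint8)
print(img.shape)                         # (frequency bins, time frames)
```

The resulting 8-bit array can then be resized to the input resolution expected by the chosen pre-trained CNN (e.g. 224×224 for VGG/ResNet).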
Applications and Use Cases:
- Emotion-aware hearing aids and assistive technology
- Real-time cognitive load detection and enhancement
- Contactless, privacy-preserving human-computer interaction
Abstract
This dataset provides radar-based micro-Doppler measurements of facial expressions used for emotion classification in a privacy-preserving, contactless, and non-invasive manner. Data was captured using the XeThru X4M03 Ultra-Wideband (UWB) impulse radar sensor from 600 samples representing six basic emotions: anger, disgust, fear, happy, neutral, and sadness (100 samples each). Each sample corresponds to a 3-second radar signal stream, post-processed into time–frequency spectrograms using Short-Time Fourier Transform (STFT). The resulting spectrograms were used to train deep learning (DL) classifiers, including VGG16, VGG19, ResNet50, and SqueezeNet. The ResNet50 model achieved the highest overall classification accuracy of 77% across all emotion classes. The dataset is intended to support research in non-intrusive emotion recognition, especially for integration into multi-modal hearing aids and cognitive-assistive technologies.
Data Citation
Usman Anwar, Yinhuan Dong, Tughrul Arslan, "Privacy-Preserving Facial Emotion Classification Dataset Using Visual Micro-Doppler Signatures from Ultra-Wideband (UWB) Radar", IEEE Dataport, July 23, 2025, doi:10.21227/9jge-dh35
| Date made available | 23 Jul 2025 |
|---|---|
| Publisher | IEEE Dataport |
Projects
- 1 Active
COG-MHEAR: Towards cognitively-inspired 5G-IoT enabled, multi-modal Hearing Aids
Ratnarajah, T. (Principal Investigator), Arslan, T. (Co-investigator) & Ratnarajah, T. (Co-investigator)
Engineering and Physical Sciences Research Council
1/03/21 → 28/02/26
Project: Research