Computer Science
Speech Synthesis 100%
Models 86%
Emotion Recognition 74%
Contexts 67%
Speech Emotion Recognition 55%
Speech Recognition 46%
User 42%
Linguistics 38%
Detection 35%
Evaluation 34%
Application 29%
Acoustic Feature 28%
Text-to-Speech 26%
Representation 24%
Database 21%
Spoken Dialogue 19%
Segmentation 19%
Channels 19%
Personalities 19%
Communication 16%
Affective Computing 15%
Correlation 15%
Robot 15%
Annotation 15%
Broadcast News 14%
Multimodal Information 14%
Predictability 13%
User Experience 13%
Automatic Detection 13%
Query Language 13%
Events 12%
Synthetic Speech 12%
Semantics 11%
Error Correction 11%
Spontaneous Speech 10%
Standards 10%
Robotics 9%
Group Members 9%
Machine Translation 9%
Shared Decision 9%
Control 9%
Speech Understanding 9%
Group Interaction 9%
Sentiment Analysis 9%
Transfer Learning 9%
Decision-Making 9%
Controlled Study 9%
Potential Benefit 9%
Multimodal Interaction 9%
Virtual Training 9%
Arts and Humanities
Prosody 41%
Dialogue 29%
Utterance 28%
Perception 19%
Presupposition 19%
Speech 17%
Speech Production 16%
Lecture 16%
Words 15%
Language 15%
Corpus study 12%
Reported speech 9%
Edinburgh 9%
Empirical study 9%
Listeners 9%
Language differences 9%
Residents 9%
Diary 9%
Impact 9%
Redundancy 9%
Controlled 9%
Back-channel 9%
Quotation 9%
Monosodium Glutamate 9%
Uncanny 9%
Pragmatic functions 9%
Discourse markers 9%
Speech Synthesis 9%
Context 7%
Syllable 7%
Interdisciplinary 7%
Researchers 7%
Corpus 6%
Speaker 6%
Response 6%
Recording 6%
Lexical 5%
Online 5%
Linguistics 5%
Information 5%
Social Sciences
Languages 30%
Perception 22%
Sociolinguistics 14%
Experience 12%
Technology 12%
Listener 11%
Expectations 9%
Disfluency 9%
Evaluation 9%
Project 9%
COVID-19 9%
Interpretation 9%
Difference 9%
Language Modeling 9%
Communities 9%
Technology Development 9%
Emotion Recognition 9%
Analysis 7%
Groups 7%
Information 7%
Subject 6%
English Language 5%