Context-sensitive evaluation of automatic speech recognition: Considering user experience & language variation

Nina Markl, Catherine Lai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Commercial Automatic Speech Recognition (ASR) systems tend to show systemic predictive bias for marginalised speaker/user groups. We highlight the need for an interdisciplinary and context-sensitive approach to documenting this bias, incorporating perspectives and methods from sociolinguistics, speech and language technology, and human-computer interaction, in the context of a case study. We argue that evaluation of ASR systems should be disaggregated by speaker group, include qualitative error analysis, and consider user experience in a broader sociolinguistic and social context.
Original language: English
Title of host publication: Proceedings of the First Workshop on Bridging Human–Computer Interaction and Natural Language Processing
Editors: Su Lin Blodgett
Publisher: ACL Anthology
Pages: 34-40
Publication status: Published - Apr 2021
Event: Bridging Human–Computer Interaction and Natural Language Processing Workshop: Workshop at EACL 2021
Duration: 19 Apr 2021 – 20 Apr 2021
https://sites.google.com/view/hciandnlp

Workshop

Workshop: Bridging Human–Computer Interaction and Natural Language Processing Workshop
Abbreviated title: HCI + NLP
Period: 19/04/21 – 20/04/21
Internet address: https://sites.google.com/view/hciandnlp

Keywords

  • ASR
  • HCI
  • sociolinguistics
  • speech technology
  • bias
