Designing Speech and Language Interactions

Cosmin Munteanu, Matt Jones, Steve Whittaker, Sharon Oviatt, Matthew Aylett, Gerald Penn, Stephen Brewster, Nicolas d'Alessandro

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Speech and natural language remain our most natural forms of interaction; yet the HCI community has been very timid about focusing its attention on designing and developing spoken language interaction techniques. While significant effort has been spent and progress made in speech recognition, synthesis, and natural language processing, there is now sufficient evidence that many real-life applications using speech technologies do not require 100% accuracy to be useful. This is particularly true if such systems are designed with complementary modalities that better support their users or enhance the systems' usability. Engaging the CHI community now is timely: many recent commercial applications, especially in the mobile space, are already tapping the increased interest in and need for natural user interfaces (NUIs) by enabling speech interaction in their products. This multidisciplinary, one-day workshop will bring together interaction designers, usability researchers, and general HCI practitioners to analyze the opportunities and directions to take in designing more natural interactions based on spoken language, and to look at how we can leverage recent advances in speech processing in order to gain widespread acceptance of speech and natural language interaction.
Original language: English
Title of host publication: CHI '14 Extended Abstracts on Human Factors in Computing Systems
Place of publication: New York, NY, USA
Number of pages: 4
ISBN (Print): 978-1-4503-2474-8
Publication status: Published - 2014


Keywords: automatic speech recognition, natural language processing, natural user interfaces, speech and language interaction
