A demonstration of multimodal debrief generation for AUVs, post-mission and in-mission

Helen Hastie, Xingkun Liu, Pedro Patron

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

A prototype will be demonstrated that takes activity and sensor data from Autonomous Underwater Vehicles (AUVs) and automatically generates multimodal output in the form of mission reports containing natural language and visual elements. Specifically, the system takes time-series sensor data and mission logs, together with mission plans, as its input, and generates natural-language descriptions of the missions, which are verbalised by a Text-to-Speech Synthesis (TTS) engine in a multimodal system. In addition, we will demonstrate an in-mission system that provides a stream of real-time updates in natural language, thus improving the operator's situation awareness and increasing trust in the system during missions.
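The paper itself does not include implementation details, so the following is only a minimal sketch of the kind of pipeline the abstract describes: structured mission-log events mapped to natural-language updates that could be streamed in-mission or collated into a post-mission report. The event schema, template set, and all names (MissionEvent, realise, the example events) are hypothetical, not taken from the authors' system.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MissionEvent:
    """A single entry parsed from an AUV mission log (hypothetical schema)."""
    timestamp: datetime
    vehicle: str
    action: str   # e.g. "dive", "survey", "surface"
    detail: str   # free-text parameter, e.g. a depth or area name

# Hypothetical template-based realiser: one English sentence per event type.
TEMPLATES = {
    "dive":    "{time}: {vehicle} dived to {detail}.",
    "survey":  "{time}: {vehicle} began a sonar survey of {detail}.",
    "surface": "{time}: {vehicle} surfaced at {detail}.",
}

def realise(event: MissionEvent) -> str:
    """Map a structured log event to a natural-language update."""
    template = TEMPLATES.get(event.action, "{time}: {vehicle} performed {detail}.")
    return template.format(
        time=event.timestamp.strftime("%H:%M"),
        vehicle=event.vehicle,
        detail=event.detail,
    )

if __name__ == "__main__":
    events = [
        MissionEvent(datetime(2016, 11, 12, 9, 5), "AUV-1", "dive", "20 m"),
        MissionEvent(datetime(2016, 11, 12, 9, 20), "AUV-1", "survey", "area Alpha"),
        MissionEvent(datetime(2016, 11, 12, 10, 2), "AUV-1", "surface", "waypoint WP-3"),
    ]
    for e in events:  # in-mission mode: emit each update as the event arrives
        print(realise(e))
```

In this sketch the same realiser serves both modes: streaming each sentence as it is generated corresponds to the in-mission updates, while concatenating the sentences (and passing them to a TTS engine) corresponds to the post-mission debrief report.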
Original language: English
Title of host publication: Proceedings of the 2016 International Conference on Multimodal Interaction
Publisher: Association for Computing Machinery (ACM)
Pages: 404-405
Number of pages: 2
ISBN (Electronic): 9781450345569
DOIs
Publication status: Published - 31 Oct 2016
Event: 18th ACM International Conference on Multimodal Interaction - Tokyo, Japan
Duration: 12 Nov 2016 – 16 Nov 2016
Conference number: 18
https://icmi.acm.org/2016/index.php?id=home

Conference

Conference: 18th ACM International Conference on Multimodal Interaction
Abbreviated title: ICMI 2016
Country/Territory: Japan
City: Tokyo
Period: 12/11/16 – 16/11/16
Internet address: https://icmi.acm.org/2016/index.php?id=home

Keywords / Materials (for Non-textual outputs)

  • Multimodal output
  • natural language generation
  • autonomous systems
