Abstract / Description of output
A prototype will be demonstrated that takes activity and sensor data from Autonomous Underwater Vehicles (AUVs) and automatically generates multimodal mission reports combining natural language and visual elements. Specifically, the system takes time-series sensor data, mission logs, and mission plans as input, and generates natural-language descriptions of the missions, which can be verbalised by a Text-to-Speech Synthesis (TTS) engine in a multimodal system. In addition, we will demonstrate an in-mission system that provides a stream of real-time updates in natural language, improving the operator's situation awareness and increasing trust in the system during missions.
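To make the idea concrete, the core step of turning a time-series sensor reading plus a mission plan into a natural-language update could be sketched as a simple template-based generator. This is a minimal illustrative sketch only, not the authors' implementation; the `SensorReading` schema, field names, and the one-metre deviation threshold are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """One time-stamped AUV sensor sample (hypothetical schema)."""
    time_s: float      # seconds since mission start
    depth_m: float     # depth below the surface, in metres
    speed_mps: float   # forward speed, in metres per second

def describe(reading: SensorReading, planned_depth_m: float) -> str:
    """Render one sensor reading as a natural-language status update,
    comparing the observed depth against the mission plan."""
    deviation = reading.depth_m - planned_depth_m
    if abs(deviation) < 1.0:  # assumed tolerance for "on plan"
        depth_clause = f"holding the planned depth of {planned_depth_m:.0f} m"
    else:
        direction = "below" if deviation > 0 else "above"
        depth_clause = (f"{abs(deviation):.1f} m {direction} "
                        f"the planned depth of {planned_depth_m:.0f} m")
    return (f"At t={reading.time_s:.0f} s the vehicle is {depth_clause}, "
            f"moving at {reading.speed_mps:.1f} m/s.")

print(describe(SensorReading(time_s=120, depth_m=52.4, speed_mps=1.5),
               planned_depth_m=50))
# → At t=120 s the vehicle is 2.4 m below the planned depth of 50 m, moving at 1.5 m/s.
```

In the demonstrated system such updates would be produced continuously during a mission and passed to the TTS engine; the sketch above only shows the text-generation step for a single reading.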
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 2016 International Conference on Multimodal Interaction |
| Publisher | ACM Association for Computing Machinery |
| Pages | 404-405 |
| Number of pages | 2 |
| ISBN (Electronic) | 9781450345569 |
| Publication status | Published - 31 Oct 2016 |
| Event | 18th ACM International Conference on Multimodal Interaction - Tokyo, Japan<br>Duration: 12 Nov 2016 → 16 Nov 2016<br>Conference number: 18<br>https://icmi.acm.org/2016/index.php?id=home |
Conference
| Conference | 18th ACM International Conference on Multimodal Interaction |
|---|---|
| Abbreviated title | ICMI 2016 |
| Country/Territory | Japan |
| City | Tokyo |
| Period | 12/11/16 → 16/11/16 |
| Internet address | https://icmi.acm.org/2016/index.php?id=home |
Keywords / Materials (for Non-textual outputs)
- multimodal output
- natural language generation
- autonomous systems