ROSMI: A Multimodal Corpus for Map-based Instruction-Giving

Miltiadis Marios Katsakioris, Ioannis Konstas, Pierre-Yves Mignotte, Helen Hastie

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

We present the publicly-available Robot Open Street Map Instructions (ROSMI) corpus: a rich multimodal dataset of map and natural language instruction pairs that was collected via crowdsourcing. The goal of this corpus is to aid in the advancement of state-of-the-art visual-dialogue tasks, including reference resolution and robot-instruction understanding. The domain described here concerns robots and autonomous systems being used for inspection and emergency response. The ROSMI corpus is unique in that it captures interaction grounded in map-based visual stimuli that is not only human-readable but also contains the rich metadata needed to plan and deploy robots and autonomous systems, thus facilitating human-robot teaming.
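
To give a concrete feel for the data, below is a minimal sketch of how such map-instruction pairs might be read; the JSON layout and the field names (instruction, map_image, metadata) are illustrative assumptions for this sketch, not the published ROSMI schema.

    import json

    # Minimal reader for a hypothetical ROSMI-style JSON export.
    # Field names are assumptions for illustration; consult the actual
    # corpus release for its real schema.
    def load_pairs(path):
        with open(path, encoding="utf-8") as f:
            records = json.load(f)
        for rec in records:
            yield {
                "instruction": rec["instruction"],    # natural-language instruction
                "map_image": rec["map_image"],        # path to the map stimulus
                "metadata": rec.get("metadata", {}),  # deployment metadata, if any
            }

    if __name__ == "__main__":
        for pair in load_pairs("rosmi_pairs.json"):  # hypothetical filename
            print(pair["instruction"])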
Original language: English
Title of host publication: ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction
Place of Publication: United States
Publisher: Association for Computing Machinery (ACM)
Pages: 680-684
Number of pages: 5
ISBN (Electronic): 9781450375818
DOIs:
Publication status: Published - 21 Oct 2020
Event: ICMI '20: International Conference on Multimodal Interaction (Virtual Event), Netherlands
Duration: 25 Oct 2020 - 29 Oct 2020

Conference

Conference: ICMI '20: International Conference on Multimodal Interaction (Virtual Event)
Abbreviated title: 22nd ICMI 2020
Country/Territory: Netherlands
Period: 25/10/20 - 29/10/20

Keywords

  • crowdsourcing
  • data collection
  • dialogue system
  • human-robot interaction
  • multimodal
