Information Visualization Evaluation Using Crowdsourcing

Rita Borgo, Luana Micallef, Benjamin Bach, Fintan McGee, Bongshin Lee

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

Visualization researchers have been increasingly leveraging crowdsourcing approaches to overcome limitations of controlled laboratory experiments, such as small participant sample sizes and narrow participant demographics. However, as a community, we have little understanding of when, where, and how researchers use crowdsourcing approaches for visualization research. In this paper, we review the use of crowdsourcing for evaluation in visualization research. We analyzed 190 crowdsourcing experiments, reported in 82 papers that were published in major visualization conferences and journals between 2006 and 2017. We tagged each experiment along 36 dimensions that we identified for crowdsourcing experiments. We grouped these dimensions into six important aspects: study design & procedure, task type, participants, measures & metrics, quality assurance, and reproducibility. We report the main findings of our review and discuss challenges and opportunities for improving the conduct of crowdsourcing studies in visualization research.
Original language: English
Pages (from-to): 573-595
Number of pages: 23
Journal: Computer Graphics Forum
Issue number: 3
Early online date: 10 Jul 2018
Publication status: E-pub ahead of print - 10 Jul 2018

Keywords / Materials (for Non-textual outputs)

  • Visualization
  • Crowdsourcing
  • Survey
