Abstract / Description of output
Visualization researchers have increasingly leveraged crowdsourcing approaches to overcome a number of limitations
of controlled laboratory experiments, including small sample sizes and narrow participant demographics. However, as a
community, we have little understanding of when, where, and how researchers use crowdsourcing approaches for
visualization research. In this paper, we review the use of crowdsourcing for evaluation in visualization research.
We analyzed 190 crowdsourcing experiments, reported in 82 papers that were published in major visualization conferences
and journals between 2006 and 2017. We tagged each experiment along 36 dimensions that we identified for crowdsourcing
experiments and grouped these dimensions into six important aspects: study design & procedure, task type, participants,
measures & metrics, quality assurance, and reproducibility. We report the main findings of our review and discuss
challenges and opportunities for improving crowdsourcing studies in visualization research.
| Original language | English |
|---|---|
| Pages (from-to) | 573-595 |
| Number of pages | 23 |
| Journal | Computer Graphics Forum |
| Volume | 37 |
| Issue number | 3 |
| Early online date | 10 Jul 2018 |
| DOIs | |
| Publication status | E-pub ahead of print - 10 Jul 2018 |
Keywords / Materials (for Non-textual outputs)
- Visualization
- Crowdsourcing
- Survey