Crowdsourcing for Information Visualization: Promises and Pitfalls

Rita Borgo, Bongshin Lee, Benjamin Bach, Sara Fabrikant, Radu Jianu, Andreas Kerren, Stephen Kobourov, Fintan McGee, Luana Micallef, Tatiana von Landesberger, Katrin Ballweg, Stephan Diehl, Paolo Simonetto, Michelle Zhou

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Crowdsourcing offers great potential to overcome the limitations of controlled lab studies. To guide future designs of crowdsourcing-based studies for visualization, we review visualization research that has attempted to leverage crowdsourcing for empirical evaluations of visualizations. We discuss six core aspects for successful employment of crowdsourcing in empirical studies for visualization – participants, study design, study procedure, data, tasks, and metrics & measures. We then present four case studies, discussing potential mechanisms to overcome common pitfalls. This chapter will help the visualization community understand how to effectively and efficiently take advantage of the exciting potential crowdsourcing has to offer to support empirical visualization research.
Original language: English
Title of host publication: Evaluation in the Crowd: Crowdsourcing and Human-Centered Experiments
Publisher: Springer
Number of pages: 43
ISBN (Electronic): 978-3-319-66435-4
ISBN (Print): 978-3-319-66434-7
Publication status: Published - 28 Sept 2017
