Abstract
Research and cultural heritage institutions have, in recent years, given increasing consideration to crowdsourcing in order to improve access to, and the quality of, their digital resources. Such crowdsourcing tasks take many forms, including tagging, identifying, text-correcting, annotating, and transcribing information, often creating new data in the process. Those considering launching their own cultural heritage crowdsourcing initiative can now draw upon a rich body of evaluative research dealing with the quantity of contributions made by volunteers, the motivations of those who participate in such projects, the establishment and design of crowdsourcing initiatives, and the public engagement value of doing so (Haythornthwaite, 2009; Dunn and Hedges, 2012; Causer and Wallace, 2012; Romeo and Blaser, 2011; Holley, 2009). Scholars have also sought to posit general models for successful crowdsourcing for cultural heritage, and attempts have also been made to assess the quality of data produced through such initiatives (Noordegraaf et al., 2014; Causer and Terras, 2014b; Dunn and Hedges, 2013; McKinley, 2015; Nottamkandath et al., 2014). All of these studies are enormously important in understanding how to launch and run a successful humanities crowdsourcing programme. However, there is a shortage of detailed evaluations of whether or not humanities crowdsourcing, and crowdsourced transcription in particular, produces data of a high enough standard to be used in scholarly work, and whether or not it is an economically viable and sustainable endeavour. Focusing upon the economics of humanities crowdsourcing may appear somewhat crass amidst discussions of its public engagement value and of the opening up of research and resources to the wider community. Nevertheless, it is vital to have some idea of the economics of humanities crowdsourcing if cultural heritage institutions and research funding bodies, ever governed by budgets and bottom lines, are to be persuaded to support such potentially valuable initiatives.
This paper takes the award-winning crowdsourced transcription initiative, Transcribe Bentham, as its case study. In a prior discussion of Transcribe Bentham, we made some tentative findings in this regard, based upon data from 1,305 transcripts produced by volunteers between 1 October 2012 and 19 July 2013 (Causer and Terras, 2014b). The present paper expands upon, and moves beyond, these exploratory findings by introducing data from a further 3,059 transcripts, submitted between 20 July 2013 and 27 June 2014, all of which were produced by volunteers using an improved version of the Transcribe Bentham interface, the ‘Transcription Desk’. The additional data allows us to draw conclusions about the impact of this improved interface, about which we could previously only speculate. That these 4,364 transcripts were gathered over a period of twenty months also allows us to identify long-term trends in the rate of volunteer participation and the quality of submissions.
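To illustrate the kind of longitudinal analysis described above, the sketch below shows how monthly submission counts and moderation-pass rates might be computed from transcript metadata. This is a minimal illustration only, not the authors' method: the file name `transcripts.csv` and the `submitted` and `passed_moderation` columns are hypothetical stand-ins for whatever export a project's transcription platform provides.

```python
# Minimal sketch (assumed data layout, not the Transcribe Bentham pipeline):
# one CSV row per submitted transcript, with an ISO 'submitted' date and a
# 'passed_moderation' flag recording whether staff judged it fit for use.
import csv
from collections import Counter
from datetime import date

def monthly_trends(path):
    """Tally submissions and moderation passes per calendar month."""
    submissions = Counter()
    passes = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            d = date.fromisoformat(row["submitted"])
            month = (d.year, d.month)
            submissions[month] += 1
            if row["passed_moderation"] == "yes":
                passes[month] += 1
    for month in sorted(submissions):
        rate = passes[month] / submissions[month]
        print(f"{month[0]}-{month[1]:02d}: "
              f"{submissions[month]} transcripts, "
              f"{rate:.0%} passed moderation")

monthly_trends("transcripts.csv")
```

Tracking these two series month by month is one straightforward way to observe both participation rates and submission quality over a twenty-month window such as the one studied here.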
| Field | Value |
| --- | --- |
| Original language | English |
| Article number | DSH-2016-0036 |
| Pages (from-to) | 1-21 |
| Journal | Digital Scholarship in the Humanities |
| Early online date | 15 Jan 2018 |
| DOIs | |
| Publication status | E-pub ahead of print - 15 Jan 2018 |
Keywords
- crowdsourcing
- transcription
- economics
- user studies
- Bentham studies
- Digital Humanities
- digitisation
- XML