Difference between revisions of "Crowdsourcing Bib - RGC Grant"
Revision as of 22:15, 16 June 2019
(Temporary spot until account is established on other wiki)
Research Study
Transcribing Handwritten Records
- How to Transcribe from the Library of Congress
- Transcription Tips from The National Archives
- Instructions for Volunteers from the Smithsonian Transcription Center
- Transcription Guidelines from Transcribe Bentham Project
- Transcription Tips from DIY History at U Iowa
- Europeana Transcription Tutorial
Data Extraction and Using Spreadsheets
- Old Weather Transcribing Guide
- Transcribing between the lines: Crowd-sourcing historic data collection
- Natural History Museum of Utah Catalog Notes
- Midwives Register
Transcription Tools
Methods Articles
- Causer, T., Tonra, J., & Wallace, V. 2012. Transcription maximized; expense minimized? Crowdsourcing and editing The Collected Works of Jeremy Bentham. Literary and Linguistic Computing, 27(2), 119–137. doi.org/10.1093/llc/fqs004.
This report is framed around five key questions: is the crowdsourcing of manuscript transcription cost effective; is it exploitative; is the product high quality; is the project sustainable and the product permanent; and how can success be measured. It provides a review of major crowdsourcing projects to extract data, correct OCR, and transcribe indexes, while noting that the transcription of manuscript collections is more complicated. The meat of the article addresses the conceptualization and execution of the project, including: development of the transcription tool (a customized MediaWiki allowing for encoding of linguistic and bibliographic data using TEI XML tags); the contributions of transcribers, which were significant; moderation by staff, which was intensive; and assessment of impact and success. A large portion of the budget was spent on digitizing materials in high resolution so that they could be used by transcribers, and on employing full-time staff to moderate volunteer transcriptions and provide feedback – both important points to consider in designing and managing a project like this on any scale. Finally, this project employed a user survey to gauge factors that either motivated or discouraged volunteers, which provides a useful model of an assessment instrument for crowdsourcing feasibility in LAM environments.
- Mika, K., DeVeer, J., & Rinaldo, R. 2017. Crowdsourcing natural history archives: Tools for extracting transcriptions and data. Biodiversity Informatics, 12.
- Thomer, A., Vaidya, G., Guralnick, R., Bloom, D., & Russell, L. 2012. From documents to datasets: A MediaWiki-based method of annotating and extracting species observations in century-old field notebooks. ZooKeys, 209, 235-253. doi.org/10.3897/zookeys.209.3247.
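The extraction workflow these methods articles describe (marking up transcriptions with wiki templates, then pulling structured records out of them) can be sketched in a few lines of Python. The template name and field names below are hypothetical, invented for illustration; real projects like the one Thomer et al. describe define their own markup conventions:

```python
import re

# Hypothetical wiki-style annotation embedded in a volunteer transcription.
transcription = """
Saw two cranes near the river bend this morning.
{{observation|species=Grus americana|count=2|date=1903-05-14|place=river bend}}
Weather clear, wind from the west.
{{observation|species=Castor canadensis|count=1|date=1903-05-14|place=beaver dam}}
"""

TEMPLATE = re.compile(r"\{\{observation\|([^}]*)\}\}")

def extract_observations(text):
    """Turn {{observation|key=value|...}} templates into a list of dicts."""
    records = []
    for match in TEMPLATE.finditer(text):
        fields = dict(part.split("=", 1) for part in match.group(1).split("|"))
        records.append(fields)
    return records

for rec in extract_observations(transcription):
    print(rec)
```

The point of the sketch is the division of labor: volunteers add lightweight inline markup while transcribing, and a script turns the annotated pages into a dataset afterward, so the transcription interface stays simple.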
Feasibility Study
- McKinley, D. 2012. Practical management strategies for crowdsourcing in libraries, archives and museums.
Background Readings
- Schenk, E. & Guittard, C. (2011). Towards a characterization of crowdsourcing practices. Journal of Innovation Economics & Management, 7(1), 93-107. doi:10.3917/jie.007.0093.
According to this article, what we are doing can be characterized as a form of "content crowdsourcing" (CS) that they refer to as "Integrative Crowdsourcing":
Integrative CS will be relevant when the client firm seeks to build data or information bases. Therefore Integrative CS is a form of content Crowdsourcing. While gathering information or data at an individual’s level can be unproblematic, building a data base generally requires significant amounts of resources. The rationale of integrative CS therefore lies in the cost of building large data or information bases. Since individuals within the crowd are heterogeneous, Crowdsourcing enables the client firm to gather a variety of contents. The firm seeking to implement integrative CS should however be aware of integration issues. Data or information stemming from various origins might be incompatible or redundant if no precaution is taken. Precautions include the definition of a data format and the sound selection of data sources. [Emphasis added]
There are other relevant sections of this paper including:
- A way to distinguish the 1992 data transcription tasks from the more complex cognitive efforts required to extract data from the written articles from the 1961 season. See these sections of the paper:
- Crowdsourcing of simple tasks
- Crowdsourcing of complex tasks
- Discussion of incentives in this section: Discussion: benefits and pitfalls
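The integration issues flagged in the quotation above (defining a data format up front, then reconciling incompatible or redundant contributions) can be illustrated with a minimal sketch. The field names, date formats, and sample records here are invented for illustration, not drawn from Schenk & Guittard:

```python
from datetime import datetime

# Contributions arrive in inconsistent shapes; these records are illustrative.
raw_contributions = [
    {"species": "Grus americana", "date": "14/05/1903"},
    {"species": "grus americana", "date": "1903-05-14"},   # same observation, different format
    {"species": "Castor canadensis", "date": "1903-05-15"},
]

DATE_FORMATS = ("%Y-%m-%d", "%d/%m/%Y")

def normalize(record):
    """Coerce a contribution into one agreed-upon data format."""
    for fmt in DATE_FORMATS:
        try:
            date = datetime.strptime(record["date"], fmt).date().isoformat()
            break
        except ValueError:
            continue
    else:
        raise ValueError(f"unrecognized date: {record['date']}")
    return {"species": record["species"].strip().capitalize(), "date": date}

# Deduplicate after normalization: redundant records collapse to one.
unique = {tuple(sorted(normalize(r).items())) for r in raw_contributions}
print(len(unique))  # → 2
```

This is the "precaution" the authors describe: once a target format is fixed, heterogeneous inputs from the crowd can be normalized into it and redundancy detected mechanically, rather than reconciled by hand after the fact.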
- Simperl, E. (2015). How to Use Crowdsourcing Effectively: Guidelines and Examples. LIBER Quarterly, 25(1), 18–39. DOI: http://doi.org/10.18352/lq.9948
This article has a great Guide to Crowdsourcing! (It's a subsection near the end of the article.)
- Brabham, D.C. (2013). Crowdsourcing (book). MIT Press. Available as an ebook through UA Libraries
This should be one of our "go to" resources. It's a short book. This quote is indicative of its usefulness:
Crowdsourcing is a problem-solving model because it enables an organization confronted with a problem and desiring a goal state to scale up the task environment dramatically and enlarge the solver base by opening up the problem to an online community through the Internet.
This quote is from Chapter 1 of the book
- Terras, M.M. (2016). Crowdsourcing in the Digital Humanities. In Schreibman, S., Siemens, R., & Unsworth, J. (eds.), A New Companion to Digital Humanities (pp. 420–439). Wiley-Blackwell.
This chapter has a section that is a nice background discussion of historical document transcription.