I ran a half-day workshop on ‘Designing successful digital humanities crowdsourcing projects’ at the Digital Humanities 2013 conference in sunny Lincoln, Nebraska.
I’ve started a braindump of ‘emerging best practice’ tips and questions from the workshop below…
Tips for designing humanities crowdsourcing projects
- No worms without eggs. One participant reviewed Zooniverse’s Worm Watch, but got bored and moved on after watching six videos in which the worm didn’t lay an egg. The moral of this story is to give new participants a task item with a known payload first. You may want to do this anyway to check the quality of their contribution (e.g. testing for accidental or intentionally bad data that doesn’t match the known value) or test their skills, but another reason to do it is to give participants an early win to inspire them.
- Go small to solve problems. Microtasks seem to trump bigger tasks, and breaking a design problem into the smallest possible parts is a good way to get unstuck. Once you’ve decomposed it, you can look for creative solutions to the little problems you’ve uncovered as part of the bigger one.
- Pilot your idea on existing platforms – put photos on Flickr and ask people to transcribe, identify or tag them, ask people to map things on HistoryPin or Ushahidi, etc.
- Learn by doing, not by reading – as far as possible, show people what they need to do rather than making them wade through text before starting.
- A strong tagline makes a big difference. It’ll take time to find the strapline that perfectly summarises the what and why of your project – keep at it until you’ve got a compelling description that’ll make people want to take part in your project.
- If you can’t think of a great tagline, it might mean your project isn’t making an important contribution – if you can’t answer the question ‘who’s desperate to use the results of this project?’, maybe the project shouldn’t go ahead.
- The barbarians aren’t at the gates. In online projects, credentials derive from actions, not institutional affiliation. Borrow data validation models from citizen science and judge people’s input on its correctness, not on whether they’re an ‘amateur’ or an invited contributor. If ordinary people can transcribe ancient Greek letters, they can probably manage your task. I know it’s quite difficult for academic projects to cope with this, but think of it like learning to ride a bike without training wheels – perhaps start by inviting known communities to help you test your project, then invite related external groups, refine your models, then take off those training wheels and open it up!
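The ‘known payload’ and data-validation ideas above can be sketched in code. This is a minimal, hypothetical illustration (the function and variable names are my own, not from any real platform): seed new contributors with items whose answer is already known, compare their input against that gold standard, and build up a simple trust score from the results.

```python
def normalise(text: str) -> str:
    """Lower-case and collapse whitespace so trivial variations still match."""
    return " ".join(text.lower().split())

def check_against_gold(contribution: str, gold_answer: str) -> bool:
    """Return True if a contribution matches the known ('gold') answer."""
    return normalise(contribution) == normalise(gold_answer)

def trust_score(results: list[bool]) -> float:
    """Fraction of gold-standard items a contributor got right."""
    return sum(results) / len(results) if results else 0.0

# Example: a new transcriber tries three seeded items with known answers.
results = [
    check_against_gold("The Quick  Brown Fox", "the quick brown fox"),   # matches
    check_against_gold("a totally wrong guess", "the quick brown fox"),  # does not
    check_against_gold("Dear Sir,", "Dear Sir,"),                        # matches
]
print(trust_score(results))  # 2 of 3 correct
```

In practice a real project would use fuzzier matching (edit distance, tolerance for punctuation) and combine scores across many contributors, but the principle is the same: judge input on its correctness, not on who submitted it.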
Sources and further reading are listed at ‘Resources for “Crowdsourcing in Libraries, Museums and Cultural Heritage Institutions”’.
If you found this post useful, you might be interested in my book, Crowdsourcing Our Cultural Heritage.
Designing successful digital humanities crowdsourcing projects by Mia Ridge is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
Based on a work at https://www.miaridge.com/workshop-designing-successful-digital-humanities-crowdsourcing-projects/.