submissions had been received just in the previous 24 hours. The top-ranking submission so far showed an improvement over the current algorithm of 8.5%, still short of the 10% goal, but enough to earn the submitting team a $50,000 “progress prize.”
With extreme numbers of contributors, it is possible to use the crowd to handle both the distribution of a problem and redundant tasking. Two now-famous HITs created on the AMT were searches for missing people, relying on very recent satellite images and on people’s ability to analyze and understand them. The most recent, the search for adventurer Steve Fossett, who went missing in a 17,000-square-mile region of Nevada, drew more than 50,000 volunteers scouring satellite imagery for Fossett’s light aircraft. The tasks were distributed such that each image was analyzed by ten different users, improving the overall quality of the analysis – especially important in an application such as this.
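The redundant-tasking idea described above – each image assigned to ten different users, with the overlap improving quality – can be sketched in a few lines. The function names, worker labels, and plurality-vote resolution here are illustrative assumptions, not part of any actual AMT API:

```python
from collections import Counter
from itertools import cycle

def assign_redundantly(tasks, workers, copies=10):
    """Assign each task to `copies` workers, round-robin, so every task
    is examined multiple times (distinct workers when copies <= len(workers))."""
    pool = cycle(workers)
    return {task: [next(pool) for _ in range(copies)] for task in tasks}

def majority_label(labels):
    """Resolve the redundant answers for one task by simple plurality vote."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical usage: three volunteers per satellite image instead of ten.
assignments = assign_redundantly(["img_001", "img_002"], ["w1", "w2", "w3"], copies=3)
verdict = majority_label(["nothing", "nothing", "possible wreckage"])
```

Real crowdsourcing platforms expose redundancy as a per-task parameter rather than explicit assignment lists, but the quality mechanism is the same: disagreement among redundant answers is resolved by aggregation, most simply by majority vote.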
Crowdsourced tasks that do not offer any sort of direct compensation or incentive, such as the search for Fossett, have an advantage over other tasks in that those contributing their time are very likely doing it because they want to do it, and more often than not are fairly passionate about the task. Naturally, this generally results in higher-quality contributions, or at the very least “power users” who contribute significant amounts to the project. One participant in the search for Fossett, Andy Chantrill, analyzed over 5,000 images of 278-square-foot sections over a period of 30 hours, at one time working for 13 hours straight. In an interview with Steve Friess of Wired.com, self-described Fossett admirer Chantrill noted in an example of altruism that, "Whether they were [useful] or not, I don't know and will perhaps never know, but
“Netflix Prize Leaderboard.” 5 December 2007. <http://www.netflixprize.com/leaderboard>. (5 December 2007).
Friess, Steve. “50,000 Volunteers Join Distributed Search for Steve Fossett.” Wired.com. 11 September 2007. <http://www.wired.com/software/webservices/news/2007/09/distributed_search>. (5 December 2007).
I participated myself, analyzing a little over 1,000 images and flagging two. The analysis of the images did invoke a feeling of being part of a large computer, as well as a sense of futility as the number of available images kept growing, since the images were more often than not nothing but a patch of brown or grey and the process became very tedious. However, the endeavor demonstrated the power of crowdsourcing, as the entire 17,000-square-mile area was examined at least once, 278 square feet at a time.