This paper introduces the concept of verification, how it has evolved at Ushahidi and in sample deployments, alternative ways of thinking about verification, and some suggestions for further research. With over 20,000 installations of Ushahidi and Crowdmap since January 2009, Ushahidi has been used in a number of different contexts – from earthquake support in Haiti, to reports of sexism in Egypt, to election monitoring in the Sudan. In each of these cases, a map is publicized and individuals are encouraged to send reports to it. The process of verifying information reported by the crowd has taken on a variety of different forms depending on the needs and affordances of the environment and the community supporting it. Ushahidi support for verification has until now been limited to a fairly simple backend categorisation system by which administrators tag reports as “verified” or “unverified”. But this is proving unmanageable for large quantities of data and may not be the most effective way of portraying the nuanced levels of verification that can practically be achieved with crowdsourced data. What research needs to be done to test verification alternatives so that Ushahidi and Crowdmap deployers are provided with due diligence tools that can advance trust in their deployments? Can we do this in a way that doesn’t add any new barriers to entry for those who need to have their voice heard on Ushahidi? How can we ensure that this solution is as close as possible to the needs, incentive systems and motivations of deployers and users? What is the next step for Ushahidi verification?

1. The origins of the “verified/unverified” categories
2. How verification works on Ushahidi
3. How others “do” verification
4. Designing for verification
5. Further research

verify
1. To prove the truth of by presentation of evidence or testimony; substantiate.
2. To determine or test the truth or accuracy of, as by comparison, investigation, or reference: experiments that verified the hypothesis. See Synonyms at confirm.
1. The origins of the “verified/unverified” categories

On the 27th of December 2007, President Mwai Kibaki was declared the winner of the hotly contested Kenyan presidential elections. Citing widespread electoral irregularities, members of the opposition protested. Peaceful protests soon turned into targeted ethnic violence that swept through the country. Kenyan lawyer, blogger and technology policy expert, Ory
American Heritage® Dictionary of the English Language, Fourth Edition
Special thanks to Jessica Heinzelman, David Kobia, Nigel McNie, Tim McNamara, Ari Olmos, Kamal Sedra and Zein for their input on how verification is being practiced in their projects.
Okolloh was in the country at the time of the violence and was providing on-the-ground citizen reporting from Nairobi through her blog, one of the main sources of information at the time2. She initially made a request for people to send her stories that were going unreported by the media (there was a government ban on local news media at the time) but quickly became overwhelmed by the volume of reports. When she reluctantly left for Johannesburg with her family on January 3, Okolloh made a plea to her readers and friends to build tools to help document what was happening in the country. The Internet was still available, and so Okolloh made two suggestions to help continue the work that she had been doing before she left.
(I)t also occurs to me that we have no reliable figures of the real death tolls on the ground. Perhaps we can begin to collect information from organizations and individuals on the ground e.g. red cross, hospitals, etc. and start to build a tally online, preferably with names. Most of the people losing their lives will remain nameless, and it might be worthwhile to at least change that. Any volunteers/ideas? 3
One of the developers who responded to Okolloh’s post was David Kobia, a Kenyan living in Atlanta in the U.S. who was relieved to have an opportunity to provide some assistance. Kobia developed the platform and called it Ushahidi (meaning “witness” in Kiswahili), enabling people to send in reports of what was happening on the ground via SMS or the website platform. Kobia recalls that the need for verification emerged only two or three days after the site had been launched. “There was a degree of naivety when you start with five reports, but as you get inundated with 500 text messages, then you think that there needs to be some verification process in place… Getting verified information becomes really critical during crises like Kenya. This was really problematic because people were sending text messages to start rumors… An example would be something like: "Some politician has been assassinated". This could have a serious reverberating effect and so it was important to be sensitive to the situation. You had to vet information and go back and overlay it with mainstream media… We ended up verifying fewer and fewer reports and putting less up on the map.”4 According to Kobia, Ushahidi worked with NGOs, aid agencies and volunteers on the ground to verify reports. When the violence subsided and others wanted to use the tools, the Ushahidi team ramped up development to enable others to use the open source software in their own, independent deployments and then developed Crowdmap, a platform that enables users to set up their own deployment of Ushahidi without having to install it on a web server.
2 Okolloh, O. (2009). Ushahidi, or “testimony”: Web 2.0 tools for crowdsourcing crisis information. Participatory Learning and Action, 59(1), 65-70.
3 Ory Okolloh, “Kenyan Pundit”, 3 January 2008, http://www.kenyanpundit.com/2008/01/03/updatejan-3-445-1100-pm/
4 Interview, 22 September 2011
2. How verification works on Ushahidi and Crowdmap

Reports currently reach an Ushahidi or Crowdmap deployment in two ways: individuals type up a report and send it to a specific deployment via the website or by SMS, or reports from organisations and individuals are pulled into the map by virtue of the hashtags that they contain on Twitter. Reports are not automatically visible on the map but need to be approved by the deployment team. After they have been approved, reports become visible on the public map, and are by default marked as “unverified” until they have been checked as “verified” by the deployment team. Reports that the deployers believe are inaccurate are checked as “unverified” before they leave the verification queue.

In November last year, the ICT4Peace Foundation worked with Ushahidi to develop the Matrix plugin for the Ushahidi platform. The plugin enables deployers on the back-end to make judgements about the source reliability and information probability of each report by choosing from a dropdown menu, as seen below. The plugin assumes the presence of trained reporters in the field working from criteria set by deploying organisations, and consequently a separation between those reporting and those analysing the data (this is not always the case with Ushahidi deployments). The result is a matrix of reports highlighted according to the probability that they are accurate.
Image: An example of the Matrix tool in use on the site Uchaguzi TZ (from http://blog.ushahidi.com/index.php/2010/11/04/analysis-plugin-ict4peace-supported-tool-for-ushahidideployers/)
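As a concrete illustration, the approve-then-verify workflow described above can be modelled as a tiny state machine. This is a hypothetical sketch, not Ushahidi's actual code; the class and function names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Report:
    """A crowdsourced report as it moves through the moderation queue."""
    text: str
    source: str             # e.g. "web", "sms", "twitter"
    approved: bool = False  # not visible on the public map until approved
    verified: bool = False  # default tag after approval is "unverified"

def approve(report: Report) -> None:
    """Deployment team moves the report out of the queue onto the map."""
    report.approved = True

def mark_verified(report: Report) -> None:
    """Deployer ticks the 'verified' box; the actual checking happens
    offline (phone calls, media checks, investigating the sender)."""
    if not report.approved:
        raise ValueError("only approved reports can be marked verified")
    report.verified = True

r = Report("Road blocked near the market", source="sms")
approve(r)        # now publicly visible, tagged "unverified"
mark_verified(r)  # now tagged "verified"
```

Note that in this model, as on the platform, "unverified" is simply the absence of the verified flag: the same state covers reports not yet examined and reports examined and found wanting.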
Ushahidi deployers using the verification functionality recognize that much of the work that they need to do happens outside the platform – phoning people on the
ground near reported incidents, checking media reports, and investigating those who have sent in reports. The amount of due diligence in performing information verification differs extensively among deployers – from quick, surface-level analyses of reports to extensive offline investigation5. Since marking something as verified or not is simply a matter of checking a box, the verification process effectively takes place independently of Crowdmap/Ushahidi, in the time between when the report moves from the “approved” to the “verified” queue.

In the case of U-shahid, the 2010 Egyptian election monitoring project, verifiers were trained by a media professional from Reuters to assess whether reports were accurate or not. Project lead Kamal Sedra said that they sent members of their team to places where activity was being reported to check whether it was happening or not. In the U-shahid case, the main problem was not misinformation but an attempt to bring down the system, when vandals attempted to post 260,000 photos to U-shahid.com6.

In the Sudan VoteMonitor case, the deploying organisation had access to independent observers on the ground to verify information. They used mobile phones to call those who had submitted reports via SMS, and provided the necessary analysis of the reports in order to provide an accurate picture of events on the ground. When, on the second day, the team started to see reports that were suspiciously pro-government, they called relevant election centers or other monitors around the country to check whether what people were saying in their reports was accurate or not. In some cases, the team called the people sending in SMS reports in order to obtain enough information to verify them. Suspicious reports were approved (and were therefore still visible) but were tagged (or left) as “unverified”. According to Zein, 86% of the 257 reports were verified by the end of the project.
Verification becomes an area of contestation when different stakeholders with varying levels of reputation and risk are involved in a project. According to Nigel McNie, who worked on the Christchurch Recovery Map in the aftermath of the New Zealand earthquake, the government officials with whom they were working were concerned about endorsing the project when they weren’t an active player in the verification process. “We had to work to try and convince the officials that the map was a resource they should be endorsing, and as I recall it, one of their chief objections was the "verified" status of the reports - they didn't want to endorse a map that had "verified" reports that weren't actually verified by them. There's no way they could have verified the reports, of course, although they did suggest the addition of "government verified" as a category. I don't believe we implemented this before the map closed.”7 In this case, the “verified” tag becomes a mark of authority, a site of struggle between competing interests, and a locus of reputation and trust for those involved in the effort.
5 Patrick Meier has written extensively about this on his blog, for example at http://irevolution.net/2011/06/21/information-forensics.
6 Interview, Kamal Sedra, 30 June 2011
7 Email correspondence, 2 October 2011
For volunteers working to set up Ushahidi or Crowdmap on behalf of other agencies, this can become a significant challenge – especially when volunteers are physically separated from what is happening on the ground. When the UN OCHA’s Information Services Section formally activated the Standby Taskforce on the 1st of March this year in service of the Libya Crisis Map8, the Verification Team, whose job it was to “triangulate” reports from the Media and SMS Teams, faced similar challenges.
Crisis Mapping is composed of four key components: information collection, visualization, analysis and response. This explains why the Task Force takes a modular approach comprising the following Teams, which can be activated in combination or individually:
• Technology Team – Responsible for all technical tasks related to Crisis Mapping, such as launching crisis mapping platforms and integrating existing SMS and RSS feeds.
• Media Monitoring Team – Monitors online media for relevant reports.
• SMS Team – Monitors incoming SMS from an already existing feed.
• Verification Team – Triangulates reports from the Media and SMS Teams.
• Translation Team – Translates Media and SMS reports from/to English.
• Geo-Location Team – Finds GPS coordinates for Media and SMS reports.
• Analysis Team – Provides summary reports based on the incoming data.
• Humanitarian Team – Comprises existing professional humanitarians who liaise between the Task Force and humanitarian organizations.
http://blog.standbytaskforce.com/about/introducing-the-standby-task-force/
According to Standby Taskforce verification team member Jessica Heinzelman, one of the obvious rules was that “if BBC said the report was unverified, then we tagged the report as unverified”. When it came to verifying reports from social media sources, the team looked at the possible motivations of those writing reports, checking to see whether they were journalists, organisations or unassociated individuals, for example. Near-anonymous individuals who lacked identity cues were at a disadvantage in this system. “Our verification was based on reputation of reporting organizations and associated individuals (marginalizing the voices of the unknown crowd).”
In addition, Heinzelman reported that a staff member of UN OCHA commented to an SBTF Verification Team Coordinator that the verification status of reports did not significantly influence the way in which they used the information. She also said that they had problems dealing with the sheer volume of reports, and that they decided that their goal was to “sort the ‘good’ information from the bad and unverifiable information, not to confirm the veracity of every report”. Heinzelman is advocating for the team to focus efforts on the information that is most critical for relief efforts, because it was proving impossible to verify everything. In the SBTF’s review of the Libya crisis map, volunteers noted that the Verification Team “should provide the organisation activating the SBTF with a list of survey questions in order to better understand the organization’s verification requirements”. This includes topical priorities for focusing limited resources as well as defining criteria for trust and standards for the level of scrutiny required.
8 See the report linked from http://blog.standbytaskforce.com/libya-crisis-map-report/ for the Standby Task Force’s after action review of the Libya Crisis Map.
3. How others “do” verification

I now turn to contrasting Ushahidi deployers’ verification efforts with those undertaken by traditional media organisations, Wikipedia and others who are in the business of presenting knowledge from the crowd online.

1. Seeing verified and unverified sources as part of a constantly evolving story

When a big news story hits and sources are hard to come by, the bar for quality information is much lower because there is a large demand for information and usually little supply. This is when we often hear journalists talking about “as yet unverified accounts” from informants, and where more recently we see television media showing low-quality visuals from people on the scene who might have taken video or pictures on their mobile phones. It is at this stage of reporting of an event that misinformation is most likely to spread. In the first hours after the recent Zanzibar ferry disaster, for example, a picture purporting to be from the scene circulated through online networks before it was found that this was actually a photo from a Filipino boat accident years earlier. While news agencies generally have processes in place to find and verify accurate information, people watching an event on social media will tend to more easily spread inaccurate information. In this case, the misinformation is relatively harmless, but in the context of a potentially violent situation like that experienced in Kenya after the 2008 election, misinformation can prove harmful, sometimes even deadly.

Image: One of the many false images purporting to be the ferry that sank off the coast of Zanzibar (later identified as a Filipino vessel that sank some time ago) – http://storyful.com/stories/1000007737
For the media, an “unverified” account is a way of signalling that, although the journalist/s believes that the account is either accurate or believable enough to publish (that is, there is still a barrier to entry), the situation is such that their usual verification procedures are unavailable to them. “Unverified” in this sense means that as more information becomes available, the “unverified” reports will be replaced with “verified” reports. There is a sense of the development of a story rather than proving the accuracy of reports in a single moment in time. In the case of Ushahidi, reports that have the “unverified” tag may be unverified because they are inaccurate or because they are still in the verification queue. And
reports are not revisited by deployers as time progresses and more information becomes available. Verification is a once-off process in the Ushahidi case: reports are verified or left unverified in a single pass, by administrators going through a list of reports and checking each one for accuracy.

Contrast this with the journalist, and the media organisation that backs her up with verification policies and procedures. The journalist starts off by selecting the most promising leads, investigating each, sometimes coming back to the initial list to find another source to verify what she is hearing, sometimes starting off on an entirely different tack. As the story evolves, she goes back to old reports she might have published, sometimes following up with apologies if she reported inaccurately, sometimes providing evidence refuting earlier reports or evidence in support of the earlier stories.

One way of thinking about the working process of verification within a media organisation versus that currently on Ushahidi is to compare a detective with an administrator. A detective might start her work with one or two “clues” (pieces of information that hint at what might have happened). She goes out into the world to investigate those clues, following up on some, discarding others. Contrast this with an administrator who starts work each day in her office faced with a huge pile of documents, going through each one individually and stamping it with a binary “yes” or “no” before moving to the next page in the pile.

The more that Ushahidi deployers act as information verifiers, the more they editorialise the map and introduce a specific perspective9. This is not necessarily a bad thing: no network is completely open, and introducing a barrier may be a necessary evil in developing useful information.
Seen this way, the deployer becomes a critical part of the trustworthiness of the platform, and Ushahidi’s role becomes providing flags that signal stages of the verification process so that the user can decide how trustworthy the information is.

2. Flagging which information needs verification by the crowd, or evaluating whether information is verified using statistical significance

Another solution might be to provide ways in which the crowd can assist in the verification process, with the system asking (or flagging) whether information being provided needs to be verified or checked. Here, the onus is not on the deploying organisation to give the final word on whether something is verifiable or not. The NextDrop project (used to track water availability in India) does this by sending random people in a particular locality a message asking whether municipal water is available or not. Individuals reply yes or no, and the information becomes verified when a statistically significant sample is reached. The benefit of this type of solution is that NextDrop makes use of people on the ground who have the authority to verify information in a way that would be impossible (or at least incredibly onerous) for one small organisation to do centrally.
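A minimal sketch of this crowd-sampling idea, assuming a simple normal-approximation test on yes/no replies. NextDrop's actual method and thresholds are not described here, so the function, the 95% cutoff and the decision rule are all illustrative assumptions:

```python
import math

def crowd_verdict(yes: int, no: int, z: float = 1.96):
    """Return 'yes' or 'no' once the majority of replies is outside a
    95% margin of error around p = 0.5, else None (keep polling).

    Uses the worst-case standard error sqrt(0.25/n) for a binomial
    proportion, so the rule is deliberately conservative.
    """
    n = yes + no
    if n == 0:
        return None
    p = yes / n
    margin = z * math.sqrt(0.25 / n)  # worst-case std error at p = 0.5
    if p - 0.5 > margin:
        return "yes"
    if 0.5 - p > margin:
        return "no"
    return None  # not yet statistically significant

crowd_verdict(3, 2)   # too few responses: no verdict yet
crowd_verdict(18, 2)  # clear majority: report becomes "verified"
```

The design point is that the platform, not the deployer, decides when enough independent replies have accumulated, so verification scales with the size of the crowd rather than the size of the deployment team.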
9 In Egypt, there were two competing Ushahidi installations with very different political perspectives and constituencies in relation to the recent popular uprising.
The NextDrop team says that they are still tweaking the solution. According to COO Ari Olmos, “We haven't had great results lately when doing this in an automated fashion, especially in the early morning hours.” NextDrop continues to iterate as they learn more about the patterns, incentives and motivations of the community in which they are operating. A new Wikipedia project is looking at a similar solution to the verification of sources, where sources are flagged as needing to be checked or verified by others to ensure that they are being accurately represented.

4. Designing for verification

Four key problems emerged from this initial research that indicate a need for new ways of thinking about verification.

1. Are reports “unverified” just because they haven’t been processed yet?

To the public, there is no visible difference between reports that have been tagged as “unverified” because they are inaccurate and reports that are still in the verification queue. A report that hasn’t been processed should be dealt with very differently from one that has actively been assessed as inaccurate, and the interface should reflect this. Using the binary categories of “verified” and “unverified”, without the ability to turn off this functionality, may not always be meaningful or helpful. In addition, deployers sometimes choose to delete misinformation because there is no tag for signalling something that is obviously false. Unverified information could be useful for discovering the motivations and tactics of stakeholders and, where the cases are less clear, could give the end user an opportunity to make their own decision about the accuracy of the report.

2. The huge volume of reports makes verification difficult – often impossible – to do for every report.

Working to verify hundreds of reports, often with volunteer support only, is not always possible or even preferable.
Depending on the requirements of the stakeholders and the actions (if any) that will be taken on the reports by partner agencies, the design of the interface needs to take into account the working environment that many Ushahidi/Crowdmap deployments are facing. In addition, we need to start thinking of alternative mechanisms for verification help from the crowd, where we either flag reports as needing more information, or contact Ushahidi reporters on the ground asking them to verify reports and then take a statistically relevant sample to arrive at an estimate of the information’s trustworthiness. Such solutions need to be investigated fully before implementation, because a simple tagging system might be preferable depending on the development and user-training costs of a complex system like this relative to its return.

3. What does “verified” mean? How much due diligence has been done?

All Ushahidi and Crowdmap deployments use the “verified” and “unverified” tags, but there seems to be little consensus about what “verified” means in these cases, and how users understand the verification tag. Here we might aim for a shared
understanding between deployers and users rather than to assert a single definition. In the same way that some newspapers more readily publish conjecture and rumour while others have a higher barrier to credibility, so too will Ushahidi and Crowdmap deployers differ in their levels of due diligence.

4. What are the variables necessary for understanding how trustworthy something is in relation to an event/issue?

As reports grow in number and marking each as verified or unverified in a linear fashion becomes impossible, it might be more useful to be able to look at reports according to variables that may correlate with accuracy. Lenses for looking at reports according to location, time and popularity offer some ways of picking out the most important information about a subject, but don’t offer a single recipe for getting the most accurate picture. In this sense, lenses offer a similar function to a “verified” tag, but they put the onus of verification on the user of the information rather than the deploying organisation. The user makes up her own mind about which information to accept and which to discard according to the variables that she sets for accuracy. Time and location are two key variables (or lenses) in deciding whether a report is accurate or not – especially when information about the source is unavailable. Making information about the time and location of a report in relation to an event available, and enabling users and deployers to filter according to these variables, would go a long way towards focusing on reports from people close to an event. It is here that citizen reporting proves most useful and where traditional media often have large gaps.
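A time-and-location lens could work roughly like the following sketch. The report fields (`time`, `lat`, `lon`) and the six-hour/ten-kilometre thresholds are hypothetical choices for illustration, not the platform's actual schema:

```python
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def lens(reports, event_time, event_lat, event_lon,
         within_hours=6, within_km=10):
    """Keep only reports close to an event in both time and space,
    leaving the trust judgement to the person applying the lens."""
    window = timedelta(hours=within_hours)
    return [r for r in reports
            if abs(r["time"] - event_time) <= window
            and haversine_km(r["lat"], r["lon"],
                             event_lat, event_lon) <= within_km]
```

Unlike a “verified” tag, nothing here asserts that a report is true; the filter simply surfaces the reports most likely to come from eyewitnesses, and the user tightens or loosens the windows to suit her own threshold for trust.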
Right now, Ushahidi focuses visualisation on a map with static points, but other visualisations (such as the Matrix, or timeline features that allow searching reports by time period and visualising them on the timeline or the map) will become essential to realising this goal.

5. Ideas for further research

1. Tracking how many deployments are using verification tags, the percentage of reports that remain unverified, the number of reports that are deleted, and how deployments are verifying information.
2. Understanding how users and collaborating partners (e.g. relief organisations) understand “verified” tags. Is this helpful for users and collaborating partners? How does a “verified” tag on a report change the action taken in different contexts?
3. Investigating how other organisations have “done” verification, and how we can start to build that capacity into Ushahidi. The human rights field already has systems in place to ensure that reporters’ information stays secure and that the system receives trusted reports. How might Ushahidi build in these systems to enable better trust and security?