
E-Discovery Insights – Clearwell Systems, Inc.

Reinventing Review in Electronic Discovery


BY VENKAT RANGAN ON DECEMBER 28TH, 2010

In a recent workshop that I attended, I had the privilege of sharing thoughts on the latest electronic discovery trends with other experts in the market. Especially interesting to me was a discussion of the provocatively titled paper, The Demise of Linear Review, by Bennett Borden of Williams Mullen. The paper, citing factual data from several studies and drawing parallels to similar anachronisms of the past, makes excellent arguments for rethinking how legal review is performed in e-discovery.

When linear review is mentioned, the first mental picture one conjures up is boredom. It has generally been associated with the mental state that results from repetitive, monotonous tasks with very little variation. To get a sense of how badly this can affect performance, one only needs to draw upon the many studies of boredom in the workplace, especially in jobs such as mechanical assembly in the 1920s and telephone switchboard operation in the 1950s. In fact, the Pentagon-sponsored study, Implications for the design of jobs with variable requirements, from the Navy Personnel Research and Development Center, presents an excellent treatise on contributors to workplace fatigue, stress, monotony, and distorted perception of time. This is best illustrated in their paper:

Mechanical assembly, inspection and monitoring, and continuous manual control are the principal kinds of tasks most
frequently studied by researchers investigating the relationship between performance and presumed boredom. On
the most repetitive tasks, degradation of performance has typically been found within 30 minutes (Fox & Embry,
1975; Saito, Kishida, Endo, & Saito, 1972). The early studies of the British Industrial Fatigue Board (Wyatt & Fraser,
1929) concluded that the worker’s experience of boredom could be identified by a characteristic output curve on
mechanical assembly jobs. The magnitude of boredom was inversely related to output and was usually marked by a
sharp decrement in the middle of a work period.

How does this apply to linear review? Well, a linear review is most often performed using a review application or tool, simulating a person reading and classifying a pile of documents. The reviewer is asked to read each document and apply a review code based on their judgment. While it appears easy, it can be one of the most stressful, boring, and thankless jobs for a well-educated, well-trained knowledge worker. Even with technology and software advances, a reviewer is required to read documents in relatively constrained workflows. Just scrolling through page after page of a document, comprehending its meaning and intent in the context of the production request, can be stressful. To add to this, reviewers are often measured for productivity based on the number of documents or pages they review per day or per hour. In cases where large numbers of reviewers are involved, there are very direct comparisons of review rates. Finally, the review effort is judged for quality without consideration for the very elements that impact quality. Imagine a workplace task where every action taken by a knowledge worker is monitored and evaluated to the minutest detail.

Given this, it is no wonder that study after study has found that a straight plough-through linear review produces less than desirable results. A useful way to measure the effectiveness of a review exercise is to submit the same collection of documents to multiple reviewers and assess their level of agreement on classifying the reviewed documents into specific categories. One such study, Document Categorization in Legal Electronic Discovery: Computer Classification vs. Manual Review, finds that the level of agreement among human reviewers was only in the 70% range, even when agreement is limited to positive determinations. As noted in the study, previous TREC inter-assessor agreement findings, as well as other studies on this subject (Barnett et al., 2009), show similar and consistent results. Especially noteworthy from TREC is the fact that only 9 of the 40 topics studied had an agreement level higher than 70%, while, remarkably, four topics had no agreement at all. Some of the disagreement is due to the fact that most documents fall along varying levels of responsiveness that cannot easily be reduced to a binary yes/no decision (i.e., the "where do you draw the relevance line" problem). However, a significant source of variability is simply the boredom and fatigue that come with the repetitiveness of the task.
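
To make the measurement idea concrete, here is a minimal sketch, in Python, of how pairwise agreement between reviewers might be computed. The reviewer names, labels, and function are hypothetical; note that raw percent agreement is the simplest such measure, and chance-corrected statistics such as Cohen's kappa are often preferred in practice:

```python
from itertools import combinations

def pairwise_agreement(labels_by_reviewer):
    """Percent agreement between every pair of reviewers.

    labels_by_reviewer: dict mapping reviewer name to a list of
    classifications (e.g. "R" / "NR"), one per document, in the same
    document order for every reviewer.
    """
    results = {}
    for (r1, l1), (r2, l2) in combinations(labels_by_reviewer.items(), 2):
        matches = sum(a == b for a, b in zip(l1, l2))
        results[(r1, r2)] = matches / len(l1)
    return results

# Three reviewers coding the same five documents.
labels = {
    "reviewer_a": ["R", "R", "NR", "R", "NR"],
    "reviewer_b": ["R", "NR", "NR", "R", "NR"],
    "reviewer_c": ["R", "R", "NR", "NR", "NR"],
}
print(pairwise_agreement(labels))
# {('reviewer_a', 'reviewer_b'): 0.8, ('reviewer_a', 'reviewer_c'): 0.8,
#  ('reviewer_b', 'reviewer_c'): 0.6}
```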

A further observation on reviewer effectiveness is available from the TREC 2009 Overview Report, which studied the appeals and adjudication process of that year's Interactive Task. This study offers an excellent opportunity to assess the effectiveness of initial review and the subsequent appeals and adjudication process. As noted in the study, the Interactive Task involves initial run submissions from participating teams, which are sampled and reviewed by human assessors. Upon receiving their initial assessments, participating teams are allowed to appeal those judgments. Given the teams' incentive to improve upon the initial results, they are motivated to appeal as many documents as they can, with each appeal containing a justification for re-classification. As noted in the study, the success rates of appeals were very high, with 84% to 97% of appealed assessments being reversed. Such reversals were across the board and directly proportional to the number of appeals, suggesting that even the assessments that were not appealed could be suspect. Another notable aspect is that the appeals process requires a convincing justification from the appealing team, in the form of a snippet of the document, a document summary, or a portion of the document highlighted for adjudication. This in itself biases the review, making it easier for the topic assessor to get a clearer sense of the document when adjudicating the appeal. This fact is also borne out by the aforementioned Computer Classification vs. Manual Review study, where the senior litigator with knowledge of the matter was able to offer the best adjudications.
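
To see why the unappealed assessments are suspect, a back-of-the-envelope calculation helps. The numbers below are illustrative only, not figures from the TREC report:

```python
# Illustrative numbers only -- not taken from the TREC 2009 report.
assessed = 10_000    # documents judged in the initial assessment
appealed = 1_000     # judgments the teams chose to appeal
overturned = 900     # appeals that succeeded (90%, within the reported 84-97%)

# Successful appeals alone put a floor under the initial error rate:
error_floor = overturned / assessed
print(f"provable error rate: {error_floor:.1%}")   # 9.0%

# Because reversals scaled linearly with the number of appeals filed,
# the pool of errors was evidently not exhausted by the appeals that
# were made -- which is why the unappealed judgments are suspect too.
```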

Given that linear review is flawed, what are the remedies? As noted in Borden's paper, intelligent use of newer technologies, along with a review workflow that leverages them, can offer gains like those demonstrated in other industries. Let's examine a few of them.

Response Variation

Response variation is a strategy for coping with boredom by building variety into the task itself. In mechanical assembly lines, response variation is added through innovative floor and task layouts, such as cellular layouts. On some tasks, response variation may involve only simple alternation behaviors, such as reversing the order in which subtasks are performed; on others, the variety may take more subtle forms, reflected in an inconsistency of response times. In the context of linear review, it can help to organize review batches so that review teams alternate between classifying documents for responsiveness, privilege, confidentiality, and so on. Another interesting approach is to mix the review documents but ask that each batch be reviewed for a single target classification.
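
As a concrete illustration of the batching idea, here is a minimal sketch, in Python, that rotates reviewers through classification targets so no one codes the same category twice in a row. The batch names, category names, and reviewer names are assumptions for illustration:

```python
from itertools import cycle

# Hypothetical classification targets a batch can be reviewed for.
TARGETS = ["responsiveness", "privilege", "confidentiality"]

def assign_batches(batches, reviewers):
    """Round-robin batches across reviewers, rotating each reviewer's
    target classification from one batch to the next."""
    assignments = []
    target_cycles = {r: cycle(TARGETS) for r in reviewers}
    reviewer_cycle = cycle(reviewers)
    for batch in batches:
        reviewer = next(reviewer_cycle)
        assignments.append((batch, reviewer, next(target_cycles[reviewer])))
    return assignments

batches = [f"batch_{i:02d}" for i in range(6)]
for batch, reviewer, target in assign_batches(batches, ["alice", "bob"]):
    print(batch, reviewer, target)
# batch_00 alice responsiveness
# batch_01 bob responsiveness
# batch_02 alice privilege
# ...
```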

Free-Form Exploration

Combining aspects of early case assessment and linear review is one form of exploration that is known to offer both a satisfying experience and effective results. While performing linear review, the ability to suspend the document being reviewed and jump to other similar documents and topics gives the reviewer a cognitive stimulus that improves knowledge acquisition. Doing so offers an opportunity for the reviewer to learn facts of the case that would normally be difficult to obtain, and to approach the knowledge level of a senior litigator on the case. After all, we depend on knowledge of the matter to guide reviewers, so attempts to increase their knowledge of the case can only be helpful. Also, during free-form exploration a reviewer may stumble on an otherwise difficult-to-obtain case fact, and the sheer joy of finding something valuable is itself rewarding.
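
As an illustration of what "jump to similar documents" might look like under the hood, here is a minimal sketch using TF-IDF cosine similarity. The use of scikit-learn and the sample documents are assumptions for illustration, not a description of any particular review tool's internals:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def most_similar(docs, query_index, top_n=3):
    """Return indices of the documents most similar to docs[query_index]."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    scores = cosine_similarity(tfidf[query_index], tfidf).ravel()
    scores[query_index] = -1.0  # exclude the query document itself
    return scores.argsort()[::-1][:top_n]

docs = [
    "Quarterly revenue forecast for the energy trading desk",
    "Forecast revisions for energy trading, Q3",
    "Lunch menu for the cafeteria",
    "Energy desk trading limits and forecast assumptions",
]
print(most_similar(docs, query_index=0))
# e.g. [1 3 2] -- the energy-trading documents rank ahead of the lunch menu
```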

Expanding the Work Product

Besides simply judging the review disposition of a document, generating higher-value output, such as document summaries, critical snippets, and document metadata that contribute to the assessment, can both reduce the boredom of the current reviewer and contribute valuable insights to other reviewers. As noted earlier, aids like these can be immensely helpful in your review process.
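
One way to picture this expanded work product is as a review record that carries more than a bare disposition code. The fields in this sketch are illustrative, not any tool's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewRecord:
    """A reviewer's output for one document: the disposition plus the
    higher-value aids described above."""
    doc_id: str
    disposition: str                 # e.g. "responsive", "privileged"
    summary: str = ""                # one- or two-sentence gist of the document
    key_snippets: list[str] = field(default_factory=list)  # passages supporting the call
    tags: list[str] = field(default_factory=list)          # issue codes, custodians, dates

record = ReviewRecord(
    doc_id="DOC-004217",
    disposition="responsive",
    summary="Email thread discussing revised delivery terms with the vendor.",
    key_snippets=["we agreed to move the delivery date to March 15"],
    tags=["contract-terms", "vendor-acme"],
)
```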

Review Technologies

Of course, fundamentally changing linear review with specific technologies that radically alter the review workflow is an approach worth considering. While offering such aids, it must be remembered that human judgment is still needed, and the process must both increase reviewers' knowledge and preserve their ability to apply judgment. We will examine these technologies in an upcoming post.
