EDRM Publishes TAR Guidelines

On February 7, EDRM released its Technology Assisted Review (TAR) Guidelines. The Guidelines are the first major work product from EDRM begun after Tom Gelbmann and I handed the reins to John Rabiej and his colleagues at the Bolch Judicial Institute of Duke Law School.

The goal for the Guidelines was simple: provide a wide audience with an authoritative and clear understanding of why to use TAR in litigation and how TAR works.

Getting there was a major undertaking. More than 50 volunteer judges, practitioners, and e-discovery experts worked for over two years to prepare the Guidelines. To make the project more manageable, EDRM set up three drafting teams. Leading the teams were Matt Poplawski of Winston & Strawn; Mike Quartararo of eDPM Advisory Services; and Adam Strayer of Paul, Weiss, Rifkind, Wharton & Garrison. Tim Opsitnick of TCDI and U.S. Magistrate Judge James Francis IV (Southern District of New York, Ret.) took on the challenge of editing the Guidelines as well as incorporating public comments. I assisted John Rabiej, deputy director of the Bolch Judicial Institute, and Jim Waldron, director of EDRM, as they directed the overall process.

Any document prepared by this many people – bringing as they do a wide range of experience, perspectives, and opinions – will be a compromise document. The Guidelines are no exception. We had some drama along the way; I doubt we could have avoided that entirely. In the end, however, the team assembled a set of guidelines that should have something for everyone – something to learn, something to like, and, of course, something to take issue with.

It is important to keep in mind that these Guidelines are just that: guidelines. They are meant to set a floor, a threshold anyone attempting to use TAR should be able to cross. They are not a ceiling. I encourage everyone deploying TAR capabilities to push farther than the uses discussed in the Guidelines. Go past the workflows discussed in Chapter Two. Push beyond the alternative tasks described in Chapter Three. These are starting points. They are not end points.

The Guidelines are 50 pages long. Don’t be put off by the length. The bulk of the contents is organized into four chapters:

  1. Defining Technology Assisted Review
  2. TAR Workflow
  3. Alternative Tasks for Applying TAR
  4. Factors to Consider When Deciding Whether to Use TAR

You can jump directly to a specific chapter if you like; each stands on its own. I suggest you instead take the time to start at the beginning and work through the document systematically. That, I think, will help you get the greatest value from the Guidelines.

Please send questions and comments to John Rabiej, Jim Waldron, and me at [email protected].

Here is the detailed structure of the Guidelines:

Foreword
Acknowledgements
Preface

Chapter One: Defining Technology Assisted Review
  A. Introduction
  B. The TAR Process
    1. Assembling the TAR Team
    2. Collection and Analysis
    3. “Training” the Computer Using Software to Predict Relevancy
    4. Quality Control and Testing
    5. Training Completion and Validation

Chapter Two: TAR Workflow
  A. Introduction
  B. Foundational Concepts & Understandings
    1. Key TAR Terms
    2. TAR Software: Algorithms
      a) Feature Extraction Algorithms
      b) Supervised Machine Learning Algorithms (Supervised Learning Methods)
      c) Varying Industry Terminology Related to Various Supervised Machine Learning Methods
  C. The TAR Workflow
    1. Identify the Team to Engage in the TAR Workflow
    2. Select the Service Provider and Software
    3. Identify, Analyze, and Prepare the TAR Set
      a) Timing and the TAR Workflow
    4. The Human Reviewer Prepares for Engaging in TAR
    5. Human Reviewer Trains the Computer to Detect Relevancy, and the Computer Classifies the TAR Set Documents
    6. Implement Review Quality Control Measures During Training
      a) Decision Log
      b) Sampling
      c) Reports
    7. Determine When Computer Training Is Complete and Validate
      a) Training Completion
        (i) Tracking of Sample-Based Effectiveness Estimates
        (ii) Observing Sparseness of Relevant Documents Returned by the Computer During Active Learning
        (iii) Comparison of Predictive Model Behaviors
        (iv) Comparing Typical TAR 1.0 and TAR 2.0 Training Completion Processes
      b) Validation
    8. Final Identification, Review, and Production of the Predicted Relevant Set
    9. Workflow Issue Spotting
      a) Extremely Low or High Richness of the TAR Set
      b) Supplemental Collections
      c) Changing Scope of Relevancy
      d) Unreasonable Training Results

Chapter Three: Alternative Tasks for Applying TAR
  A. Introduction
  B. Early Case Assessment/Investigation
  C. Prioritization for Review
  D. Categorization (By Issues, for Confidentiality or Privacy)
  E. Privilege Review
  F. Quality Control and Quality Assurance
  G. Review of Incoming Productions
  H. Deposition/Trial Preparation
  I. Information Governance and Data Disposition
    1. Records Management Baseline
    2. Assessing Legacy Data – Data Disposition Reviews
    3. Isolating Sensitive Content – PII/PHI/Medical/Privacy/Confidential/Privileged/Proprietary Data

Chapter Four: Factors to Consider When Deciding Whether to Use TAR
  A. Introduction
  B. Should the Legal Team Use TAR?
    1. Are the Documents Appropriate for TAR?
    2. Are the Costs and Use Reasonable?
    3. Is the Timing of the Task/Matter Schedule Feasible?
    4. Is the Opposing Party Reasonable and Cooperative?
    5. Are There Jurisdictional Considerations That Influence the Decision?
  C. The Cost of TAR vs. Traditional Linear Review
  D. The Cost of TAR and Proportionality

Appendix: Key Terms
Thank You to Our Sponsors!

George Socha
Senior Vice President of Brand Awareness at Reveal
George Socha is the Senior Vice President of Brand Awareness at Reveal, where he promotes brand awareness, helps guide development of the product roadmap, and consults with customers on the effective deployment of legal technology.

Named an “E-Discovery Trailblazer” by The American Lawyer, George has assisted corporate, law firm, and government clients with all facets of electronic discovery, including information governance, both domestically and globally. He has served clients in a variety of industries, including pharmaceutical, energy, retail, banking, and technology. A renowned industry thought leader, he has authored more than 50 articles and spoken at more than 200 engagements around the world on a variety of e-discovery topics. He has also provided expert testimony more than 20 times.

Co-founder of the Electronic Discovery Reference Model (EDRM), a framework that outlines the standards for the recovery and discovery of digital data, and the Information Governance Reference Model (IGRM), a similar framework specific to information management, George is skilled at developing and implementing electronic discovery strategies and managing electronic discovery processes.