
Microsoft 365 eDiscovery Practical Resources for Law Firms

During our recent webinar, “How Law Firms Can Support Their Clients Who Use Microsoft 365”, we promised attendees some practical resources and an overview of Microsoft 365 (M365) plans and licensing options that would be useful for law firm personnel.

The recording of the webinar can be accessed here: “How Law Firms Can Support Their Clients Who Use Microsoft 365.”

M365 Plans and Licenses

M365 offers plans for Enterprise, Government, and Education. The features vary depending upon license structure and organization type. For the purposes of our discussion, we will focus on the differences between “E3” and “E5” licenses. Please note that licensing plans, availability, and functionality vary and will be modified over time.

Compliance Functions in E3 vs E5

  • Core eDiscovery offers hold, search, and export features. It can be accessed with an E3 license.
  • Advanced eDiscovery adds hold notifications, review, and redaction to the above. It can be accessed with an E5 license or an E3 license with a “buy-up” SKU.

Resources from M365 Compliance Documentation

In most instances, law firm personnel do not have access to M365 itself and are entrusted with guiding and advising client personnel as they perform various tasks within M365. Microsoft has very detailed documentation for M365 compliance. Access documentation on eDiscovery in M365.

Below are some of the most common tasks and questions that arise while performing eDiscovery in a given matter. All images below are from Microsoft’s website.

  1. Creating searches: M365 allows you to run a content search and displays the estimated number of search results in the search statistics. The results can be previewed or exported to a local computer. View the documentation.
    Search Query
  2. Reviewing and downloading search statistics: This is very useful when case teams are trying to get a sense of hit counts; the results can be downloaded to a CSV file and shared with counsel. Note that Microsoft limits the keyword list of a search query to 20 rows. View the documentation.
    Search Statistics
  3. Exporting search results: The search results can be exported as a PST file or as individual messages for email. For OneDrive and SharePoint content, copies of native files are exported. M365 generates a clean log of what is exported: the export includes a Results.csv file containing information about every item that is exported, along with an XML manifest file containing information about every search result (a consistency check between these two files is sketched after this list). View the documentation.
    Export Results
  4. Search limitations within M365: Documentation on the limits that apply to content search can be found at https://docs.microsoft.com/en-us/microsoft-365/compliance/limits-for-content-search?view=o365-worldwide
  5. Partially indexed items within M365: Partially indexed items are Exchange mailbox items and documents on SharePoint and OneDrive for Business sites that, for some reason, were not completely indexed for search. A detailed overview can be found at https://docs.microsoft.com/en-us/microsoft-365/compliance/partially-indexed-items-in-content-search?view=o365-worldwide
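
Because the export ships with both a Results.csv log and an XML manifest, a quick consistency check between the two is easy to script. Below is a minimal Python sketch of that idea; the column name and ID attribute are illustrative assumptions, so confirm them against the files M365 actually generates for your export.

```python
# Hypothetical sketch: cross-check an M365 export's Results.csv against its
# XML manifest. "Item Identity" and the "Id" attribute are assumed names;
# verify them against the actual files produced for your export.
import csv
import xml.etree.ElementTree as ET

def load_csv_ids(results_csv_path, id_column="Item Identity"):
    # Collect item identifiers from the Results.csv log.
    with open(results_csv_path, newline="", encoding="utf-8-sig") as f:
        return {row[id_column] for row in csv.DictReader(f) if row.get(id_column)}

def load_manifest_ids(manifest_xml_path, id_attr="Id"):
    # Collect item identifiers from the XML manifest.
    tree = ET.parse(manifest_xml_path)
    return {el.get(id_attr) for el in tree.iter() if el.get(id_attr)}

csv_ids = load_csv_ids("Results.csv")
manifest_ids = load_manifest_ids("manifest.xml")
print(f"{len(csv_ids)} items logged in Results.csv")
print(f"{len(manifest_ids)} items listed in the manifest")
print(f"{len(manifest_ids - csv_ids)} manifest items missing from Results.csv")
```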

Combining Processes & Technology to Make eDiscovery a Standard Business Procedure

eDiscovery has become a well-established industry, having been used in litigation for nearly two decades now. And whether we want to admit it or not, nearly every case filed has a potential eDiscovery element. So, what do you do if you’re not a Certified E-Discovery Specialist (CEDS), or if this is all a new language for your organization, lawyers, or IT department? Hint: Take a deep breath and start identifying people who can help, processes that can be easily implemented, and technology that ties it all together.

The knowledge and methods surrounding eDiscovery have grown tremendously through the years, yet many companies and law firms still treat eDiscovery like a fire drill. Running eDiscovery as a reactionary process leads to disruption of the business, increased risks and higher costs. Plus, when it’s not planned for in advance, eDiscovery will likely be handled differently by different people, for each individual matter. That is, the wheel is continually reinvented – the opposite of efficiency.

The good news is that eDiscovery doesn’t have to be that way. With just a little planning, it can become a standardized, efficient business process. All you need are the right people, the proper workflows and top-of-the-line technology to support it.

The right people…

Contact a reputable eDiscovery company or consultant or request some guidance from your trusted outside counsel. If you are outside counsel, don’t sweat it… there are lots of resources to tap that will help you accomplish your goals – but remember, don’t agree to any deadlines with the opposing side or the court without having some guidance from someone who has done this before – preferably a few thousand times. The number of custodians (i.e., people or things that hold potentially responsive data), amount of data per custodian, ‘other’ sources where that data may reside, electronically stored information (ESI) protocols and other considerations can all become whammies if you’re ‘faking it till you make it.’ We frequently get on calls to help people navigate the eDiscovery rapids. But, just like whitewater, you wouldn’t embark without your guide – and the proper raft.

…combined with the right processes

By creating a detailed, written plan that outlines the steps to take when facing litigation, attorneys can ensure that they follow a consistent procedure for every case. An orderly plan will significantly reduce the stress and chaos associated with the ad hoc approach to eDiscovery. It will also designate specific employees to oversee the process, which will provide clear leadership and consistency for each matter.

Attorney document review is traditionally the most expensive and time-consuming element of eDiscovery, and it can be compounded by how the review is handled. When there is a large amount to review and/or the deadline is coming up fast, it’s common practice to use contract attorneys to assist with the review. However, many times, this decision is made too late.

One of the best ways to increase the efficiency and accuracy and decrease the cost of the attorney document review process is to develop a review standard across all legal matters. Determine what level of case or what number of documents will warrant additional help, and then have a designated team of the same review attorneys – either within your company or outsourced – handle this process on a regular basis.

Doing this will exponentially reduce the time and money it normally takes to find contract attorneys, perform conflict checks, train them on the case and get the review going. In a rush situation, the costs can be compounded. Using the same team of review attorneys also builds institutional knowledge for clients who are involved in frequent litigation, which further improves accuracy and efficiency.

Another way to streamline processes is to recycle processed data and both technical and attorney work product. When the same files and custodians are involved in multiple legal matters, the traditional approach has been to perform separate collections for each case and have different reviews of the same material. Instead, develop a data repository that allows you to reuse relevant work product across different matters – saving a great deal of time and money and simultaneously making the eDiscovery process smoother and more accurate.

…and the right technology tools

If you’re past the point of “ESI for beginners” and are looking to streamline the eDiscovery process, consider using applications that manage legal holds, data preservation, collections, processing and searching – all in one place. Having a single platform to handle these tasks will help automate processes, increase efficiency and cut costs while ensuring you meet necessary compliance obligations. Look for a solution that allows you to easily move your collected and culled data into the review platform of your choice, ensuring you remain in control of your data.

Technology assisted review (TAR) and advanced data analytics tools can be great assets in the eDiscovery process, as they help to dramatically reduce the time and effort required to review data. It is important, however, to understand when TAR may not be the best solution. Using TAR on some types of data, such as spreadsheets, can actually make the review process more difficult and less accurate. By understanding what types of data you’ll typically be handling – whether it’s documents and emails or primarily spreadsheets – you can decide when it makes sense to use TAR and when it doesn’t.

By combining the right tech tools with expert workflows, and backing it all with experienced personnel, eDiscovery can become a regular practice of business. Transforming the chaotic, spur-of-the-moment approach to a consistent, efficient method will result in less confusion and more order for each legal matter.

Who’s Afraid of Project Management Tools?

As part of the Summer School Webinar Series, eDPM Advisory Services recently teamed up with ACEDS to review the project management landscape. Our objective: to categorize and identify some of the project management tools on the market that may be used to help automate e-discovery processes. Although we identified several potential tools, the central message is that organizations need to first identify their process and then seek out a tool that may automate those processes. Two interesting data points arose based on poll questions asked during the presentation.

First, we asked how many in the audience had developed written processes for each stage of an e-discovery project. The results of the poll question look like this:

The results demonstrate that roughly 65% of attendees had some formal written process across the e-discovery spectrum. This is good news because I think that if this question had been asked three or five years ago, the numbers would have been much lower. Interestingly, the stage in which formal processes were least prevalent (at 41%) was Identification and Preservation. This is troubling because Identification and Preservation is the stage in which we identify the custodians and data sources, implement a litigation hold, and ensure we have preserved ESI in a defensible manner.

Second, we asked attendees what if any tools they were using to manage their e-discovery projects. The results of that question look like this:

The good news is that the overwhelming majority are using some form of project management tool. It remains troubling, however, that spreadsheets appear to remain the tool of choice for the majority of people. This could reflect a lack of awareness of the other available tools, an indication that those tools are not meeting the market need, or perhaps that they are unaffordable or difficult to use. But what is perhaps even more troubling is that over 5% of attendees indicated that they are using nothing at all.

One goal of the recent webinar was to introduce attendees to some of the project management tools that are available. In an effort to build on these results and get a sample from a wider audience, we are going to begin polling the e-discovery industry to gain more insight. We are going to share these results once we have them.

Please click here to take the poll

57 Ways to Leave Your (Linear) Lover – A Case Study on Using Insight Predict to Find Relevant Documents Without SME Training

A Big Four accounting firm with offices in Tokyo recently asked Catalyst to demonstrate the effectiveness of Insight Predict, technology assisted review (TAR) based on continuous active learning (CAL), on a Japanese language investigation. They gave us a test population of about 5,000 documents that had already been tagged for relevance; their linear review had found only 55 relevant documents.

We offered to run a free simulation designed to show how quickly Predict would have found those same relevant documents. The simulation would be blind (Predict would not know how the documents were tagged until it presented its ranked list.) That way we could simulate an actual Predict review using CAL.

We structured a simulated Predict review to be as realistic as possible, looking at the investigation from every conceivable angle. The results were outstanding; we couldn’t believe what we saw. So we ran it again, using a different starting seed. And again. And again. In the end, we ran 57 different simulations: one starting from each of the 55 relevant documents, one from a non-relevant seed, and one from a synthetic seed.

Regardless of the starting point, Predict was able to locate 100% of the relevant documents after reviewing only a fraction of the collection. You won’t believe your eyes either.

Complicating Factors

Everything about this investigation would normally be challenging for a TAR project.

To begin with, the entire collection was in Japanese. Like other Asian languages, Japanese documents require special attention for proper indexing, which is the first step in feature extraction for a technology assisted review. At Catalyst, we incorporate semantic tokenization of the CJK languages directly into our indexing and feature extraction process. The value of that approach for a TAR project cannot be overstated.
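
To see why tokenization matters, consider the short Python sketch below. It is illustrative only: janome is just one of several CJK morphological analyzers (not necessarily what Catalyst uses), the sample sentence is ours, and the token output shown in the comments is approximate.

```python
# Japanese text has no spaces between words, so whitespace "tokenization"
# returns the entire sentence as a single token -- useless for feature
# extraction. A morphological analyzer splits it into meaningful terms.
from janome.tokenizer import Tokenizer  # pip install janome

text = "技術支援レビューは文書レビューを効率化します"

print(text.split())  # one giant token: the whole sentence

tokens = [t.surface for t in Tokenizer().tokenize(text)]
print(tokens)        # individual terms, e.g. ['技術', '支援', 'レビュー', ...]
```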

To complicate matters further, the collection itself was relatively small and sparse. There were only 4,662 coded documents in the collection and, of those, only 55 were considered responsive to the investigation. That puts overall richness at only 1.2%.

The following example illustrates why richness and collection size together compound the difficulty of a project. Imagine a collection of 100,000 documents that is 10% rich. That means that there are 10,000 responsive documents. That’s a large enough set that a machine learning-based TAR engine will likely do a good job finding most of those 10,000 documents.

Next, imagine another collection of one million documents that is 1% rich. That means that there are also 10,000 responsive documents. That is still a sizeable enough set of responsive documents to be able to train and use TAR machinery, even though richness is only 1%.

Now, however, imagine a collection of only 100 documents that is 1% rich. That means that only one document is responsive, which means that either you’ve found it or you haven’t. There are no other responsive documents that, through training of a machine learning algorithm, can lead you to that one document. So a 1% rich million-document collection is a very different creature than a 1% rich 100-document collection. These are extreme examples, but they illustrate the point: small collections are difficult, low richness collections are difficult, and small, low richness collections are extremely difficult.
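
The arithmetic behind these three examples is simple enough to state directly:

```python
# Responsive-document counts for the three hypothetical collections above.
scenarios = [
    ("100,000 docs at 10% richness", 100_000, 0.10),
    ("1,000,000 docs at 1% richness", 1_000_000, 0.01),
    ("100 docs at 1% richness", 100, 0.01),
]
for label, size, richness in scenarios:
    print(f"{label}: {int(size * richness):,} responsive documents")
# -> 10,000 / 10,000 / 1
```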

Small collections like these are nearly impossible for traditional TAR systems because it is difficult to find seed documents for training. In contrast, Predict can start the training with the very first coded document. This means that Predict can quickly locate and prioritize responsive documents for review, even in small document sets with low richness.

Compounding these constraints, nearly 20% (10 out of 55) of the responsive documents were hard copy Japanese documents that had to be OCR’d. As a general matter, it can be somewhat difficult to effectively OCR Japanese script because of the size of the character set, the complexity of individual characters, and the similarities between the Kanji character structures. Poor OCR will impair feature extraction which will, in turn, diminish the value of a document for training purposes, making it much more difficult to find responsive documents, let alone find them all.

Simulation Protocol

To test Predict, we implemented a fairly standard simulation protocol—one that we used for NIST’s TREC program and often use to let prospective clients see how well Predict might work on their own projects. After making the text of the documents available to be ingested into Predict, we simulate a Predict prioritized review using the existing coding judgments in a just-in-time manner, and we prepare a gain curve to show how quickly responsive documents are located.

Since this collection was already loaded into our discovery platform, Insight Discovery, we had everything we needed to get the simulation underway: document identification numbers (Bates numbers); extracted text and images for the OCR’d documents; and responsiveness judgments. Otherwise, the client simply could have provided that same information in a load file.

With the data loaded, we simulated different Predict reviews of the entire collection to see how quickly responsive documents would be located using different starting seeds. To be sure, we didn’t need to do this just to convince the client that Predict is effective; we wanted to do our own little scientific experimentation as well.

Here is how the simulation worked:

  1. In each experiment, we began by choosing a single seed document to initiate the Predict ranking, to which we applied the client’s responsiveness judgment. We then ranked the documents based on that single seed.[1]
  2. Once the initial ranking was complete, we selected the top twenty documents for coding in ranked order (with their actual relevance judgments hidden from Predict).[2]
  3. We next applied the proper responsiveness judgments to those twenty documents to simulate the review of a batch of documents, and then we submitted all of those coded documents to initiate another Predict ranking.

We continued this process until we had found all the responsive documents in the course of each review.
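
In code terms, the protocol is a simple loop. The Python sketch below is our own illustration, not Catalyst’s implementation: `rank_fn` stands in for a Predict-style ranking engine, and `truth` holds the client’s pre-existing responsiveness judgments.

```python
# Minimal sketch of the simulated CAL review loop described above.
# rank_fn(coded) -> list of all doc IDs, best-first, given the coded seeds;
# truth maps each doc ID to True (responsive) or False. Both are stand-ins.
BATCH_SIZE = 20  # batch size used in the simulations (see footnote 2)

def simulate_cal_review(truth, rank_fn, first_seed):
    coded = {first_seed: truth[first_seed]}      # step 1: single starting seed
    total_responsive = sum(truth.values())
    gain_curve = []                              # (docs reviewed, responsive found)
    while sum(coded.values()) < total_responsive:
        ranking = rank_fn(coded)                 # re-rank on everything coded so far
        batch = [d for d in ranking if d not in coded][:BATCH_SIZE]  # step 2
        for doc in batch:
            coded[doc] = truth[doc]              # step 3: apply the real judgments
        gain_curve.append((len(coded), sum(coded.values())))
    return gain_curve
```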

First Simulation

We used a relevant document to start the CAL process for our first simulation. In this case, we selected a relevant document randomly to be used as a starting seed. We then let Predict rank the remaining documents based on the initial seed and present the 20 highest-ranked documents for review. We gave Predict the tagged values (relevant or not) for these documents and ran a second ranking (now based on 21 seeds). We continued the process until we ran out of documents.

Figure 1

As is our practice, we used a gain curve to uniformly evaluate the results of the simulated reviews. A gain curve is helpful because it allows you to easily visualize the effectiveness of every review. On the horizontal x-axis, we plot the number of documents reviewed at every point in the simulation. On the vertical y-axis, we plot the number of documents coded as responsive at each of those points. The faster the gain curve rises, the better, because that means you are finding more responsive documents more quickly, and with less review effort.

The black diagonal line shows how a linear review would proceed, with the review team finding 50% of the relevant documents after reviewing 50% of the total document population and 100% after reviewing 100% of the total.

The red line in Figure 1 shows the results of the first simulation, using the single initial random seed as a starting point (compared to the black line, representing linear review). Predict quickly prioritized 33 responsive documents, achieving a 60% recall upon review of only 92 documents.

While Predict efficiency diminished somewhat as the responsive population was depleted, and the relative proportion of OCR documents was increasing, Predict was able to prioritize fully 100% of the responsive documents within the first 1,491 documents reviewed (32% of the entire collection). That represents a savings of 68% of the time and effort that would have been required for a linear review.
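
To make the construction concrete, here is a small plotting sketch. Only the two checkpoints reported above come from the study (33 responsive after 92 documents reviewed, and all 55 after 1,491); the intermediate points are invented filler to give the curve a plausible shape.

```python
# Illustrative gain curve for the first simulation vs. a linear review.
# Only (92, 33) and (1491, 55) come from the write-up; the other points
# are made-up interpolations, not the study's data.
import matplotlib.pyplot as plt

TOTAL_DOCS, TOTAL_RESPONSIVE = 4662, 55

reviewed = [0, 20, 92, 300, 700, 1491]
found = [0, 12, 33, 45, 51, 55]

plt.plot(reviewed, found, "r-", label="Predict (CAL) review")
plt.plot([0, TOTAL_DOCS], [0, TOTAL_RESPONSIVE], "k-", label="Linear review")
plt.xlabel("Documents reviewed")
plt.ylabel("Responsive documents found")
plt.title("Gain curve (illustrative)")
plt.legend()
plt.show()
```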

Second Test

The results from the first random seed looked so good that we decided to try a second random seed, to make sure it wasn’t pure happenstance. Those results were just as good.

Figure 2

In Figure 2, the gray line reflects the results of the second simulation, starting with the second random seed. The Predict results were virtually indistinguishable through 55% recall, but were slightly less efficient at 60% recall (requiring the review of 168 documents). The overall Predict efficiency recovered almost completely, however, prioritizing 100% of the responsive documents within the first 1,507 documents (32.3%) reviewed in the collection—a savings again of nearly 68% compared with linear review.

Third Simulation

The results from the first and second runs were so good that we decided to continue experimenting. In the next round, we wanted to see what would happen if we used a lower-ranked (more difficult for the algorithm to find) seed to start the process. To accomplish that, we chose the lowest-ranked relevant document found by Predict in the first two rounds as the starting seed; this turned out to be an OCR’d document, likely the most unique responsive document in the collection. To our surprise, Predict was just about as effective starting with this lowly-ranked seed as it had been before. Take a look and see for yourself.[3]

Figure 3

The yellow line in Figure 3 shows what happened when we started with the last document located during the first two simulations. The impact of starting with a document that, while responsive, differs significantly from most other responsive documents is obvious. After reviewing the first 72 documents prioritized by Predict, only one responsive document had been found. However, the ability of Predict to quickly recover efficiency when pockets of responsive documents are found is obvious as well. Recall reached 60% upon review of just 179 documents — only slightly more than what was required in the second simulation. And then the Predict efficiency surpassed both previous simulations, achieving 100% recall upon review of only 1,333 documents—28.6% of the collection, and a savings of 71.4% against a linear review.

Fourth Round

We couldn’t stop here. For the next round, we decided to use a random non-responsive document as the starting point. To our surprise, the results were just as good as the earlier rounds. Figure 4 illustrates these results.

Figure 4

Fifth Round

We decided to make one more simulation run just to see what happened. For this final starting point, we created a synthetic responsive Japanese document. We composited five responsive documents selected at random into a single synthetic seed, started there, and achieved much the same results.[4]

Figure 5

Sixth through 57th Rounds

The consistency of these five results seemed really interesting, so for the heck of it we ran simulations using every remaining responsive document in the collection as a starting point. So, although it wasn’t our plan at the outset, we ultimately simulated 57 Predict reviews across the collection, each from a different starting point (all 55 relevant documents, one non-relevant document, and one synthetic seed).

You can see for yourself from Figure 6 that the results from every simulated starting point were, for the most part, pretty consistent. Regardless of the starting point, once Predict was able to locate a pocket of responsive documents, the gain curve jumped almost straight up until about 60% of the responsive documents had been located.

Gordon Cormack once analogized this ability of a continuous active learning tool to a bloodhound—all you need to do is give Predict the “scent” of a responsive document, and it tracks them down. And in every case, Predict was able to find every one of the responsive documents without having to review even one-third of the collection.

Here is a graph showing the results for all of our simulations:

Figure 6

And here are the specifics of each simulation at recall levels of 60%, 80%, and 100%.

DocID: Percentage of Collection Reviewed to Achieve 60% / 80% / 100% Recall
27096 4% 15% 29%
34000 2% 11% 32%
35004 4% 12% 32%
83204 3% 11% 32%
86395 4% 14% 32%
93664 2% 13% 32%
98263 3% 11% 29%
98391 2% 13% 32%
98945 3% 11% 32%
99708 4% 12% 32%
99773 2% 10% 32%
99812 2% 11% 32%
99883 2% 12% 32%
99918 5% 14% 32%
100443 4% 12% 32%
100876 3% 13% 32%
101211 4% 12% 32%
101705 3% 14% 31%
101829 3% 11% 31%
102395 3% 13% 32%
102432 4% 14% 32%
102499 2% 9% 32%
102705 3% 14% 32%
103803 4% 12% 32%
105017 2% 14% 32%
105799 3% 13% 32%
106993 2% 12% 30%
107315 2% 14% 32%
109883 4% 12% 32%
110350 3% 15% 30%
112905 4% 14% 32%
117037 4% 12% 32%
118353 4% 14% 32%
119216 4% 15% 32%
119258 2% 12% 32%
119362 2% 10% 32%
121859 3% 11% 32%
122000 4% 15% 29%
122380 5% 11% 30%
123626 3% 10% 32%
123887 3% 11% 32%
124517 3% 14% 32%
125901 3% 14% 32%
130558 2% 14% 32%
131255 4% 10% 32%
132604 2% 10% 32%
136819 3% 14% 29%
140265 4% 13% 32%
140543 4% 12% 32%
147820 3% 14% 32%
154413 4% 13% 32%
238202 4% 12% 32%
242068 4% 12% 32%
245309 4% 16% 32%
248571 4% 12% 32%
NR (non-relevant seed) 3% 14% 32%
SS (synthetic seed) 2% 13% 31%
Min 2% 9% 29%
Max 5% 16% 32%
Avg 3% 13% 32%

Table 1

As you can see, the overall results mirrored our earlier experiments, which makes a powerful statement about the ease of using a CAL process. Special search techniques and different training starts seemed to make very little difference in these experiments. We saw this in our TREC 2016 experiments as well: we tested different, and minimalist, methods of starting the seeding process (e.g., one quick search, limited searching) and found little difference in the results. See our report and study here.

What did we learn from the simulations?

One of the primary benefits of a simulation as opposed to running CAL on a live matter is that you can pretty much vary and control every aspect of your review to see how the system and results change when the parameters of the review change. In this case, we varied the starting point, but kept every other aspect of the simulated review constant. That way, we could compare multiple simulations against each other and determine where there may be differences, and whether one approach is better than any other.

The important takeaway is that the review order in each of these experiments is exactly the review order the client would have achieved had they reviewed these documents in Predict (at a standard review rate of about one document per minute) and made the same responsiveness decisions on the same documents.

Averaged across all the experiments we did, Predict was able to find just over half of all responsive documents (50% recall) after reviewing only 89 documents (1.9% of the collection; 98.1% savings). Predict achieved 75% recall after reviewing only 534 documents (11.5% of the collection; 88.5% savings). And finally, Predict achieved an otherwise unheard-of complete 100% recall on this collection after reviewing only 1,450 documents (31.1% of the collection; 68.9% savings).
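
Those percentages follow directly from the collection size: the documents reviewed divided by 4,662 gives the fraction of the collection, and one minus that fraction is the savings.

```python
# Recall/savings arithmetic for the averaged results above.
COLLECTION_SIZE = 4662

for recall, reviewed in [("50%", 89), ("75%", 534), ("100%", 1450)]:
    fraction = reviewed / COLLECTION_SIZE
    print(f"{recall} recall: {reviewed} docs reviewed = "
          f"{fraction:.1%} of collection, {1 - fraction:.1%} savings")
# -> 1.9% / 98.1%, 11.5% / 88.5%, 31.1% / 68.9%
```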

Furthermore, Predict is robust to differences in initial starting conditions. Some starting conditions are slightly better than others. In one case, we achieved 50% recall after only 65 documents (1.4% of the collection; 98.6% savings), whereas in another it took 163 documents to reach 50% recall (3.5% of the collection; 96.5% savings). However, the latter example achieved 100% recall after only 1,352 documents (29% of the collection; 71% savings), whereas the earlier example achieved 100% recall after 1,507 documents (32.3% of the collection; 67.7% savings).

Overall, the key is not to focus on minute differences, because all these results are within a relatively narrow performance range and follow the same general trend.

Other key takeaways:

  1. Predict’s implementation of CAL works extremely well on low richness collections. With only 55 relevant documents out of nearly 5,000, finding the next relevant document would typically be difficult, but Predict excelled despite the low richness.
  2. This case involved OCR’d documents. Some people have suggested that TAR might not work well with OCR’d text but that has not been our experience. Predict worked well with this population.
  3. All documents were in Japanese. We have written about our success in ranking non-English documents but some have expressed doubt. This study again illustrates the effectiveness of Predict’s analytical tools when the documents are properly tokenized.

These experiments show that there are real, significant savings to using Predict, no matter the size, richness or language of the document collection.

Conclusion

Paul Simon, that great legal technologist, knew long ago that it was time to put an end to keywords and linear review:

The problem is all inside your head, she said to me.
The answer is easy if you take it logically.
I’d like to help you as we become keyword free.
There must be fifty-seven ways to leave your (linear) lover.

She said it’s really not my habit to intrude.
But this wasteful spending means your clients are getting screwed.
So I repeat myself, at the risk of being cruel.
There must be fifty-seven ways to leave your linear lover,
Fifty-seven ways to leave your (linear) lover.

Just slip out the back, Peck, make a new plan, Ralph.
Don’t need to be coy, Gord, just listen to me.
Hop on the bus, Craig, don’t need to discuss much.
Just drop the keywords, Mary, and get yourself (linear) free.

She said it grieves me so to see you in such pain.
When you drop those keywords I know you’ll smile again.
I said, linear review is as expensive as can be.
There must be fifty-seven ways to leave your (linear) lover.

Just slip out the back, Shira, make a new plan, Gord.
Don’t need to be coy, Joy, just listen to me.
Hop on the bus, Tom, don’t need to discuss much.
Just drop the keywords, Gayle, and get yourself (linear) free.

She said, why don’t we both just sleep on it tonight.
And I believe, in the morning you’ll begin to see the light.
When the review team sent their bill I realized she probably was right.
There must be fifty-seven ways to leave your (linear) lover.
Fifty-seven ways to leave your (linear) lover.

Just slip out the back, Maura, make a new plan, Fatch.
Don’t need to be coy, Andrew, just listen to me.
Hop on the bus, Michael, don’t need to discuss much.
Just drop off the keywords, Herb, and get yourself (linear) free.


[1] We chose to initiate the ranking using a single document simply to see how well Predict would perform in this investigation from the absolute minimum starting point. In reality, a Predict simulation can use as many responsive and non-responsive documents as desired. In most cases, we use the same starting point (i.e., the exact same documents and judgments) used by the client to initiate the original review that is being simulated.

[2] We chose to review twenty documents at a time because that is what we typically recommend for batch sizes in an investigation, to take maximum advantage of the ability of Predict to re-rank several times an hour.

[3] It is interesting to note that Predict did not find relevant documents as quickly using a non-relevant starting seed, which isn’t surprising. However, it caught up with the earlier simulation by the 70% mark and proved just as effective.

[4] Compositing the text of five responsive documents into one is a reasonable experiment to run, but it’s not what most people think of when they hear “synthetic seed.” They imagine a lawyer writing something up, in his or her own words, about what they expect to find, and then using that document to start the training. Using the literal text of five documents already deemed responsive is not the same thing, but it made for an interesting experiment.

Seven Reasons to Create a Culture of Ongoing Learning with e-Discovery Technology Tools

Litigation teams often invest a significant amount of time, energy and money to identify the ideal e-discovery software product or related technology solution to meet their needs. They make a final decision, select the tool and begin the implementation, eager to realize a return on their investment.

Yet, far too often, there is a crucial final element that is overlooked: the importance of a sustained strategy for maximizing effective use of the tools by their end users. Unfortunately, unless your staff is well-trained on using that new software platform and your organization commits to a culture of ongoing learning when it comes to the use of technology tools, you simply can’t expect to reap the full benefits of your investment.

As professionals who have worked in IT for years — both inside and outside of the legal vertical — we can say from experience that an organization’s technology training strategy for its professional team members is arguably the most important predictor of success when it comes to the adoption of e-discovery software tools. In fact, according to a 2017 survey by the Technology Service Industry Association, 64 percent of employees use a software product more after they have undergone any form of dedicated training.

Here are seven reasons why you would be well-served to create a culture of ongoing learning with e-discovery technology solutions:

1. Consistency

As the number of members on your team grows — and the inevitability of employee turnover changes the makeup of the staff roster — there is a risk that technology tools will be used in different ways by different individuals. Ongoing training helps to maintain consistency in the way these tools are used across the organization.

2. Benefits of new features

It’s customary for software providers to roll out new features and functionalities to their flagship products on a regular basis. In order for your organization to achieve the full efficiency benefits to be gained from those new features, it’s important that you have a pre-determined commitment to ongoing learning, so all of your users are properly instructed in the latest bells and whistles.

3. Translation of release notes

The B2B software industry has come a long way from the complex user manuals of the 1990s, but the “release notes” that accompany each new iteration of an e-discovery technology offering can still seem like they’re written in a foreign language for end users to decipher. In-person training that can be built into your culture of ongoing learning is crucial to help translate these release notes into intelligible information for your team members.

4. Adoption rate

It’s simple arithmetic: in order to obtain the desired return on your investment in technology tools, you need your staff to use the software that you acquired. A systematic approach to ongoing training will allow your organization to make sure that your professionals are more comfortable with the software tools and more likely to incorporate them into their daily workflow, increasing adoption of the technology and maximizing your return on investment in the software.

5. Just-in-Time training

The best software training programs are able to be deployed to meet individual needs for specific applications as they arise. For example, AccessData offers flexible training options to help e-discovery professionals get the most out of their tools and their teams. From on-site training to virtual classes, AccessData’s training program focuses on the customer, their workflow and the ultimate success of the organization, as AccessData’s experts collaborate with the customer’s e-discovery specialists to build a workflow-based training program. [Disclosure:  Oronde Ward works for AccessData, an ACEDS affiliate partner].

6. Professional obligations

In April 2018, the North Carolina State Bar Council approved a requirement that lawyers in the state must have one hour of CLE training annually that is devoted to technology training, following the example set by the Florida Bar in 2016 when it became the first state to mandate technology training for lawyers. Moreover, the revised ABA Model Rule 1.1 now requires “technology competence” as a matter of a lawyer’s ethical duties in the representation of clients. It’s clear that technology training is increasingly becoming a professional obligation in the legal profession.

7. Certifications

Within the world of e-discovery in particular, it’s becoming more important to identify talent for your organization that has the highest level of professional training — or to cultivate that talent by investing in your employees’ professional development. Certifications have become a key way of identifying that specialized skill set. For example, the Certified E-Discovery Specialist (CEDS) certification, administered by ACEDS, responds to the need for professionals with diverse skills and knowledge across the e-discovery spectrum. The exam is constructed with the help of 40 experts under the strict auspices of a psychometric firm and a worldwide survey, producing a neutral and legally defensible professional certification program that is respected throughout the e-discovery community. [Disclosure: Mary Mack is the Executive Director of ACEDS].

The head-spinning advancements in technology solutions that support the e-discovery workflow have resulted in substantial efficiency gains and cost containment for litigants. Unfortunately, the pace of innovation in the development of those tools has not been matched by a commitment to ongoing training when it comes to how they are put to use.

By creating a culture of ongoing technology learning in your organization, you can maximize the return on your investment in software and ensure that the end users of those tools are driving greater efficiency and accuracy throughout the e-discovery workflow.

Moving On: Embracing Job Change in the E-Discovery Industry

The last time I looked for a job, I was in my 20s. Gas cost less than $1.50 a gallon; George W. Bush was president; and no one binge-watched anything other than old Seinfeld episodes. I finished law school and had three job offers waiting for me. Being a bit adventurous, I decided to take the risky path and not practice law. I joined a tech company helping lawyers use software to find a needle in a haystack.

And that was 16 years ago – 16 challenging, fulfilling, and wonderful years in the e-discovery industry. In those years, I solved complex problems, built amazing tools, and helped clients navigate sticky situations. As career trajectories often go, now I find myself in transition, leaving my former employer and figuring out what’s next.

I know I am not alone in this transition. The e-discovery industry is anything but constant, and that includes the job landscape. Many of you have found yourselves in a similar place, voluntarily or involuntarily seeking your next career move.

As I look to the future, I have learned that my most valuable asset from the last two decades is all of you. Those of you reading this blog; those of you I used to work with; and those of you at the conferences and tradeshows. The e-discovery community is just that – a community. I want to share what I have learned in this transition period, along with the wisdom of others who have travelled this same journey.

My hope is that these experiences help you pursue a new job now or in the future.

Six Tips for E-Discovery Career Changes

1. Hunt like it’s your job, but take time to smell the roses. Job hunting is my new job, so I treat it like a job. That means every single day I spend time in pursuit of my future. My new office is the kitchen table, and the dog is my new co-worker. I check email, reach out to people, troll LinkedIn, go out for lunch, make “to do” lists. Phil Favro, a Consultant at Driven, recommends, “Keep your name out there, diversify your skill set, pursue new certifications, and most importantly, keep your reputation intact.”

However, as many of you know, working in the e-discovery industry is fully immersive, leaving little time for outside interests. If you are in transition from one job to the next, now is the time to do something meaningful. Volunteering, freelancing, hobbies, travel, friends. From personal experience, elementary school teachers and pro bono lawyer networks are thrilled to hear you have some extra time on your hands, and without even asking, you will find yourself with a fulfilling volunteer role.

2. Become a story-teller. To find that next opportunity, you need to share your story. What you did previously, why you left (or are thinking about leaving), where you want to go next. In this transition phase, I have come to value the multitudes of people who have been willing to talk to me. It means the world to hear from people in your network. They will help you refine your story, brainstorm networking avenues, and build your confidence.

Further, I have learned to be systematic about expanding my network. I keep a spreadsheet of everyone I talk to, what was said, who they refer me to, and the action items. Sometimes this means reaching out to people I have not talked to in 10+ years, asking a LinkedIn contact to make a referral for me, or cold-calling people I have never met. “As long as I have been networking, it still surprises me how truly small this world can be. When you are seeking a new opportunity, it is imperative that you talk to as many contacts as possible to leverage those relationships because you never know where those conversations will lead you. And, just as importantly, be helpful to those that are looking. Being able to connect a viable candidate to a company that needs a particular skill set will cement your relationships on both ends,” noted Denise B. Bach, CEDS, Vice President of Enterprise Sales, Stroz Friedberg, an Aon company.

3. Add letters behind your name. During your career transition, whether you are still employed or seeking work, there is no better way to propel your career than to attain a certification. Most of these certifications require passing an examination, which will help validate your experiences. Also, the process of preparing for and taking an examination will help you stay relevant in a changing industry. In the e-discovery industry, this could mean achieving an association-based e-discovery certification, adding a platform or tool certification, or extending into an adjacent space with a privacy or security certification. “Initials after your name validate specific, usually technical, experience. You will share the initials and what it took to earn them with others, who become part of your community,” said Mary Mack, CEDS, CISSP, CAIM, Esq., Executive Director, ACEDS. “I found, as a woman (and an attorney) that questions about my technical competence stopped after earning my CISSP. The CEDS community is very generous with its members in transition, ready to make introductions, help with resumes, and generally support our job seekers.”

4. Embrace headhunters. Staffing professionals are here to help you, but do your diligence. Ask people in your network which staffing companies they have used and ask for them to introduce you. “Having the right representation is more important than having just any representation,” says TRU founder and CEO, Jared Coseglia. “Something many candidates actively looking for a job do not realize is that once an agency sends your resume to a client, only they can represent you there for the next six to twelve months typically. So, choose your representation wisely, and make sure no one sends your resume anywhere without your express permission first.” Coseglia recommends asking these questions of any staffing agency:

  • What do you specialize in?
  • How often do you successfully place professionals with my profile? In my geography? In my industry vertical?
  • What separates your agency from others?
  • Are you reaching out to me for just a specific opportunity or will you have others like this?
  • Have you staffed for the company you are searching for in the past?

5. Look outside e-discovery. A former law school colleague said to me, “Stop being so timid in submitting applications.” He went on to enlighten me of a study showing that women only apply for jobs if they are 100% qualified, while men apply if they meet 60% of the criteria. I have learned to be bold in touting my experience, including looking for jobs outside of the e-discovery industry. “To become good at e-discovery, [it] requires a core level of knowledge, or even expertise, in many things, including computers, mobile devices, removable media, server systems, networking devices, cyber security, as well as organizational structures, business process and workflow and project management,” notes Eric P. Mandel, Vice President of Information Governance & Cyber Security Strategy at Ricoh USA. “All of this knowledge, and the skill sets that you develop while doing the job, are transferable into other roles in other areas.”

6. Get comfortable with discomfort. Suppress your inner “type A” persona that tends to flock to the e-discovery profession and learn to accept the present uncertainty. You will hear “no” a lot. Get okay with that and learn to move on quickly. “Use your situation as a chance to try something new. You may be rejected one, two, or even twenty times before the right opportunity comes along. Ask for feedback to help you better prepare for the next one,” says Jackie Rosborough, Independent Consultant and Executive Director of Women in E-Discovery.

An All Day, Coast to Coast Celebration: ACEDS E-Discovery Day Recap

It seemed like almost everyone who is involved in the world of e-discovery was celebrating on December 1st. The third annual E-Discovery Day was bigger than ever this year – according to fellow E-Discovery Day sponsor and ACEDS affiliate Exterro, “12 sponsoring organizations hosted 13 separate live events, with over 370 attendees, in 7 States plus the District of Columbia. More than 2000 virtual participants listened to 14 hours of news, analysis, practical tips, and advice presented by 39 e-discovery experts in 15 webcasts.”

And ACEDS was right there doing our part. ACEDS Executive Director Mary Mack participated in webinars as a speaker, moderator, or panelist from dawn till dusk, closing out the day with Top 5 E-Discovery Process Improvements Legal Needs to Make (But haven’t made yet…), the most well-attended event according to webinar co-sponsor Exterro. This roundtable discussion featured William Hamilton, Director, UF Law; Hon. John Facciola (Ret.), US Magistrate Judge, D.C.; and Mary Mack. These experts, who are not only e-discovery teachers but have also navigated complex e-discovery projects, weighed in on the five e-discovery process improvements legal teams need to make to start seeing real results.

When asked about the biggest obstacles legal teams face when it comes to process improvements, two main themes arose: a lack of focus on understanding e-discovery and, because of this, an inability to match team members’ skills with specific tasks. As Judge Facciola said, “most [obstacles] come from attorneys not knowing how to delegate the handling of a matter. They either need to truly understand it or give the jobs to someone who does. And at the same time, there are too many young attorneys not getting guidance from their superiors.”

William Hamilton supported this by saying, “law firms have a peculiar culture, similar to corporate settings, where there’s a lack of structure around e-discovery. Even within the same law firm, you’ll have attorneys with different levels of expertise regarding e-discovery.”

In another ACEDS-sponsored webinar, 5 Critical Cyber Security Updates for Firms and Corporations in 2018, a similar theme emerged of understanding and taking steps toward competency, this time on the cyber security side of things. Roy Zur, intelligence expert and CEO of Cybint (a fellow BARBRI company), explored the upcoming security trends for 2018 and what companies should do to prepare for new threats and intrusions. Zur covered the different types of threats and attacks in one of the clearest ways I’ve ever heard; he then went into the Dark Web – what it is, how it’s used – and finished with prevention, detection, and best practices for minimizing risk. It’s more important than ever for law firms to protect themselves. “Mainly it was financial markets and government and big retailers,” Mr. Zur said of the targets of cyber-attacks, “but now there are increased attacks on law firms, because a firm is a hub for a lot of confidential information, serving many companies.”

There were other events around the country with ACEDS chapters and affiliates as well. The ACEDS Philadelphia Chapter put together the largest gathering of e-discovery professionals in Philadelphia with ILTA, ARMA Liberty Bell Chapter, and Women in eDiscovery Philadelphia Chapter. Exterro sponsored the event, which focused on the topic of “The good, the bad and the ugly of ISO 27050-3 – Code of practice for electronic discovery.”

The Twin Cities ACEDS chapter and Mary Mack hosted a panel discussion regarding the appropriate disposition of client data: “The Case is Done but the Data’s Still Everywhere. What’s a Client to Do?” For many clients this can be the biggest headache, so this webinar considered the security of data once it reaches law firms and providers; the measures those organizations should take to protect it, and how to vet those measures; and how clients can ensure appropriate disposition of data by their law firms and vendors at the end of a matter.

And finally, Mary was at it again, this time with LTPI Chairman and President Eric P. Mandel and three of the Discovery Data Governance Model co-authors, Quintin Gregor, Kevin Clark, and Seth Eichenholtz, to explore the state of the industry and to examine LTPI’s DDG project, as well as the ACEDS/LTPI relationship.

Other ACEDS chapters had gatherings around the country as well: the New England ACEDS chapter kicked things off with a breakfast roundtable discussion, while our friends in Florida celebrated E-Discovery Day with educational and networking events hosted by the Jacksonville and South Florida chapters. There was also a New York City networking event sponsored by LTPI, ACEDS, WIE, and Exterro, and even an ACEDS-hosted E-Discovery Day Twitter chat (click here for highlights).

It’s easy to see why E-Discovery Day has become the year-end event for the industry, and we’re already looking forward to next year’s celebration!

The Internet-of-Things May Be New, But the Legal Processes Remain the Same

Just when eDiscovery specialists have gotten a handle on the more common forms of Electronically Stored Information (ESI), and have begun adapting to newer data types such as cloud- and app-based messaging platforms, along comes the Internet of Things.

One might ask, “What things?” Hasn’t the internet always been about things? Well, in this case, ESI isn’t created by traditional (or more recent, for that matter) devices such as a desktop or laptop computer, smartphone, or tablet, but instead by a “thing”: refrigerators, cars, Fitbits, home management devices, doorbells, you name it. All of these can now create data that can be used as evidence during litigation.

Tom O’Connor, Director, Gulf Coast Legal Tech Center, recently wrote an article for Advanced Discovery about how the IoT is affecting the eDiscovery landscape. In it, he gives a brief history of the Internet of Things as it relates to the legal world, and through that we see that it has actually been lurking for almost a decade; only more recently have eDiscovery teams gotten on board with the need to preserve, collect, and review this type of data. For example, Tom mentions how data from an airbag was used during a trial regarding a car accident in the British courts back in 2008. And in 2011 in Massachusetts, another car’s “black box” data was used in court.

More recently, when writing for our affiliate, Exterro, I’ve addressed different murder cases – one where data from a victim’s Fitbit was used, and another where data was requested from the home’s Amazon Echo to see if it had recorded anything during the time of the murder (ACEDS addressed the Alexa case in Canada, and Craig Ball exposed the data collected by this talkative IoT device). And on the civil side of litigation, there was the case where the Boston Red Sox were caught using Apple Watches to steal signs from the Yankees.

So how can legal teams prepare for this? Tom O’Connor suggests five steps:

  • Know the Facts
  • Know the People
  • Make Early Case Analysis Part of Every Case
  • Validate Current Data from Past Results
  • Collaborate

As Tom states, “Engaging in these five steps will enable litigators to collect all the necessary information to evaluate exposure, assess risk, make recommendations to clients, and set case strategies, including a budget and possible settlement options.”

New data types are always changing, but the processes remain the same. This is where continued legal technology training becomes useful. Someone who knows the law and the steps that need to be taken during discovery, the most expensive and time-consuming part of litigation, and who also understands the different types of ESI and how they can be collected, is going to be successful in eDiscovery.