Beyond Rio Tinto: Real-world Proof Points for Analytics

Analytics, Assisted Review, Seen in the Field

Earlier this month, we posted a two-part series on our exciting—albeit rocky at first—journey with analytics in e-discovery, as well as some takeaways from Judge Peck’s recent Rio Tinto opinion. As we mentioned in that series, analytics usage has grown considerably over the last couple of years.

But how has all this translated to tangible benefits? We’ll break it down into our three goals for analytics in e-discovery: making e-discovery faster, easier, and more insightful. Check out a few real-world stories below, and visit our Customer Wins page for more.


Law firm Thompson & Knight recently approached Inventus, an e-discovery service provider, for support on an internal investigation that required an immediate start to review—and by “immediate” we mean they collected data on a Friday and needed prioritization done by Monday.

Though initially wary of the timeline, the Inventus team set up a computer-assisted review project to prioritize the 86,000 documents a keyword search returned from the initial 1.8 million-record set. After only one training and two QC rounds, the team achieved an overturn rate of just 9 percent.

With this analytics workflow, the case team managed to review just 2,100 documents and immediately prioritize a responsive set of about 12,000 records—all in a single weekend.… Read More »
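The overturn rate cited above is straightforward to reason about: it is the share of sampled, machine-categorized documents whose designation a human reviewer reversed during a QC round. Here is a minimal, hypothetical sketch of that calculation—the data and function are illustrative, not how any particular review platform computes it:

```python
# Hypothetical sketch: computing an overturn rate from a QC round.
# An "overturn" is a sampled document whose machine designation
# a human reviewer reversed during quality control.

def overturn_rate(qc_sample):
    """qc_sample: list of (machine_call, human_call) tuples."""
    overturns = sum(1 for machine, human in qc_sample if machine != human)
    return overturns / len(qc_sample)

# Illustrative sample: 500 QC'd documents, 45 of them overturned.
sample = ([("responsive", "responsive")] * 455
          + [("responsive", "not_responsive")] * 45)
print(f"{overturn_rate(sample):.0%}")  # 9%
```

A lower overturn rate after each round is one signal that the system’s categorization is converging with the reviewers’ judgment.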

5 Principles to Guide Your Internal Learning Program

New Resources, Seen in the Field

Last month, my colleague Stan Pierson posted an article about building your team’s e-discovery IQ, outlining five steps for helping in-house legal teams be successful with their end-to-end litigation strategy. As he points out, building a training program tailored to your team’s needs is one of the most important steps for success—but it’s not always clear where to start, so I thought I’d add a bit to the discussion.

In the past, I’ve shared key tips for developing an internal training platform and discussed these principles at Relativity Fest, but I thought I’d dive a little deeper into what to keep in mind when building a training program—and how to get started.

1.    Adults are autonomous and self-directed.

Involve your team in choosing interactive trainings. A good start is gathering your team’s perspective about what topics to cover and inviting them to identify opportunities that reflect their interests. For example, would they prefer an introduction to collection best practices? A deep dive into recent court cases influencing e-discovery? A hands-on training on text analytics?

Get Started: Before formalizing a learning schedule, gather your team for a brainstorming session on what they’d like to learn. Get consensus to establish topics to prioritize for the team and invite suggestions for content or trainings of interest.

Read More »

How to Review Text Messages Most Efficiently

New Resources, Review & Productions, Seen in the Field

Our team sometimes gets asked about reviewing text messages, and one of our Premium Hosting Partners, D4, recently put together an awesome blog post that addresses the issue. We wanted to share it with all of you. The post was originally published on D4’s Discover More blog in January.

There are very few people today who don’t thumb text messages on their phones. We tend to treat text messages as if they can’t be retrieved once we hit send. “Nobody will find this,” one may tell themselves. Oh really? What happens when opposing counsel requests that text messages be included from one of your key custodians? At first, you object that your client’s text messages are not “reasonably accessible,” but that argument isn’t as easy to win as it used to be. Once you’ve accepted the fact that review of the text messages is going to happen, the question hits you: how can text messages most efficiently be reviewed?

The answer may surprise you. Think of each individual text message as a record (like an email or Word document). If properly collected, each text message record has metadata associated with it that can be used to stitch together the bigger story.… Read More »
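The record-plus-metadata idea above can be sketched in a few lines of code. This is a hypothetical illustration—the field names (sender, recipient, timestamp, body) are assumptions for the sketch, not the schema any collection tool actually produces:

```python
# Hypothetical sketch: treat each text message as a record with
# metadata, then use that metadata to stitch conversations back together.

from collections import defaultdict
from datetime import datetime

messages = [
    {"sender": "Alice", "recipient": "Bob",
     "timestamp": "2015-01-05 09:14", "body": "Did you send the file?"},
    {"sender": "Bob", "recipient": "Alice",
     "timestamp": "2015-01-05 09:10", "body": "Morning"},
    {"sender": "Bob", "recipient": "Alice",
     "timestamp": "2015-01-05 09:16", "body": "Sending it now."},
]

def conversations(records):
    """Group messages by participant pair, then sort each thread by send time."""
    threads = defaultdict(list)
    for msg in records:
        # frozenset makes Alice->Bob and Bob->Alice land in the same thread
        threads[frozenset((msg["sender"], msg["recipient"]))].append(msg)
    for thread in threads.values():
        thread.sort(key=lambda m: datetime.strptime(m["timestamp"], "%Y-%m-%d %H:%M"))
    return threads

for thread in conversations(messages).values():
    for msg in thread:
        print(f'{msg["timestamp"]}  {msg["sender"]}: {msg["body"]}')
```

Even this toy version shows the payoff: once messages are records, sorting and grouping on metadata turns a jumble of individual texts into a readable back-and-forth.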

What I Learned from Relativity Fest: Altep on Analytics

Analytics, Seen in the Field

Today’s post is contributed by Hytham Aly, a database administrator at Altep, a Relativity Premium Hosting Partner. If you’re interested in contributing an article about your experience at Relativity Fest, please let us know.

We recently hosted a webinar in which our legal team ran through a condensed version of the U.S. Patent Database search that Hytham discusses below. Check out the recording here, and a Q&A follow-up post here.

At 2014’s Relativity Fest, I was lucky enough to attend “Using Analytics to Search the U.S. Patent Database,” a session on both patent law and analytics. That’s right. What could this notoriously meticulous and cumbersome field of law and one of the most complex yet exciting technologies of our industry have in common? I expected a lot of technical talk and predicted I might sit there like a deer in headlights.

Relativity Fest Insights

But to start off, we learned about a common problem many inventors face: not knowing if their invention is infringing on an existing patent. Finding the answer is a big data problem, but it’s also an e-discovery problem. If we could solve a complex and fundamental problem like massive document review for litigation, we could implement the solutions in other fields as well.… Read More »

What A Long Strange Trip It’s Been: Our History with Analytics in e-Discovery Part 2

Analytics, Assisted Review, Seen in the Field

This is part two in a two-post series about our experience with text analytics in e-discovery.

In last week’s post, we described the somewhat rough beginning of our journey with analytics in e-discovery. We also talked about the growth in adoption of the technology that we started seeing in 2011. But prior to 2011, what were the barriers that made adoption of analytics tough in e-discovery—and are they still barriers today?

What Barriers Have We Seen?

Judicial acceptance

If you’re questioning whether computer-assisted review is defensible, ask Bennett Borden, chair of information governance and e-discovery at Drinker Biddle. Bennett is one of the most tech-savvy, well-versed, and well-spoken litigators on the subject of e-discovery and information governance I’ve ever met. He’s been using analytics and advanced investigative workflows in his cases for years. His response to the question of defensibility is:

“Absolutely. It’s more defensible than any process you’re currently using. The technology has been approved faster by the courts than any other technology, including search terms. It’s been proven that it’s more effective than human review, and the courts have never denied a producing party’s request to use it.”

Prior to several landmark decisions, however, it took a somewhat intrepid attorney like Bennett to be willing to use analytics technology regularly.… Read More »

More Da Silva: 3 Takeaways from Judge Peck’s ‘Rio Tinto’ Opinion

Assisted Review, Seen in the Field

Last week, Magistrate Judge Andrew Peck (U.S. District Court for the Southern District of New York) issued an opinion and order which, while perfectly serviceable as a standalone decision, serves even better as an intriguing bookend to his landmark February 2012 decision, Da Silva Moore (Da Silva Moore v. Publicis Groupe & MSL Grp., 287 F.R.D. 182 (S.D.N.Y. 2012))—the case that threw the door wide open to the use of computer-assisted review.

The new case, Rio Tinto Plc v. Vale S.A., 2015 U.S. Dist. LEXIS 24996, 8 (S.D.N.Y. Mar. 2, 2015), in some ways is not a ruling at all, but rather an endorsement of the parties’ cooperation in creating mutually agreed-upon protocols. All that was required was judicial signoff. However, “because of the interest within the e-discovery community about [technology-assisted review (TAR)] cases and protocols,” Judge Peck decided to weigh in and tie up some loose ends.

It’s also interesting to note that both parties decided to use computer-assisted review of their own accord. Plaintiff Rio Tinto opted to use our own Relativity Assisted Review via Precision Discovery, while the defendant, Vale S.A., is working with Deloitte’s Dynamic Review.… Read More »

What A Long Strange Trip It’s Been: Our History with Analytics in e-Discovery Part 1

Analytics, Assisted Review, Seen in the Field

This is part one of a two-post series about our experience with text analytics in e-discovery. Check out part two here.

Legaltech 2008

Getting ready for Legaltech in 2008 was a wild ride—and the beginning of our long strange trip with analytics in e-discovery. It’s a trip we’re still on, with a mission that remains the same: make it fast, easy, and maybe even a bit fun to find your relevant and important documents. We want to make functionality like computer-assisted review, email threading, and finding conceptually similar documents as easy—and as trusted—as using email.

While we had analytics capabilities in Relativity as early as 2004, we started the task of reimagining them in late 2007. The product development work was spilling over into the first days of the new year, getting dangerously close to our deadline at the end of January, when we planned to demo the technology at Legaltech. Andrew’s office was littered with sticky notes listing the tasks that he and our development team of three or four had to complete to get to releasable code.

Sticky Note Office

It looked like we wouldn’t make it, but several all-nighters by Andrew and the team got it done. In late January 2008, we were ready to start demonstrating how these new capabilities could accelerate review workflows and amplify the efforts of case teams.… Read More »

5 Steps to Raising Your Team’s e-Discovery IQ

New Resources, Seen in the Field

Now more than ever, litigation readiness and e-discovery are big initiatives that require an in-depth understanding of new processes and technologies. The explosion of electronically stored information has simply made it impossible to manage litigation, internal investigations, and government requests without this insight.

While your in-house legal team is already busy staying on top of intellectual property, monitoring case law, managing compliance, and more, how can you also stay ahead of the curve in e-discovery? Break it down into five steps:

Step #1: Partner up.

Some service providers and law firms have been in this space since before discovery became electronic. A wealth of knowledge is available to you through partners you may already know and trust. Whether you’re outsourcing, seeking consultation, or using a managed services arrangement, simple conversations with experienced partners are a great way to begin educating your team without requiring any extra resources. By asking questions and staying up to date on what’s happening in e-discovery technology and the courts, you can add your business’s unique insight to theirs and better manage your projects.

Step #2: Hire for expertise.

Expect and encourage software certifications among your team as well as your partners. Project managers with PMP and JD designations, case strategists with backgrounds as litigators, and staff with e-discovery certifications add tremendous value in ramping up information management and e-discovery initiatives and ensuring projects stay on track. … Read More »

What Every e-Discovery Professional Should Know About FRCP 37(e)

Seen in the Field

Legal and e-discovery professionals are no doubt aware that some changes are coming to the Federal Rules of Civil Procedure this year. The revisions are set to take effect in December, assuming they are approved by May—and we’re hearing a lot of interest in the proposed changes to Rule 37(e), which focuses on spoliation.

Let’s take a look at the expected changes so you can get an idea of what’s on the horizon and assess whether you need to make any changes to the way you collect and store ESI.

The current Rule 37(e) reads:

“Failure to Provide Electronically Stored Information. Absent exceptional circumstances, a court may not impose sanctions under these rules on a party for failing to provide electronically stored information lost as a result of the routine, good-faith operation of an electronic information system.”

Rule 37 was amended in 2006 to include subsection (e), which was sometimes referred to as the “safe harbor” rule. It was originally considered a safe harbor because it aimed to protect parties whose good-faith operations resulted in lost ESI—though it offered no clear guidance for how or when courts should impose sanctions.… Read More »

A Cat Stroller, a Killer Patent, and 6 Questions on Text Analytics

Analytics, Platform Story, Seen in the Field

This week, my colleague Ryan Hynes and I took the virtual stage for a KMWorld webinar, presenting “Text Analytics in Action: Finding the Killer Patent.” During the live session, we used Relativity to pounce on a single patent—a cat stroller idea Ryan pretends could make him millions—from the U.S. Patent & Trademark Office’s database. With his fictitious cat as co-pilot, the journey was all about giving viewers an in-depth look at text analytics.

Our stroller search was run in a real environment our legal team uses each week to search the U.S. patent database for new intellectual property that’s relevant to our business. Built on Relativity Analytics, it helps us identify records of interest much more quickly than keyword searching alone in the USPTO’s online database.

In addition to several cat puns, the webinar yielded some great questions about text analytics from the audience. Here’s some insight we didn’t have a chance to share live:

What are some best practices when choosing documents to include in an analytics index?

In general, you want to train the system on documents that have an adequate amount of extracted text and were created by humans, as opposed to machine-generated log files.… Read More »
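That guidance—enough extracted text, human-authored, not machine-generated—can be expressed as a simple pre-indexing filter. The sketch below is purely illustrative: the thresholds and file-extension list are assumptions for the example, not Relativity defaults or recommended values.

```python
# Hypothetical sketch of the guidance above: keep documents with enough
# extracted text to be conceptually meaningful, and exclude files that
# are likely machine-generated. Thresholds and extensions are illustrative.

MIN_CHARS = 200          # too little text gives the index nothing to learn from
MAX_CHARS = 2_000_000    # extremely long files are often logs or data dumps
MACHINE_EXTENSIONS = {".log", ".csv", ".xml", ".json"}

def suitable_for_index(doc):
    """doc: dict with 'file_name' and 'extracted_text' keys."""
    name = doc["file_name"].lower()
    ext = "." + name.rsplit(".", 1)[-1] if "." in name else ""
    if ext in MACHINE_EXTENSIONS:
        return False          # likely machine-generated, skip it
    text = doc.get("extracted_text") or ""
    return MIN_CHARS <= len(text) <= MAX_CHARS

docs = [
    {"file_name": "memo.docx", "extracted_text": "x" * 5000},
    {"file_name": "server.log", "extracted_text": "x" * 5000},
    {"file_name": "note.txt", "extracted_text": "ok"},
]
print([d["file_name"] for d in docs if suitable_for_index(d)])  # ['memo.docx']
```

In practice you would tune the thresholds and exclusion list to your own data, but the principle stays the same: an analytics index is only as good as the human-generated text you feed it.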