Kroll Ontrack was proud to host a breakfast seminar last week on Predictive Coding and how it’s bringing innovation to legal practice.
Over 90 legal professionals from law firms and corporations across the UK gathered at the early hour of 08:00 for a light (and far too healthy!) breakfast and to hear our very special guest speakers: Ralph Losey, a partner in the Orlando office of Jackson Lewis LLP and the firm’s National e-Discovery Counsel, who had flown in from Florida especially for the event; and Neil Mirchandani, a partner at Hogan Lovells in London specialising in financial services disputes.
Daniel Kavan of Kroll Ontrack helped moderate the session for what turned out to be a very interactive debate with members of the audience.
We found that many terms are used to describe predictive coding technology, such as “technology assisted review,” “computer-assisted review,” “computer-aided review,” and “content based advanced analytics.” Ralph helpfully pointed us to a very useful definition of predictive coding in The Grossman-Cormack Glossary of Technology Assisted Review, written by Maura R. Grossman and Gordon V. Cormack:
An industry-specific term generally used to describe a Technology-Assisted Review process involving the use of a Machine Learning Algorithm to distinguish Relevant from Non-Relevant Documents, based on Subject Matter Expert(s)’ Coding of a Training Set of Documents. See Supervised Learning and Active Learning.
Ralph Losey, widely seen as a leading global expert on predictive coding, opened the session with a very helpful summary of what predictive coding is and how he has seen it applied. Neil Mirchandani was then on hand to provide a UK perspective and outline his experiences of the use of the technology.
I have highlighted some of the key points that were raised:
- We heard some interesting and surprising statistics on the consistency of human review. For instance, studies have shown that a single reviewer achieves a consistency of 77% when reviewing documents; this figure drops to 45% with two reviewers and plummets to 30% with three or more. So perhaps human review should not be seen as the gold standard for completing review exercises.
- Predictive coding is not a substitute for human review and should be seen as a supplement. Predictive coding relies heavily on the input of subject matter experts (SMEs), who review a sample set of documents to “train” the system, and this needs to be an ongoing and iterative process as the system evolves.
- There were some lively debates as to whether the initial training should be completed by one SME or a few.
- Predictive coding can be utilised as an invaluable quality assurance mechanism for a human review, and even when predictive coding is used for tagging, any documents deemed relevant can still be reviewed by a human team.
- A number of audience members asked whether it has been challenging to reach agreement with the other side if this type of technology were to be used. The consensus from the panel seemed to be that it would be difficult to “force” another party in litigation to deploy this technology, but that it would be very unlikely (and difficult) for an opposing party to object to the technology being used.
- The technology should not be viewed as exclusive to large and litigious cases – there were some great examples of the technology being deployed successfully in internal investigations and regulatory exercises, and in cases consisting of, say, 40,000 documents.
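To make the SME “training” idea above a little more concrete, here is a deliberately simplified sketch of the kind of supervised-learning loop the Grossman-Cormack definition describes: an expert codes a small training set as relevant or non-relevant, a statistical model learns from those codings, and the model then ranks the wider document population for human review. All documents, labels, and function names below are invented for illustration; real predictive coding platforms use far richer features and iterative active-learning rounds.

```python
# Simplified illustration of supervised learning for document relevance,
# using a naive word-frequency (log-likelihood) score. Illustrative only.
from collections import Counter
import math

def tokens(doc):
    return doc.lower().split()

def train(labelled_docs):
    """Count word frequencies separately for relevant / non-relevant docs."""
    counts = {True: Counter(), False: Counter()}
    for doc, relevant in labelled_docs:
        counts[relevant].update(tokens(doc))
    return counts

def score(counts, doc):
    """Log-likelihood ratio: positive means 'more like the relevant set'."""
    rel_total = sum(counts[True].values()) or 1
    non_total = sum(counts[False].values()) or 1
    s = 0.0
    for w in tokens(doc):
        p_rel = (counts[True][w] + 1) / (rel_total + 1)   # add-one smoothing
        p_non = (counts[False][w] + 1) / (non_total + 1)
        s += math.log(p_rel / p_non)
    return s

# The SME codes a small training set (True = relevant)...
training = [
    ("breach of contract damages", True),
    ("contract termination dispute", True),
    ("office party lunch menu", False),
    ("holiday travel booking", False),
]
model = train(training)

# ...and the model ranks the wider population, most-likely-relevant first,
# so the human team can review the highest-scoring documents.
ranked = sorted(
    ["lunch menu options", "contract breach claim"],
    key=lambda d: score(model, d),
    reverse=True,
)
print(ranked[0])  # → 'contract breach claim'
```

The iterative aspect discussed by the panel corresponds to repeating this loop: the SME codes a further batch of the model’s most uncertain documents, the counts are updated, and the ranking improves with each round.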
There were many other very useful insights to come out of the workshop, but unfortunately there isn’t space to cover them fully on this blog. If this topic is of interest, you will certainly find Ralph Losey’s blog helpful, as it goes into full detail about his various studies.
Feel free to get in touch if you would like to have a chat about the application of this technology in more depth.