LegalTech New York – A European Perspective
LegalTech New York is the largest trade show and conference in the world in the legal technology and e-discovery field. Although it is truly international in terms of attendance, as you would expect from a conference in New York City, the majority of speakers, exhibitors and delegates are from the United States. This is no reason for European practitioners to ignore it: it is useful to keep an eye on developments in the US because, over the last decade or two, trends that began in the US have followed in the UK and, to some extent, continental Europe in subsequent years.
So what were the trends at LegalTech 2013 that we can expect to hop “across the pond”?
Technology Assisted Review (TAR) was certainly at the forefront. It is beginning to be widely accepted by lawyers in the US as a defensible method of augmenting the document review process, and the US courts are starting to hand down decisions to support this. TAR, also referred to by some as “Computer Assisted Review” (CAR) or “Predictive Coding,” uses machine learning to prioritise important documents and suggest categorisations for review, resulting in faster, more efficient and more accurate review. I believe UK courts will not be far behind their US counterparts in approving parties’ use of this technology. We may well see this in 2013, with the official implementation of the new Practice Direction 51G for managing cases and their costs, which places a significant emphasis on proportionality.
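To give a flavour of the idea behind predictive coding, here is a deliberately simplified sketch: a reviewer labels a small seed set of documents, the system learns term weights from those labels, and the remaining documents are ranked so that likely-relevant ones surface first. Real TAR platforms use far more sophisticated machine-learning models; the seed documents and scoring scheme below are invented purely for illustration.

```python
import math
from collections import Counter

def train_term_weights(seed_docs):
    """Learn simple term weights from a small reviewer-labelled seed set.

    seed_docs: list of (text, is_relevant) pairs labelled by a human reviewer.
    Returns a dict mapping each term to a smoothed log-odds weight: positive
    weights suggest relevance, negative weights suggest irrelevance.
    """
    relevant, irrelevant = Counter(), Counter()
    for text, is_relevant in seed_docs:
        target = relevant if is_relevant else irrelevant
        target.update(text.lower().split())
    weights = {}
    for term in set(relevant) | set(irrelevant):
        # Laplace smoothing avoids division by zero for unseen terms.
        p_rel = (relevant[term] + 1) / (sum(relevant.values()) + 2)
        p_irr = (irrelevant[term] + 1) / (sum(irrelevant.values()) + 2)
        weights[term] = math.log(p_rel / p_irr)
    return weights

def prioritise(docs, weights):
    """Rank unreviewed documents so likely-relevant ones are reviewed first."""
    def score(text):
        return sum(weights.get(t, 0.0) for t in text.lower().split())
    return sorted(docs, key=score, reverse=True)

# Hypothetical two-document seed set and two unreviewed documents.
seed = [("merger payment invoice", True), ("lunch menu friday", False)]
weights = train_term_weights(seed)
ranked = prioritise(["team lunch plans", "invoice for merger"], weights)
# The document sharing terms with the relevant seed ranks first.
```

In practice this prioritisation loop is iterative: reviewers correct the system's suggestions on the top-ranked documents, and the model is retrained, which is what makes the review faster and more accurate overall.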
Apart from the vast array of exhibitors showcasing their latest technology, there was also a series of interesting educational seminars. A highlight for me was a fun session moderated by Kroll Ontrack’s Chris Wall, featuring Ralph Losey (Partner and National e-Discovery Counsel at Jackson Lewis LLP) and Jason R. Baron (the National Archives’ Director of Litigation) on the panel. They simulated a document review exercise by displaying a number of documents to the audience and asking audience members to vote by SMS on whether they thought each document was relevant, based on a basic set of defined relevance criteria. Opinions as to relevance varied significantly amongst audience members. This was a fun way to illustrate how human document review can yield lower than expected precision and recall, and it led to discussions on how TAR might supplement a review to improve overall accuracy.
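The precision and recall mentioned above are easy to make concrete. Precision asks: of the documents the reviewers tagged as relevant, how many truly were? Recall asks: of all the truly relevant documents, how many did the review actually find? The review counts below are invented for illustration, not figures from the session.

```python
def precision(tp, fp):
    # Share of documents tagged relevant that truly are relevant.
    return tp / (tp + fp)

def recall(tp, fn):
    # Share of all truly relevant documents that the review found.
    return tp / (tp + fn)

# Hypothetical review outcome: 60 relevant documents correctly tagged
# (true positives), 40 irrelevant documents wrongly tagged relevant
# (false positives), and 90 relevant documents missed (false negatives).
print(precision(60, 40))  # 0.6
print(recall(60, 90))     # 0.4
```

A review can score well on one measure and poorly on the other, which is why inconsistent human relevance calls, as the audience exercise showed, drag down both.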