Nearly a decade after co-founding Cyberjustice Laboratory, a unique hub that analyses the impact of technologies on justice while developing concrete technological tools that are adapted to the reality of justice systems, Karim Benyekhlef and Fabien Gélinas have set their sights on artificial intelligence.
Algorithms, the sets of instructions computers use to carry out tasks, have become an integral part of everyday life, and they are making inroads into law. In the U.S., judges in some states can use algorithms as part of the sentencing process, and many law enforcement agencies use them to predict when and where crimes are likely to occur. They have been used for years in law firm recruitment. And with advancements in machine learning, they are also being used to conduct legal research, predict legal outcomes, and find out which lawyers win before which judges.
Most algorithms are created with good intentions, but questions have surfaced over algorithmic bias at job-hunting websites, credit reporting bureaus, social media sites and even in the criminal justice system, where sentencing and parole decisions appear to be biased against African Americans.
And the issue is likely to gain traction as machine learning and predictive coding become more sophisticated, particularly since, with deep learning (which learns autonomously), algorithms can reach a point where humans can often no longer explain or understand them, said Nicolas Vermeys, the assistant director of the Cyberjustice Laboratory in Montreal.
AlphaGo is a case in point. When AlphaGo, Google’s artificial intelligence system, defeated the 18-time world champion at Go, the complex and highly intuitive ancient Chinese board game, it was not just a demonstration of yet another computer beating a human at a game. Go, a game with simple rules but profound complexity, has more possible positions than there are atoms in the universe, leading some to describe it as the Holy Grail of AI gaming. The feat was remarkable because AlphaGo was not taught how to play Go. It learned how to play, and win, by playing millions of games, using a form of AI called deep learning, which relies on neural networks that allow computer programs to learn much as humans do. More than that, the victory showed that computers are now able to rely on something like their own intuition, a capacity long thought to be uniquely human.
Another example is Deep Patient. The brainchild of a research group at Mount Sinai Hospital in New York, it is a machine learning tool that was trained to detect illness using data from approximately 700,000 patients. Deep Patient turns out to be good at spotting hidden patterns in the hospital data that indicate when people are becoming ill. It also appears to be remarkably good at anticipating the onset of schizophrenia, a disease that is notoriously difficult for physicians to predict. But the people behind Deep Patient do not yet understand how it works or why it is so good at predicting schizophrenia.
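That kind of opacity is easy to reproduce in miniature. The sketch below is emphatically not Deep Patient: it fits a tiny logistic model, written from scratch, to synthetic “patient records” whose labels depend on a hidden combination of features. The model learns to predict well, yet its fitted weights are not anything a clinician would recognize as an explanation. All data and names here are invented for illustration.

```python
import math
import random

# Toy illustration: learn a hidden pattern from synthetic records.
random.seed(0)

def make_record():
    features = [random.random() for _ in range(3)]
    # the hidden rule: only the sum of the first and third features matters
    label = 1 if features[0] + features[2] > 1.0 else 0
    return features, label

data = [make_record() for _ in range(2000)]
weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.1

for _ in range(30):  # plain stochastic gradient descent on log loss
    for x, y in data:
        z = sum(w * v for w, v in zip(weights, x)) + bias
        p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
        weights = [w - lr * (p - y) * v for w, v in zip(weights, x)]
        bias -= lr * (p - y)

accuracy = sum(
    ((sum(w * v for w, v in zip(weights, x)) + bias) > 0) == (y == 1)
    for x, y in data
) / len(data)
print(round(accuracy, 2))  # high accuracy, yet the weights alone do not say "why"
```

Even in this three-feature toy, the answer to “why did it flag this record?” is a list of numbers; scale that up to thousands of features and millions of parameters and the interpretability problem Vermeys describes follows naturally.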
“We have no idea how algorithms arrived at their decision and therefore cannot evaluate whether the decision has value or not,” said Vermeys, whose research institution is studying the issue of algorithmic bias. “There is a risk to relying completely on machines without necessarily understanding their reasoning.”
No human is completely objective, and the same holds for algorithms, since they are written by human programmers, noted Ian Kerr, a law professor at the University of Ottawa and the Canada Research Chair in Ethics, Law and Technology. Programmers operate on premises and presumptions that are not tested by anybody else, Kerr added, which leads to results built on those premises and presumptions and, in turn, gives rise to bias.
On top of that, it is very difficult to challenge such decisions because “whoever owns the algorithms has trade secrets, isn’t likely to show you the source code, isn’t likely to want to talk about the secret sauce and what makes the algorithm work,” said Kerr. “What justifies the algorithm is its success or perceived success, which is very different from whether or not it operates in biased ways.”
Aaron Courville, a professor with the Montreal Institute for Learning Algorithms, shares those concerns. “We are really in a phase where these algorithms are starting to do interesting things, and we need to take seriously the issues of responsibility,” said Courville.
Europe is taking a serious look at these issues. Under the European Union’s new General Data Protection Regulation (GDPR), automated individual decision-making that “significantly affects” users will be restricted, argue Bryce Goodman of the Oxford Internet Institute and Seth Flaxman of the University of Oxford’s Department of Statistics in a recent paper. Expected to come into force in 2018, the GDPR will also effectively create a “right to explanation,” according to the authors. In other words, users will be able to ask for an explanation of an algorithmic decision that was made about them.
“This is where Europe and the U.S. go wild in their disagreements,” explained Kerr, who has also written about the issue of a right to explanation. “Europe starts with this principled approach that makes sense. If a decision is about me and it has sort of impacts on my life chances and opportunities, I should be able to understand how that decision was made. It invokes large due process concerns.
“The due process idea is that no important decision should be made about me without my own ability to participate. I have a right to a hearing. I have a right to ask questions. So all of these kinds of rights are kind of bound up in this notion of the duty to an explanation. And the hard thing is that an algorithm isn’t in the habit of explaining itself, which means that if that kind of law prevails then people who use algorithms and design algorithms will have to be a lot more forthcoming about the mechanisms behind the algorithm.”
Further reading: “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks,” by ProPublica, an American independent, nonprofit news organization; and “Chief Justice John Roberts is a Robot,” by University of Ottawa law professor Ian Kerr.
Fernando Garcia is looking forward to the day when he can get his hands on Beagle, an automated contract analysis system powered by artificial intelligence that reads contracts in seconds, highlights key information visually with easy-to-read graphs and charts, and gets “smarter” with each reviewed contract. Also on his wish list is an offering by another Canadian legal tech start-up, Blue J Legal, which also uses AI to scan legal documents, case files and decisions to predict how courts will rule in tax matters. At a time when most in-house counsel are under intense pressure to shave costs and run a lean team, such powerful tools are a godsend. “There’s always that pressure to do more with less so when a tool comes along that can provide more efficiency, more risk mitigation, and can let you do your job better and focus on providing value-added, it is a strategic advantage,” noted Garcia, general counsel, government affairs and corporate secretary with Nissan Canada Inc. “It’s going to fundamentally change our job.”
The former Justice of the High Court of Australia Michael Kirby was obviously on to something. Nearly two decades ago, he remarked with uncanny prescience in a speech before the Bombay High Court in Mumbai that “it would be a bold observer” who would deny the possibility of artificial intelligence to “enhance” lawyering and judicial decision-making. But even he could not foresee how artificial intelligence would in many ways soon be everywhere. Ever since Watson, IBM’s AI system, captured the public imagination and blew away the tech industry six years ago when it defeated two champions on the popular television quiz show Jeopardy, the technology has been developing at a dizzying pace and has worked its way into business and the daily lives of people around the world. Smartphones feature virtual personal assistants like Siri or Google Now. Large U.S. retailers like Amazon and Target use AI to anticipate the needs of consumers through predictive analytics. Financial institutions use it for fraud detection. Smart home devices can learn a person’s behaviour patterns and adjust the settings of appliances or thermostats, while self-driving cars are inching their way to reality. And AI systems are detecting cancers. “It’s moving so quickly it’s even a little mind-boggling for us,” remarked Aaron Courville, an AI researcher at the Montreal Institute for Learning Algorithms (MILA).
The practice of law, however, has been largely shielded from technological developments over the past fifty years, suffering little more than glancing blows. While the way that law professionals process and share information has evolved with new technologies, chiefly the emergence of personal computers, email and the Internet, those tools did not fundamentally transform the practice itself.
That may be on the cusp of changing. Fuelled by big data, increased computing power, and more effective algorithms (step-by-step procedures for solving a problem or performing a task), AI has the potential to change the way legal work is done, the way law firms conduct business and the way lawyers deal with clients. A number of technologies under the umbrella of artificial intelligence, such as machine learning, natural language processing, expert systems (which emulate the decision-making of a human expert) and others, allow computers to perform tasks that normally require human intelligence. Artificial intelligence systems, also known as augmented intelligence or cognitive computing, can be used to do many of the tasks lawyers routinely perform in areas such as compliance, contract analysis, case prediction, document automation, and e-discovery. According to proponents, the emerging technologies will do it cheaper, faster, and more efficiently, a development some law practitioners find disconcerting.
“What machines give you is the option to get access to more and more data faster and cheaper – that’s the real core of it,” explained David Holme, chief executive officer and founder of Exigent Group Limited, a global provider of legal process outsourcing services that leverages machine learning technology for discovery and contract processing. “It’s like a search light that can look into the corners of the organization. Machine learning and better information will allow experts to make better judgments. And experts must be humble enough to realize that this is a tool that they can use rather than being threatened by it.”
Canadian legal tech start-ups drawing attention
Some law firms are paying heed. A number of Canadian legal tech start-ups are beginning to draw attention in a market that has traditionally shied away from embracing technology with much enthusiasm. ROSS Intelligence, the brainchild of a group of University of Toronto students, has become the poster child for AI’s potential in the legal world. A virtual legal assistant powered by IBM Watson and its own proprietary innovations, ROSS uses natural language processing to understand questions posed by lawyers, sifts through legislation, case law and secondary sources, and returns an evidence-based answer. ROSS does more. It constantly monitors the law and uses its machine learning capabilities to continuously improve its results, which it also delivers more quickly over time. ROSS began by learning bankruptcy law, and the firm then layered intellectual property law on top of it, “which proved our hypothesis that we could scale ROSS’ learning between practice areas,” said Andrew Arruda, ROSS’ chief executive officer and one of its co-founders. “The goal is to build an entire ecosystem of legal AIs which enhance lawyers’ abilities.”
The firm is also at the preliminary stages of applying ROSS’ “underlying learnings and technology” to internal firm documents, which would represent a “massive step forward” for knowledge management, added Arruda. That would certainly pique the interest of law firms and legal protection insurers, noted Scott Ferrauiola, associate general counsel for IBM Watson, at a conference held last fall in Montreal. Law firms and insurers are drawn by the possibility of harnessing the power of AI to identify, capture, evaluate, retrieve and share all of an organization’s information assets, said Ferrauiola. “Who are your experts on certain legal issues? Do they have memos or briefs? Where are they? Can we access them? Can we search them? It’s almost a back-office function. It’s not quite decision-making but it helps in decision-making,” added Ferrauiola.
Using machine learning to predict legal outcomes is another area that may sway lawyers to explore the potential AI holds, according to experts. Last year the Lord Chief Justice of England and Wales warned jurists that AI will be better at predicting the outcome of cases than the “most learned Queen’s Counsel” as soon as it has better statistical information. That day may have come. In a breakthrough development last fall, computer scientists using AI reached the same verdicts as judges of the European Court of Human Rights in nearly four out of five cases involving torture, degrading treatment and privacy, marking the first time that AI successfully predicted the outcomes of a major international court by analyzing case text. “This can be useful, for both lawyers and judges, as an assisting tool to rapidly identify cases and extract patterns which lead to certain decisions,” noted the authors of the study.
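The basic mechanics of predicting an outcome from case text can be sketched in a few lines. The researchers’ actual pipeline was far richer, using n-gram features and a trained classifier over thousands of judgments; the toy version below simply scores a new case description by word overlap with past outcomes. The texts and labels are invented for illustration.

```python
from collections import Counter

# Invented miniature "training set" of past decisions and their outcomes.
past_cases = [
    ("applicant detained without judicial review ill treatment", "violation"),
    ("prolonged solitary confinement degrading conditions", "violation"),
    ("search warrant properly issued proportionate interference", "no violation"),
    ("surveillance authorised by law necessary democratic society", "no violation"),
]

def word_profile(label):
    # pool the vocabulary of every past case with this outcome
    bag = Counter()
    for text, outcome in past_cases:
        if outcome == label:
            bag.update(text.split())
    return bag

profiles = {label: word_profile(label) for label in ("violation", "no violation")}

def predict(text):
    # pick the outcome whose past vocabulary overlaps most with the new text
    words = text.split()
    return max(profiles, key=lambda label: sum(profiles[label][w] for w in words))

print(predict("applicant held in degrading conditions without review"))
```

The real systems replace word overlap with statistically weighted features learned from the full corpus, which is what lets them approach the four-in-five accuracy the study reported.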
Blue J Legal’s offering is of the same ilk. The Canadian legal tech start-up boasts that its AI product, Tax Foresight, can predict with greater than 90 per cent accuracy what a court would hold in new circumstances. The tool has the additional allure of being simple to use: it asks questions about the client’s situation, and then analyzes thousands of rulings by the Tax Court of Canada, the Federal Court of Appeal and the Supreme Court of Canada. The AI system then provides a prediction, a tailored explanation, and a list of relevant cases for further research. “It will make a prediction based on all of the cases, and not just the leading cases,” explained Benjamin Alarie, the Osler Chair in Business Law at the University of Toronto and one of the co-founders of Blue J Legal. He maintains that such technologies will change the nature of litigation: the likelihood of settlement will increase while the likelihood of cases going to court will fall, “save perhaps for the most ambiguous,” where further legal development will be most valuable. “These are tools that allow people to perform some elements of their jobs better, and these algorithms can do a better job in certain things. It’s a very powerful complement to human judgment.”
Law firms are a hard sell
But it remains that law firms are proving to be a hard sell. A recent survey reveals yet again that the vast majority of law firms are uncomfortable being early adopters. According to the 2016 International Legal Technology Association/InsideLegal Technology Purchasing Survey, more than half of all firms (53 per cent) reported larger tech budgets in 2016 than in 2015, with most focusing their efforts on bolstering cybersecurity, information governance, business continuity and disaster recovery, and security compliance requirements. A staggering 87 per cent of respondents said their firms are not currently evaluating or utilizing artificial intelligence technologies or systems.
In many ways those figures are not surprising. For one thing, the legal industry spends less than one per cent of its revenues on research and development, compared with an average of 3.5 per cent for U.S. businesses, according to Dan Jansen, the head of NextLaw Labs, a business accelerator focused on investing in, developing and deploying new technologies to transform the practice of law. For another, even law firms themselves acknowledge that investing in new technologies is a challenge, mainly because of the traditional partnership model.
“Law firms are notoriously slow to adopt new technologies,” said Elizabeth Ellis, director of knowledge management at Torys LLP. “Our decision-making process is not what I would call optimal necessarily. We just seem to take a long time to evaluate something, to get all of the views.”
On top of that, most lawyers view AI as a threat instead of seeing it as an opportunity to help deliver better outcomes for clients, said Jordan Furlong, an analyst of the global legal market with Law21. A recent study by McKinsey & Co. estimates that 23 per cent of a lawyer’s time is automatable, while similar research by law professor Dana Remus of the University of North Carolina School of Law concludes that just 13 per cent of lawyers’ work can be performed by computers.
“When lawyers turn their minds to AI one of the first questions they are essentially asking is will it replace me,” said Furlong. “That is the wrong question. It’s not about the lawyer. It’s about the client. The question a client will ask is whether using AI will help me get what I need faster, more affordably or more effectively, with a better outcome.”
All of which does not bode well for traditional law firms. A recent global research study by Deloitte concluded that conventional law firms are no longer meeting today’s business needs. The majority (55 per cent) of participants in the study – legal counsel, CEOs and CFOs — have taken or are considering a significant review of their legal suppliers. The study also points out that purchasers of legal services want better and more relevant technologies, to be used and shared on integrated platforms.
Legal process improvements
Some law firms have seen the writing on the wall. “Our business is actually to make it as easy as possible for clients to solve things in the most practical, efficient way for them, and that’s why I get excited about the role that law firms can play, because we should be best positioned to be the problem solver, this reaggregator of all these different pieces and solutions, so that what the client sees at the end of the day is a simple, integrated solution to the different problems that they have,” said Matthew Peters, the national innovation leader at McCarthy Tétrault. The risk some law firms run is being seduced by the hype surrounding AI, erroneously believing that it will solve “all sorts of problems” without examining all of their options, added Peters. Before an AI system is considered, attention should be turned to legal process improvements, labour arbitrage and more efficient work tools, suggested Peters. A case in point is a new document automation service, complete with e-signatures and a contract management tool, that McCarthy developed in partnership with Exigent and will roll out to clients in the near future. “Let’s make sure that we are addressing what the client needs and not make this more complicated than it needs to be,” said Peters. That doesn’t mean Peters is not interested in AI offerings. On the contrary: he is now testing a series of AI products before settling on one, which he intends to launch in a couple of months.
A push to meet the needs of clients also drove Osler, Hoskin & Harcourt LLP to examine, try and ultimately implement new technologies, including a couple of AI offerings. Clients were demanding that the law firm provide legal services more efficiently at a lower cost, explained Mara Nickerson, Osler’s chief knowledge officer. While exploring different options to meet growing client demands, the firm kept its focus on legal process management throughout the exercise. “The focus needs to be on where you can gain efficiencies in your process and what technology can help you,” noted Nickerson. “If it’s AI, great, but not AI for the sake of AI.” Osler eventually settled on an e-discovery tool called Relativity, has been using Kira Systems, a machine learning contract analysis system, since August 2016, and tested Blue J Legal’s Tax Foresight – all of which yielded positive results. But even then, Osler determined that to get the “maximum value” out of these AI offerings, they would have to be placed in the hands of a dedicated team who spent the time to “really” learn how each works so that they could “train” the system. “The exciting thing about AI is that it is bringing additional functionality and capabilities to technologies that we didn’t have before, and so bringing exponential efficiencies to our processes in a way that we haven’t seen,” said Natalie Munroe, the head of Osler Works – Transactional, a new technology-based platform based in Ottawa to support corporate deals. But, added Nickerson, all of these new technologies need oversight by lawyers who review and grasp the nuances of the responses churned out by the machine learning systems.
Implementing new technology, especially involving AI systems, needs to be carefully planned, requires time, ongoing support, and buy-in from associates and partners. “You need to continue to evolve your practice as the technology improves, and as you work more closely with the program you start seeing more opportunities to use the technology that you may have not realized originally,” pointed out Ellis, who was speaking from her experience overseeing the implementation of Kira. “That all takes time and effort, and that is probably the hardest thing.”
Some would argue that the most challenging task is to convince lawyers within the firm or legal department to use the new technology. Buy-in in the middle ranks is critical, said Jansen of NextLaw Labs, an autonomous, wholly owned subsidiary of global law firm Dentons. “You have to have buy-in across the board, make sure you can drive the implementation and the integration and put project management skills against it, and then manage expectations about what the tool is and what it’s not,” added Jansen.
It is just as crucial that the AI tool be clean, simple, and intuitive, otherwise lawyers will simply not use it, said Chuck Rothman, director of e-discovery services at Wortzmans, now a division of McCarthy Tétrault. “In order for artificial intelligence to be really adopted in the legal industry, it has to be presented in a way that lawyers can very quickly grasp what the system is saying so that they can use it, because if they don’t understand it, they are not going to trust it, and if they don’t trust it, they won’t use it.”
The drive towards AI, however incremental, will likely also mean that law firms are going to have to review their traditional billing model, said Furlong. The time when law firms were the only game in town, when lawyers were the “only vehicle” by which legal services could be delivered, is coming to a close, and AI is going to help put an end to it, added Furlong. “All of these innovations like artificial intelligence are going to reduce the amount of time and amount of effort required to obtain a legal outcome so the very lax business model of selling time and expertise, rather than outcomes and results, is coming to an end,” said Furlong. Firms are paying attention to the evolving market demands: McCarthy is planning to have 50 per cent of its work charged on a non-hourly basis, while Torys is moving towards a fixed-fee billing model. “The model is changing as we incorporate these new technologies and because of the demands of the client,” said Nickerson.
In the meantime, in-house counsel like Garcia will likely have to bide their time before they witness monumental change thanks to AI. As Peters puts it: “For sure artificial intelligence is going to play a role in the future but not as soon and not in the way that a lot of people are imagining it now.”
This story was originally published in the magazine Canadian Lawyer.
Days after U.S. President Donald J. Trump issued a controversial executive order that barred refugees and temporarily suspended immigration from several predominantly Muslim countries, Amir Moravej and his team decided to lend a helping hand and launched an artificial intelligence immigration chatbot months ahead of schedule.
The sweeping executive order, since blocked by the courts, caused global chaos as it barred many passengers from flights to the United States, including one of Moravej’s team members. “He had an interview scheduled but couldn’t go to the U.S.,” explained Moravej. “And there were other students who planned to continue their studies in the U.S. but because of the policy changes had to stay here. So we decided to accelerate the launch to help students who are currently in Quebec to get their permanent residency.”
The AI-driven chatbot uses machine learning to assist people through the complicated process of putting together an immigration application. Immigration into Canada and Quebec (which has different programs in place) is a laborious three-step process. Applicants must determine if they are eligible, then must provide supporting documents, and finally fill out an application form, which in itself can be tricky.
That’s where the web-based application at Botler.ai comes into play. It automates much of the process. After an applicant answers questions about their qualifications and circumstances, Botler assesses whether they are eligible for the immigration program. If so, the applicant can then upload the supporting documents, which the AI tool reads and reviews. If all goes well, Botler automatically fills out the application form based on the information the applicant has provided.
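The first of those three steps, the eligibility screen, is essentially a set of rules applied to an applicant’s answers. The sketch below shows that kind of screen in miniature; the criteria, thresholds and field names are invented for illustration and are not the actual PEQ rules or Botler’s code.

```python
# Hypothetical rule-based eligibility screen, loosely in the spirit of the
# first step a tool like Botler performs. All rules here are made up.
def screen_applicant(answers):
    """Return (eligible, feedback) for a simplified, invented program."""
    issues = []
    if answers.get("months_studied_in_quebec", 0) < 18:
        issues.append("needs more study time in Quebec")
    if answers.get("french_level", 0) < 7:  # an assumed intermediate benchmark
        issues.append("needs a higher French test score")
    if not answers.get("valid_status", False):
        issues.append("must hold valid temporary status")
    # when the screen fails, the feedback list doubles as the "steps to
    # become eligible" that the chatbot can hand back to the applicant
    return (len(issues) == 0, issues)

eligible, feedback = screen_applicant(
    {"months_studied_in_quebec": 24, "french_level": 8, "valid_status": True}
)
print(eligible)
```

What distinguishes Botler from a static checklist like this is the later document-review step, where deep learning takes over from hand-written rules.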
Botler does more. If for whatever reason the applicant does not meet the immigration eligibility requirements, the AI tool can provide the applicant with “feedback and insights” and steps the candidate can take to become eligible, noted Moravej. And it learns and becomes “smarter” as it goes along because it uses deep learning, particularly for document reviews. The machine learns by recognizing patterns in the data it has previously “seen,” explained Moravej. That is particularly useful because it gives Botler the potential to recognize forged documents.
“There are two things the machine can learn,” explained the Iranian-born developer. “First of all, it learns the profile of the user such as his experience and his educational background – all these things the machine can understand. And the machine can understand the rules of immigration and can determine if you are eligible or not. All these things are basically a decision-making process, and computers are very good at making decisions because they can calculate way more possibilities than us as humans. And it will get smarter as it sees more immigration cases.”
Moravej, who developed Botler out of personal necessity, maintains that the chatbot will not replace lawyers. Indeed, Nonimo A&A Technologies, the nascent firm behind Botler, is working with Montreal law firm Campbell Cohen. Nonimo trains the machine, and the lawyers test it to ensure that Botler covers all cases and captures all of the exceptions.
“Botler can augment what lawyers are doing and make their lives easier as it automates many things that lawyers are doing manually right now,” said Moravej. “As a result, lawyers can process and can represent more clients because many of the tasks that they have to do manually can be automated using Botler. At the end of the day, a lawyer needs to represent a client before the government so Botler can in no sense replace a lawyer.”
At present, Botler can handle only a single immigration program – the Programme de l’expérience québécoise (PEQ) for foreign workers and students residing in Quebec. As of the beginning of April, 1,752 applicants used Botler to assess their eligibility, and Moravej said that 438 applicants will either be eligible or will become eligible to apply for PEQ, if they can resolve minor issues with their cases. In the near future, Moravej intends to adapt the technology to encompass other federal and provincial immigration programs.
Across the Atlantic, in England, Stanford University student Joshua Browder has embarked on a similar venture. The London-born developer and creator of DoNotPay, a chatbot that has overturned 160,000 parking fines in England, recently turned his sights to helping refugees claim asylum. The chatbot, which uses Facebook Messenger, helps refugees fill in immigration applications in the U.S. and Canada, and helps those in the United Kingdom apply for asylum support. Like Botler, the chatbot asks applicants a series of questions to determine which application the refugee needs to fill out and assesses whether the refugee is eligible for asylum protection under international law.
Both Moravej’s and Browder’s chatbots are the latest examples of online AI-powered tools that can expedite access to justice, an issue that has befuddled the legal profession for decades. “These tools that are now coming online are such a great opportunity to unlock access to justice, which is such a prevalent need in our society,” said Matthew Peters, national innovation leader at McCarthy Tétrault LLP in Toronto. “You have this whole huge swath of people in the middle class and all sides who quite frankly have (been the subjects of) a disservice from our profession who have not provided proper access to justice. We should be focusing on how fast can we get some of these solutions out.”
Jin Ho Verdonschot, a justice technology architect at HiiL Innovating Justice, also believes that AI holds much promise for providing greater access to justice. “Artificial intelligence is a very good example of one of the many innovations now happening in the legal services world,” Verdonschot said at a conference held in Montreal last fall. “There are so many tools (that are) emerging and being developed that will have real value and can really empower our citizens. And I think AI will have a place in that future.”