Chief Justices call for modernization of court system

The chief justices of four courts, addressing hundreds of judges and lawyers in person at the Montreal courthouse for the first time since the onset of the pandemic, broadly outlined their priorities and concerns at Quebec's opening of the courts ceremony, from the promise and pitfalls of using technology to modernize the justice system, to the debilitating impact of chronic underfunding, to the erosion of decorum in the courtroom and the pernicious effects of disparaging social media comments.

The chief justices, who had no choice but to implement technological innovations at breakneck speed after COVID-19 struck in March 2020 in order to arrest the paralysis of the justice system, now warn that while the technological modernization of the courts is inevitable and necessary, it is not a panacea that will resolve the host of challenges confronting the justice system.

“The digitization of the courts will not solve all the problems we face, and it may even raise new ones, but it is a step in the right direction,” remarked Quebec Court of Appeal Chief Justice Manon Savard, who underlined that the appellate court is working “intensely” with the provincial Ministry of Justice to establish a digital Court of Appeal within the next two years.

“This movement is irreversible. Society as a whole is increasingly turning to digital processes, in all sectors of activity. Courts must keep pace. In order to maintain or even improve the efficiency of courts in a post-pandemic context, the implementation of a reform focused on the use of technology will certainly be part of the solution,” said Chief Justice Savard at the summit, entitled “Building the Future.”


Legal profession concerned about algorithmic bias

Algorithms, the sets of instructions computers use to carry out a task, have become an integral part of everyday life, and they are making inroads into the law. In the U.S., judges in some states can use algorithms as part of the sentencing process, and many law enforcement agencies use them to predict when and where crimes are likely to occur. They have been used for years in law firm recruitment. And with advances in machine learning, they are also being used to conduct legal research, predict legal outcomes, and determine which lawyers win before which judges.
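At its simplest, an algorithm really is just such a recipe of steps. A toy sketch of the "which lawyers win before which judges" idea, with entirely invented names and data, might look like this:

```python
# A toy illustration: an "algorithm" is a fixed recipe of steps.
# This hypothetical example ranks lawyers by their past win rate
# before a given judge. All names and records are invented.

def win_rate(record):
    """Fraction of a lawyer's past cases that were wins."""
    wins = sum(1 for outcome in record if outcome == "win")
    return wins / len(record)

past_cases = {
    "Lawyer A": ["win", "loss", "win", "win"],   # 3/4 wins
    "Lawyer B": ["loss", "loss", "win"],         # 1/3 wins
}

# Sort lawyers from highest to lowest win rate.
ranking = sorted(past_cases,
                 key=lambda name: win_rate(past_cases[name]),
                 reverse=True)
print(ranking)  # ['Lawyer A', 'Lawyer B']
```

The "prediction" here is nothing more than arithmetic on past outcomes, which is also why such tools inherit whatever is skewed in their historical data.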

Most algorithms are created with good intentions, but questions have surfaced over algorithmic bias at job-hunting websites, credit reporting bureaus, social media sites and even in the criminal justice system, where sentencing and parole decisions appear to be biased against African Americans.
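A minimal sketch, on entirely synthetic data, shows how such bias can arise without anyone intending it: a model trained on historically skewed labels simply reproduces the skew as its prediction.

```python
# Synthetic sketch of algorithmic bias: when historical labels are
# skewed against one group, a naive model turns that skew into its
# "risk" prediction. All data here is invented.

from collections import defaultdict

# (group, flagged) pairs reflecting a biased labelling process:
# group "B" was historically flagged far more often than group "A".
history = ([("A", False)] * 80 + [("A", True)] * 20 +
           [("B", False)] * 50 + [("B", True)] * 50)

# A naive "algorithm": predict risk as the historical rate per group.
counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, flagged in history:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

risk = {g: flagged / total for g, (flagged, total) in counts.items()}
print(risk)  # {'A': 0.2, 'B': 0.5} -- the skew in the data becomes the prediction
```

Nothing in the code mentions the groups' actual behaviour; the disparity lives entirely in the training data, which is exactly the pattern critics point to in real systems.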

And the issue is likely to gain traction as machine learning and predictive coding become more sophisticated, particularly since with deep learning (which learns autonomously) algorithms can reach a point where humans can often no longer explain or understand them, said Nicolas Vermeys, assistant director of the Cyberjustice Laboratory in Montreal.

AlphaGo is a case in point. When AlphaGo, Google’s artificial intelligence system, defeated the 18-time world champion at Go, the complex and highly intuitive ancient Chinese board game, it was not just a demonstration of yet another computer beating a human at a game. Go, a game with simple rules but profound complexity, has more possible positions than there are atoms in the universe, leading some to describe it as the Holy Grail of AI gaming. The victory was a remarkable feat because AlphaGo was not taught how to play Go. It learned how to play, and win, by playing millions of games, using a form of AI called deep learning, which relies on neural networks that allow computer programs to learn much as humans do. More than that, the victory showed that computers are now able to rely on their own intuition, something it was thought only humans could do.
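The core idea of "learning rather than being programmed" can be seen even in a single artificial neuron. The pure-Python sketch below (a classic perceptron, far simpler than AlphaGo) learns the logical AND function by nudging its weights after each mistake; the learned weights are just numbers, which hints at why stacking millions of such units produces models humans struggle to interpret.

```python
# A single artificial neuron learns logical AND from examples via
# the perceptron rule: nudge the weights after each mistake. The
# final weights are found by training, not written by a programmer.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights, adjusted during training
b = 0.0          # bias term
lr = 0.1         # learning rate

def predict(x):
    """Fire (1) if the weighted sum of inputs crosses the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                  # a few passes over the examples
    for x, target in examples:
        error = target - predict(x)  # -1, 0, or +1
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1] -- AND learned
```

Deep networks apply the same adjust-on-error principle at vastly greater scale, across millions of parameters, which is where interpretability breaks down.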

Another example is Deep Patient. The brainchild of a research group at Mount Sinai Hospital in New York, it is a machine learning tool trained to detect illness using data from approximately 700,000 patients. Deep Patient turns out to be good at detecting hidden patterns in the hospital data that indicate when people are becoming ill. It also appears to be remarkably good at anticipating the onset of schizophrenia, a disease that is very difficult for physicians to predict. But the people behind Deep Patient do not yet understand why it is so good at predicting schizophrenia, because they do not understand how it works.

“We have no idea how algorithms arrived at their decision and therefore cannot evaluate whether the decision has value or not,” said Vermeys, whose research institution is studying the issue of algorithmic bias. “There is a risk to relying completely on machines without necessarily understanding their reasoning.”

No human is completely objective, and the same holds for algorithms, since they reflect the choices of the people who programmed them, noted Ian Kerr, a law professor at the University of Ottawa and the Canada Research Chair in Ethics, Law and Technology. Programmers operate on certain premises and presumptions that are not tested by anybody else, which leads to results based on those premises and presumptions, which in turn gives rise to bias, added Kerr.

On top of that it is very difficult to challenge such decisions because “whoever owns the algorithms has trade secrets, isn’t likely to show you the source code, isn’t likely to want to talk about the secret source and what makes the algorithm work,” said Kerr. “What justifies the algorithm is its success or perceived success which is very different from whether or not it operates in biased ways.”

Aaron Courville, a professor with the Montreal Institute for Learning Algorithms, shares those concerns. “We are really in a phase where these algorithms are starting to do interesting things, and we need to take seriously the issues of responsibility,” said Courville.

Europe is taking a serious look at these issues. Under the European Union’s new General Data Protection Regulation (GDPR), automated individual decision-making that “significantly affect[s]” users will be restricted, argue Bryce Goodman of the Oxford Internet Institute and Seth Flaxman of the University of Oxford’s Department of Statistics in a paper. Expected to be in force in 2018, the GDPR will also effectively create a “right to explanation,” according to the authors. In other words, users can ask for an explanation of an algorithmic decision that was made about them.

“This is where Europe and the U.S. go wild in their disagreements,” explained Kerr, who has also written about the issue of a right to explanation. “Europe starts with this principled approach that makes sense. If a decision is about me and it has sort of impacts on my life chances and opportunities, I should be able to understand how that decision was made. It invokes large due process concerns.

“The due process idea is that no important decision should be made about me without my own ability to participate. I have a right to a hearing. I have a right to ask questions. So all of these kinds of rights are kind of bound up in this notion of the duty to an explanation. And the hard thing is that an algorithm isn’t in the habit of explaining itself, which means that if that kind of law prevails then people who use algorithms and design algorithms will have to be a lot more forthcoming about the mechanisms behind the algorithm.”
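What such an explanation could look like is easiest to see for a simple, transparent model, where each factor's contribution to the decision can be itemized. The sketch below is a hypothetical loan-style scoring rule with invented features and weights; deep models offer no analogous breakdown, which is precisely the problem Kerr describes.

```python
# Hypothetical illustration of a "right to explanation": for a
# transparent linear scoring model, every factor's contribution to
# the decision can be reported. Features and weights are invented.

weights = {"income": 0.5, "years_at_address": 0.3, "prior_defaults": -2.0}
threshold = 1.0

def decide_and_explain(applicant):
    """Return the decision plus each factor's contribution to the score."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())
    decision = "approved" if score >= threshold else "denied"
    return decision, contributions

decision, why = decide_and_explain(
    {"income": 3.0, "years_at_address": 2.0, "prior_defaults": 1})
print(decision)  # denied
print(why)       # prior_defaults contributed -2.0, the decisive factor
```

For a deep network, the "weights" are millions of opaque parameters spread across layers, so no comparably honest itemization exists, which is why a legal duty to explain would force designers to be, as Kerr puts it, far more forthcoming about their mechanisms.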


Further reading:

Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks by ProPublica, an American independent, nonprofit news organization.

Chief Justice John Roberts is a Robot by University of Ottawa law professor Ian Kerr.

And for the technologically inclined:

Mastering the Game of Go with Deep Neural Networks and Tree Search by David Silver, the lead researcher on the AlphaGo project.

Artificial intelligence: Law firms are a hard sell

Fernando Garcia is looking forward to the day when he can get his hands on Beagle, an automated contract analysis system powered by artificial intelligence that reads contracts in seconds, highlights key information visually with easy-to-read graphs and charts, and gets “smarter” with each reviewed contract. Also on his bucket list is an offering by another Canadian legal tech start-up, Blue J Legal, which also uses AI to scan legal documents, case files and decisions to predict how courts will rule in tax cases. At a time when the majority of in-house counsel are under intense pressure to shave costs and run lean teams, such powerful tools are a godsend. “There’s always that pressure to do more with less so when a tool comes along that can provide more efficiency, more risk mitigation, and can let you do your job better and focus on providing value-added, it is a strategic advantage,” noted Garcia, general counsel, government affairs and corporate secretary with Nissan Canada Inc. “It’s going to fundamentally change our job.”


Montreal AI chatbot helps people immigrate to Quebec

Days after U.S. President Donald J. Trump issued a controversial executive order that barred refugees and temporarily suspended immigration from several predominantly Muslim countries, Amir Moravej and his team decided to lend a helping hand and launched an artificial intelligence immigration chatbot months ahead of schedule.

The sweeping executive order, since blocked by the courts, led to global chaos as it barred many passengers from flights to the United States, including one of Moravej’s team members. “He had an interview scheduled but couldn’t go to the U.S.,” explained Moravej. “And there were other students who planned to continue their studies in the U.S. but because of the policy changes had to stay here. So we decided to accelerate the launch to help students who are currently in Quebec to get their permanent residency.”

The AI-driven chatbot uses machine learning to assist people through the complicated process of putting together an immigration application. Immigration into Canada and Quebec (which has different programs in place) is a laborious three-step process. Applicants must determine if they are eligible, then must provide supporting documents, and finally fill out an application form, which in itself can be tricky.

That’s where the web-based application comes into play. It automates much of the process. After an applicant answers questions about their qualifications and circumstances, Botler assesses whether they are eligible for the immigration program. If so, the applicant can then upload the documents, which the AI tool reads and reviews. If all goes well, Botler automatically fills out the application form based on the information the applicant has provided.
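The three-step flow described above, ask questions, check eligibility, then pre-fill the form, can be sketched as a simple rule-based check. The criteria and field names below are invented for illustration; they are not the actual PEQ rules or Botler's implementation.

```python
# Hypothetical sketch of a chatbot-style eligibility flow: collect
# answers, apply rules, then pre-fill the application form. The
# rules and field names here are invented, not Botler's or the PEQ's.

MIN_EXPERIENCE_MONTHS = 12  # invented threshold for illustration

def assess(applicant):
    """Return (eligible, reasons) for a dict of applicant answers."""
    reasons = []
    if not applicant.get("resides_in_quebec"):
        reasons.append("must reside in Quebec")
    if applicant.get("months_of_experience", 0) < MIN_EXPERIENCE_MONTHS:
        reasons.append("needs at least 12 months of experience")
    return (len(reasons) == 0, reasons)

def prefill_form(applicant):
    """Auto-fill the application from answers already collected."""
    return {"name": applicant["name"],
            "program": "PEQ",
            "experience_months": applicant["months_of_experience"]}

candidate = {"name": "A. Candidate", "resides_in_quebec": True,
             "months_of_experience": 18}
eligible, reasons = assess(candidate)
if eligible:
    print(prefill_form(candidate))
else:
    print("Not yet eligible:", reasons)  # the "feedback" an ineligible user sees
```

The real system layers machine learning on top (notably for reading uploaded documents), but the decision step is, as Moravej says, fundamentally a rule-driven decision-making process.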

Botler does more. If for whatever reason the applicant does not meet the immigration eligibility requirements, the AI tool can provide the applicant with “feedback and insights” and steps the candidate can take to become eligible, noted Moravej. And it learns and becomes “smarter” as it goes along because it uses deep learning, particularly for document reviews. The machine learns by recognizing patterns in the data it previously “saw,” explained Moravej. That is particularly useful as Botler has the potential to recognize forged documents.

“There are two things the machine can learn,” explained the Iranian-born developer. “First of all, it learns the profile of the user such as his experience and his educational background – all these things the machine can understand. And the machine can understand the rules of immigration and can determine if you are eligible or not. All these things are basically a decision-making process, and computers are very good at making decisions because they can calculate way more possibilities than us as humans. And it will get smarter as it sees more immigration cases.”

Moravej, who developed Botler out of personal necessity, maintains that the chatbot will not replace lawyers. Indeed, Nonimo A&A Technologies, the nascent firm behind Botler, is working with Montreal law firm Campbell Cohen. Nonimo trains the machine, and the lawyers test it to ensure that Botler covers all cases and captures all of the exceptions.

“Botler can augment what lawyers are doing and make their lives easier as it automates many things that lawyers are doing manually right now,” Moravej told me. “As a result, lawyers can process and can represent more clients because many of the tasks that they have to do manually can be automated using Botler. At the end of the day, a lawyer needs to represent a client before the government so Botler can in no sense replace a lawyer.”

At present, Botler can handle only a single immigration program – the Programme de l’expérience québécoise (PEQ) for foreign workers and students residing in Quebec. As of the beginning of April, 1,752 applicants had used Botler to assess their eligibility, and Moravej said that 438 of them either are eligible or will become eligible to apply for the PEQ if they can resolve minor issues with their cases. In the near future, Moravej intends to adapt the technology to encompass other federal and provincial immigration programs.

Across the Atlantic, Joshua Browder, a Stanford University student in Oxford, England, has embarked on a similar venture. The London-born developer and creator of DoNotPay, a chatbot that has overturned 160,000 parking fines in England, recently turned his sights on helping refugees claim asylum. The chatbot, which uses Facebook Messenger, helps refugees fill in immigration applications in the U.S. and Canada, and it helps those in the United Kingdom apply for asylum support. Like Botler, the chatbot asks applicants a series of questions to determine which application the refugee needs to fill out and assesses whether the refugee is eligible for asylum protection under international law.

Both Moravej’s and Browder’s chatbots are the latest examples of online AI-powered tools that can expedite access to justice, an issue that has vexed the legal profession for decades. “These tools that are now coming online are such a great opportunity to unlock access to justice, which is such a prevalent need in our society,” said Matthew Peters, national innovation leader at McCarthy Tétrault LLP in Toronto. “You have this whole huge swath of people in the middle class and all sides who quite frankly have (been the subjects of) a disservice from our profession who have not provided proper access to justice. We should be focusing on how fast can we get some of these solutions out.”

Jin Ho Verdonschot, a justice technology architect at HiiL Innovating Justice, also believes that AI holds much promise for providing greater access to justice. “Artificial intelligence is a very good example of one of the many innovations now happening in the legal services world,” Verdonschot said at a conference held in Montreal last fall. “There are so many tools (that are) emerging and being developed that will have real value and can really empower our citizens. And I think AI will have a place in that future.”