Nearly a decade after co-founding the Cyberjustice Laboratory, a unique hub that analyses the impact of technology on justice while developing concrete technological tools adapted to the reality of justice systems, Karim Benyekhlef and Fabien Gélinas have set their sights on artificial intelligence.
Autonomy through Cyberjustice Technologies (ACT), the latest brainchild of the Cyberjustice Laboratory, is the largest international multidisciplinary research initiative seeking to leverage artificial intelligence to increase access to justice, while providing justice stakeholders with a roadmap to help them develop technology that is better adapted to justice.
“The main objective behind the initiative is to ensure that individuals know their rights, understand their legal situation regarding their problems and improve access to justice – and AI may help accomplish those goals,” said Benyekhlef, the head of Cyberjustice Laboratory and a law professor at the Université de Montréal. “There’s a good chance that our reflections and work on areas such as privacy, data management, data governance could easily be used in other realms such as in public administration. But we must be careful. We cannot play the sorcerer’s apprentice. These are tools that are not yet mature. There’s work to be done.”
A class action launched in 2017 in Michigan reveals the perils of an AI-powered software program gone awry, added Benyekhlef. The suit was filed after more than 40,000 unemployment claimants were accused of benefits fraud based on the results of AI software and had nearly US$100 million seized through tax refunds or garnished wages, according to the plaintiffs. The state’s auditor general found that 93 per cent of the charges were spurious. The consequences go beyond the class action: poorly conceived or implemented AI can undermine confidence in the use of AI in the legal system, said Benyekhlef.
“Artificial intelligence that could be useful would be set aside because there would be little or no social acceptability,” remarked Benyekhlef. “These tools have to be reviewed before implementation, improved, and while perfecting them introduce built-in protections to ensure the respect of fundamental rights. That’s why we’re working with our partners, both in the government and the private sector, to build awareness of these kinds of problems.”
That’s where ACT comes into play. Over the next five years, ACT will take an inventory of existing technology and canvass situations where AI is used in the justice system, evaluate its impact through case studies, develop a body of best practices, and establish a governance framework to ensure the fair use of AI in the justice system.
The burgeoning venture has brought together 50 researchers from around the world and 42 partners from business, industry, institutional, and community and social circles. The likes of the federal and Quebec justice ministries, the Courts Administration Service, Community Legal Education Ontario and the Quebec Bar, as well as giants like Microsoft, have all joined forces, as have a handful of law firms. The work conducted by ACT researchers and partners is intended to provide a better understanding of the practical, ethical and socio-legal issues arising from the integration of AI tools within the judicial system. It will also devote attention to the design and simulation of technological tools for conflict prevention and resolution.
All told, there are 16 research projects that will examine a host of issues, many of which take direct aim at AI tools legal practitioners may be using or are considering using. One project will analyze algorithmic tools that claim to predict the probable outcome of a trial. Another will assemble an inventory of common practices involving AI tools that are used or under development by legal authorities such as police as well as administrative and tax authorities. Yet another will analyse existing traditional and technology-based mechanisms for debt recovery in France, Belgium and Québec and then design a smart contract, using blockchain technology, to make the procedure more efficient and effective. There is also a project that will examine tools being developed to assist self-represented litigants and determine their effectiveness, relevance and satisfaction rates.
“We wanted to put in place projects that are closely aligned with market developments,” noted Gélinas, a law professor at McGill University who heads the Private Justice and the Rule of Law Research group. “Things are moving rapidly, and rather than stay in the ivory tower, we really want to work with stakeholders from the private and public sector who know what’s taking place on the ground.”
While ACT is focused on offering concrete solutions to judicial actors and proposing tangible recommendations to decision-makers so that they can develop informed public policies around the use of AI tools in the justice system, thorny challenges lie ahead. The technology is evolving so rapidly that it can be hard to keep up. Amy Salyzyn, a law professor at the University of Ottawa and the lead researcher on a project that will develop an inventory of policies and best practices around AI automation, recalls that she tried to create a catalogue of legal applications used by Canadians, and “as soon as you create the list, it was outdated the next week.” She would not be surprised if the same thing takes place while creating a typology of existing legal technologies that use automation and AI. “It’s always going to be a point-in-time document, but I think understanding the broad types of use is a helpful way forward,” said Salyzyn.
Data is the fuel that drives AI. “It’s the essential ingredient,” said Benyekhlef. “If you don’t have data, you cannot develop algorithms or predictive AI tools.” The companies that have made the most advances in AI happen to be technological behemoths like Amazon, Facebook and Google, which are sitting on troves of data generated by their own applications and tools, said Benyekhlef. Since many AI technologies are proprietary, “understanding what they are and who is using them can be a challenge,” remarked Salyzyn.
Developing public policies around the use of AI will also be a challenge, according to Gélinas. Coupling the development of AI technology with legal reasoning, the traditional conception of court decisions, and the use and role of judges are all among “the biggest challenges of research,” said Gélinas. Perhaps more so because, to date, “we still don’t completely understand how AI tools produce results,” noted Gélinas. “We don’t know how this tool exactly works, and therefore the management of AI transparency and its link with legal reasoning is a great challenge.”
This story was originally published in The Lawyer’s Daily.