Quebec Superior Court has given the green light to a sandboxed pilot project that allows some twenty judges to use artificial intelligence to help them with documentary and legislative research, translations and drafting support. The “avant-garde” endeavour, however, draws the line at decision-making and deliberative tasks, a conservative approach that has earned plaudits from legal observers.
“The advent of generative artificial intelligence tools represents a major transformation of the information and documentation landscape,” said Paul-Jean Charest, a spokesperson with Quebec Courts. “The Court deemed it preferable to address this development proactively and in a managed manner, rather than being subjected to it.”
The initiative, a surprise even to those in the Quebec legal community immersed in all things AI, is the latest effort by courts and tribunals around the world to turn to AI to address chronic problems such as growing case backlogs, rising workloads and a dearth of skilled court employees, in the hope of speeding access to justice and improving administrative efficiency.
“If we weigh up the expected benefits against the risks in any given situation, I think it’s something that should go ahead because artificial intelligence is present in the legal field,” said Elhadji Niang, a Quebec City lawyer with Bouchard+Avocats who specializes in legal and regulatory compliance in information technology. “I am convinced that technology can serve the judiciary, and not the other way around. That being said, as with any approach, it must be supervised,” added Niang, a member of the Quebec Bar’s AI committee and this year’s recipient of the Quebec Bar Medal.
There is little doubt that AI and generative AI are increasingly being embraced by judicial systems. A UNESCO survey found that approximately 44 per cent of judicial operators (including judges, prosecutors and civil servants in legal administration working in 96 countries) use AI tools for work, and 41 per cent use AI chatbots. In many countries, AI is reshaping the justice system, points out a recent report by the Organisation for Economic Co-operation and Development (OECD). The French Court of Cassation, the highest court in the French judiciary, developed an AI system that categorizes incoming appeals and petitions and channels them to the appropriate chambers, based on training from past cases. In Brazil, an AI system automates the examination of appeals to the Supreme Court by identifying cases with “general repercussions,” while China has put in place a centralized so-called “smart” system that integrates big data analytics, algorithmic software and AI into a single platform to help judges with, among other things, case management.
The Quebec Superior Court’s AI pilot project is by no means giving decision-making power to AI, noted Nicolas Vermeys, a Université de Montréal law professor and director of the Public Law Research Centre. “What they want to put in place is more or less a search engine on steroids,” said Vermeys, who is also the associate director of the Cyberjustice Laboratory, a hub that analyzes the impact of technologies on justice and develops concrete technological tools adapted to the realities of justice systems. “So we’re not talking about tools that will make decisions or interpret texts and push information to them. It’s a tool that will really identify, in light of the guidelines given by the judge, the texts that meet the identified criteria.”
At least ten AI chatbots, developed internally, will be integrated into Microsoft’s Copilot Pro, a generative AI assistant. Each chatbot specializes in a field of law, such as the Civil Code of Quebec, and was trained only on legal texts and internally produced documents. The chatbots will be tested for documentary research, such as quickly finding passages from texts or public decisions. They have also been trained to assist with legislative research, such as finding the version of a section of legislation in force on a given date, and to help with translation and terminology, particularly between French and English. They will also provide editorial support to improve the clarity of wording, standardize tone or generate simple normative passages such as procedural history, said Charest.
Guideline
At the same time the pilot project was launched, the Superior Court publicly issued a nine-page guideline, entitled Governance framework for artificial intelligence, that sets out the principles steering its use of AI. The guideline attempts to balance the court’s use of AI against the significant impact the flourishing technology may have on judicial independence and public confidence in the administration of justice. The framework takes into account a series of guidelines and recommendations issued by the Canadian Judicial Council, such as its ethical principles for judges, its guidelines for the use of AI in Canadian courts, and its blueprint for the security of court information.
“The Superior Court is showing leadership not only by being at the forefront, but also by setting an example,” said Université de Montréal law professor Shana Chaffai-Parent, an expert in civil procedure and the judiciary. “Since these tools exist, if we demonize them and reject their use, people, whether lawyers, the public or judges, may use them secretly or misuse them. So, with a pilot project like this, it allows them to display the values that should be adopted not only by the courts, but also by the practices and individuals that go before the courts.”
The guideline plainly stipulates that AI must be used with caution and discernment, and that generated content must be “rigorously verified” by users, be they judges or the lawyers supporting them, due to the risk of errors, inaccurate statements or hallucinations. The chatbots will not be allowed to assist judges in their decision-making and “intellectual process,” replace judges in decision-making, generate draft judgments in whole or in part, interpret texts, summarize lengthy texts or provide legal opinions, the guideline stresses. These restrictions are built into the configuration of the chatbots, which will automatically reject requests that fall outside the authorized scope, says the guideline.
“Judges have specified their expectations of lawyers when they use AI,” remarked Niang. “So, the reverse is true: we cannot set rules for judicial officers and expect magistrates, who are supposed to administer justice, to be uninterested in how they themselves should appropriate these tools.”
Quality control
The pilot project, which will run for several months, may be expanded depending on results, which will be assessed against both qualitative and quantitative indicators, said Charest. The Superior Court, however, faces “several constraints” in its bid to gauge results, he added. For reasons of judicial independence and to preserve the secrecy of deliberations, it cannot monitor judges’ requests or the responses generated by the AI chatbots, said Charest.
But it will measure elements such as the perceived time required to complete certain tasks before and after the introduction of the AI tools, and judges’ satisfaction with and confidence in the results provided by the tools, both based on self-reporting. The accuracy and reliability of the generated content (such as the absence of errors, the accuracy of citations and respect for judicial tone) will also be measured through tests and self-reporting. The occurrence of so-called incidents, such as errors, hallucinations and inconsistencies, will also be assessed, as will the frequency and average duration of interactions. “This data will be compiled by the Court’s management, without access to judicial content, in order to draw conclusions,” said Charest.
“A fundamental point will be to look at the quality control of the results that come out of using the tool itself,” said Chaffai-Parent. “How easy or feasible is it to cross-check the results produced by AI, and how much time does this take? If the time spent cross-checking is similar to the time spent performing the task, is there really any time saving? It will also be fundamental to see what the points of contact or friction are, if any, with the most important values that are likely to be impacted by the justice system, particularly judicial independence.”
Questions linger
Legal actors, while applauding the initiative, still have questions and, in some cases, reservations. Former Court of Quebec judge Nicole Gibeault, who sat on the bench for 22 years, took some convincing before recognizing that “we can work with this (AI) tool but we have to be extremely careful.” The use of chatbots to help translate or aid with drafting concerns Gibeault, because clear and precise language is fundamental in judgments. “Not only can you end up with translations that, in my opinion, can be completely off the mark, but you can also end up with translations that do not represent what the person or judge means, because there are subtle nuances in what the court means to impart,” said Gibeault. “So, I’m not sure that it won’t take twice as much work to check its accuracy.”
There is another thorny issue the pilot project faces, noted Vermeys. The court is trying to implement a series of tools that will locate information and put it quickly in the hands of a judge, he explained. But there is a “risk” that the AI chatbots will refer to some documents and not others. “So, the judges will still have to do some work to ensure that the research was done properly,” said Vermeys.
Security issues are also uppermost in the minds of legal observers, particularly because the AI chatbots used by the pilot project will be integrated into Microsoft’s Copilot Pro. The Canadian Judicial Council (CJC) points out in its guidelines for the use of AI in Canadian courts that the integration of AI tools in a court brings “unique” information security challenges. The Superior Court declares in its guideline that the use of AI is carried out within a secure framework that ensures the confidentiality of the data “that will pass through these tools.” The guideline also underlines that all data transmitted by users is encrypted and hosted exclusively in Canada, in secure sandbox environments designed to prevent unauthorized external access and with which users must voluntarily interact.
“We must not wait until we are facing an iceberg like the Titanic,” said Niang. “I hope that the Superior Court has taken the time to define guidelines on when to intervene to prevent the pilot project from moving in a direction that is not at all desirable in terms of data security and expected results, so that appropriate measures can be taken before moving forward.”
The CJC also warns about “too much reliance” on proprietary AI, as it could compromise judicial independence. That is a legitimate concern, said Vermeys. Companies can change software policies in ways that affect data privacy and raise concerns about how user data is used to train AI. Last year, the software company Adobe became immersed in a high-profile controversy that incited user backlash after it “updated” its terms of service for a range of its products. “If a tool evolves in a certain way, or if the service is acquired by another company, we need to make sure we can change if the new company’s policy or approach does not fit with that of the court,” said Vermeys. “We must be very careful to acquire tools that do not limit us.”
Dependence on technology providers is a real and valid concern, said Niang. “I dare to hope that, at the contractual level, the judiciary has put in place clear provisions regarding the use of Microsoft Copilot Pro by the judiciary,” he said. “Because if that is not the case, then there is indeed cause for concern.”
The Court of Quebec is part of a subcommittee on AI that is chaired by Associate Chief Justice Benoit Sabourin. The subcommittee is mandated to study issues related to AI and make recommendations to ensure its safe and ethical use. “We remain attentive to developments in practices and initiatives at other institutions, particularly in the areas of collaboration and expertise sharing,” said Lucie Demers, Executive Assistant to Court of Quebec Chief Justice Henri Richard.
The Federal Court issued interim principles and guidelines on its use of AI in late September. The document categorically states that the court will not use AI, “and more specifically automated decision-making tools,” in making its judgments and orders “without first engaging in public consultations.” AI tools have been used since March 2025 to assist language specialists who translate Federal Court decisions. “The AI tools are complementary to the translation – they do not replace the human doing the work,” according to the interim principles and guidelines.
Small steps are best for this type of technology, said Chaffai-Parent. They have to start somewhere, because otherwise the court will be overwhelmed before it has even had a chance to try them out, she added. “If we want to promote transparent use and disclosure of AI tools among lawyers, the courts must do the same,” said Chaffai-Parent. “This pilot project, through serious reflection, makes that possible. That is what I am most pleased about.”
Former Court of Quebec judge Gibeault also believes that the courts should proceed slowly and carefully. “We must see short, medium- and long-term results and adjust accordingly,” said Gibeault. “I dare hope that human adjudication will never be replaced by robots, whether AI or anyone else.”
This story was originally published in Law360 Canada.