Panel 1 | Regulating AI in the Justice Sector: The Regulatory Reflex
Biographies
Chair
Nicolas Vermeys, LL. D. (Université de Montréal), LL. M. (Université de Montréal), CISSP, is the Director of the Centre de recherche en droit public (CRDP), the Associate Director of the Cyberjustice Laboratory, and a Professor at the Université de Montréal’s Faculté de droit.
Mr. Vermeys is a member of the Quebec Bar, as well as a certified information systems security professional (CISSP) as recognized by (ISC)², and is the author of numerous publications on the impact of technology on the law, including Droit codifié et nouvelles technologies : le Code civil (Yvon Blais, 2015) and Responsabilité civile et sécurité informationnelle (Yvon Blais, 2010).
Mr. Vermeys’ research focuses on legal issues pertaining to artificial intelligence, information security, developments in the field of cyberjustice, and other questions relating to the impact of technological innovations on the law. He is often invited to speak on these topics by the media, and regularly lectures for judges, lawyers, professional orders, and government organizations, in Canada and abroad.
Panelists
Arnaud Latil is a senior lecturer at Sorbonne University, a researcher at the SND Center (Sciences, Norms, Democracy, UMR 8011), and a member of SCAI (Sorbonne Cluster for AI). He is the author of Le droit du numérique, une approche par les risques (Digital Law: A Risk-Based Approach) (Dalloz, 2nd ed., 2024). He is an expert advisor to the European AI Office (European Commission).
Bertrand Gervais, B.A.A., LL.B., has been a member of the Quebec Bar since 2003. Since 2015, he has held the position of Director of the Registry and Registrar of Appeals at the Montreal headquarters of the Quebec Court of Appeal. He oversees the planning, coordination, and continuous improvement of registry services, while liaising between the judiciary and court staff. He also works on the Quebec Ministry of Justice’s Lexius project, which aims to establish a fully electronic court system.
Mr. Gervais holds a bachelor’s degree in law from the University of Montreal and a bachelor’s degree in business administration from HEC Montreal. He has also held various coordination and management positions within the Court of Appeal, including Acting Director General, Research Service Coordinator, and Research Lawyer. Committed to the legal community, he is a member of several committees of the Montreal Bar Association.
Amy Salyzyn is a Professor in the Faculty of Law, Common Law Section, at the University of Ottawa. She is an expert in legal ethics, lawyer regulation, the use of technology in the delivery of legal services, and access to justice. At the University of Ottawa, she teaches Torts as well as upper-year seminars in legal ethics and the use of AI in the legal profession. Amy is called to the bar in Ontario and is currently the Board Chair of the Canadian Association for Legal Ethics.
Before coming to the University of Ottawa, Amy served as a judicial law clerk at the Court of Appeal for Ontario and practiced at a Toronto litigation boutique. Her litigation practice included a wide variety of civil and commercial litigation matters including breach of contract, tort, professional negligence, securities litigation and employment law as well as administrative law matters. Amy received her J.S.D. and LL.M. from Yale Law School and her J.D. from the University of Toronto Law School, where she was awarded the Dean’s Key upon graduation.
Aimé Toumelin is CEO and co-founder of Hadaly. Hadaly is reshaping mergers and acquisitions for small and mid-sized businesses. In 2025 alone, the platform is powering over 150 active business processes, helping advisors, brokers, and entrepreneurs streamline due diligence and deal execution. To date, Hadaly has already supported transactions totaling more than $500 million.
By leveraging artificial intelligence, Hadaly automates the structuring of financial and legal documents, organizes data rooms, and generates high-value outputs such as valuation reports, Confidential Information Memorandums (CIMs), and risk dashboards. These capabilities reduce costs, accelerate timelines, and improve decision-making in transactions.
Hadaly’s mission is to democratize access to advanced M&A tools, enabling faster, more transparent, and more efficient transactions that empower sellers, buyers, and advisors across the SME ecosystem.
Summary
Building on the conceptual foundations established during Professor Mireille Hildebrandt's presentation, the conference moved to a more regulatory and institutional perspective with the first panel, moderated by Professor Nicolas Vermeys, Associate Director of the Cyberjustice Laboratory. The discussion explored how artificial intelligence is reshaping regulation, governance, and professional practice in the justice sector, bringing together perspectives from Europe, Canada, and industry, and highlighting the tension between innovation, ethics, and accountability.
The conversation began with Professor Arnaud Latil, who analyzed the European Union’s regulatory approach to artificial intelligence, which he described as a new form of technical deliberative democracy. For him, the European AI Act represents not only a legal instrument but also a political experiment, an attempt to ensure that technological development is never detached from public deliberation.
Latil detailed three pillars underpinning this model. The first is industrial, reinforcing European sovereignty through strategic investment in data infrastructure and computing capacity. The second is deliberative, involving stakeholders through standards, codes of practice, and regulatory sandboxes, which enable adaptive governance while avoiding rigid overregulation. The third is compliance-based, translating the risk-based logic of the AI Act into concrete obligations that scale with the potential harm of the system.
By blending co-regulation with industrial strategy, the EU is, according to Latil, pursuing a difficult equilibrium, one that seeks to make innovation a democratic process rather than a purely technical or market-driven one.
The discussion then turned to Bertrand Gervais, Director of the Registry and Registrar of Appeals at the Quebec Court of Appeal, who provided a judicial perspective on the challenges raised by AI in Canadian courts. He outlined a continuum of responses currently emerging across the country.
At one end are voluntary declarations, such as those introduced by the Federal Court, asking litigants to disclose any use of AI in the preparation of their submissions. Compliance, however, remains limited: Gervais noted that between December 2023 and October 2024, the Federal Court received only two such declarations out of approximately 20,000 filings. In the middle ground, some courts issue cautionary directives, warning against the use of AI-generated arguments or citations without proper verification. Finally, the Quebec Court of Appeal, which Mr. Gervais represents, has opted for a more institutionalized measure, requiring all legal authorities cited in briefs to include hyperlinks to verified databases, effectively preventing the submission of fabricated judgments.
Gervais insisted that these initiatives do not seek to ban AI but to preserve procedural integrity. His main concern lies with self-represented litigants, increasingly reliant on generative tools yet lacking the legal literacy to detect AI errors or “hallucinations.” In his words, “access to justice cannot come at the expense of truth.”
Continuing the discussion, Professor Amy F. Salyzyn examined how professional regulation is adapting to the ethical implications of AI in legal practice. She reminded the audience that AI has long been part of legal workflows, notably in e-discovery and due diligence, but that the advent of generative AI such as ChatGPT has fundamentally changed the equation, both in scale and visibility.
While most bar associations and law societies have responded by issuing guidance statements, these often remain general and declarative. The repeated injunction to “keep a human in the loop,” she argued, risks becoming a rhetorical safeguard if regulators fail to define what meaningful human oversight entails.
Salyzyn emphasized three main challenges for regulators moving forward. The first concerns clarifying what “human oversight” means in operational terms, whether it involves verification, supervision, or co-authorship. The second relates to ensuring ethical independence in tools developed by private companies whose algorithms remain opaque. Finally, the third involves addressing public-facing AI systems that provide legal advice to non-lawyers, an emerging space beyond the reach of traditional regulation.
Her intervention highlighted a central tension, noting that while AI promises efficiency and democratization, professional responsibility must evolve to preserve competence, confidentiality, and trust.
The discussion then shifted toward the industry perspective with Aimé Toumelin, co-founder of Hadaly, a Canadian legal-tech company specializing in corporate transactions. He shared a practical vision rooted in experimentation, explaining how his company develops AI tools to assist lawyers in transactional and contractual work, particularly during due diligence.
Toumelin acknowledged both the promise and the pitfalls of these systems. In controlled environments, AI accelerates document review and can uncover overlooked inconsistencies. Yet it also produces false positives and unreliable interpretations. For him, this duality illustrates a fundamental truth: "AI is only as good as the data and the human who guides it."
He identified three essential safeguards for trustworthy legal AI. The first is transparency, meaning that users must understand how each result was generated and be able to trace its reasoning. The second is data security, which requires that all client information remain encrypted and locally stored. The third is human validation, since automation without human oversight is still premature and potentially dangerous.
His closing message echoed that of the panel as a whole, emphasizing that reliability and accountability are not barriers to innovation but conditions for its legitimacy.
Summary written by Maryam Akhlaghi.
Photos
This content was last updated on 7 November 2025 at 10:40.