Panel 2 | Experimenting AI Technologies to Enhance Judicial Actors’ Autonomy



Recording




Biographies

CHAIR


Jane Bailey

Jane Bailey, Full Professor of Law at uOttawa, teaches Cyberfeminism, Technoprudence and Contracts. Her research focuses on technology and human rights, including technology-facilitated gender-based violence. She is a co-investigator and co-Team Leader for The Autonomy through Cyberjustice Technologies Project, centred at the Université de Montréal. Together with Dr Jacquelyn Burkell, she co-leads The Rethinking Consent Project, a four-year initiative that explores challenges to the individual consent model posed by technologies such as AI and forensic genetics and seeks just, publicly accountable alternatives. Jane is also a Research Fellow at the Centre for Protecting Women Online at The Open University in England, and a Faculty Member of the Centre for Law, Technology & Society at the University of Ottawa.

PANELISTS


Leah Wing

Dr. Leah Wing is Director of the National Center for Technology and Dispute Resolution (NCTDR) and Senior Lecturer II in the Legal Studies Program at the University of Massachusetts Amherst, USA. A co-founder and Vice President of the Board of Directors of the International Council for Online Dispute Resolution (ICODR), Leah co-led the development of the NCTDR-ICODR ODR Standards, which the International Organization for Standardization adopted for its ODR standard, ISO 32122 (2025). Leah served as a researcher on early experiments in online dispute resolution, and her present research projects focus on ODR ethics and AI. She recently published Wing, L., Draper, C., Cooper, S., and Rainey, D., Governing Artificial Intelligence (Leiden, The Netherlands: Brill Nijhoff, 2025). Leah serves on the editorial boards of the International Journal of Online Dispute Resolution and Conflict Resolution Quarterly, and as a trainer and consultant she has worked with hundreds of agencies and organizations in the U.S. and internationally.


Erik Bornmann

Erik Bornmann is Director of Guided Pathways at CLEO (Community Legal Education Ontario / Éducation juridique communautaire Ontario), where he leads the development of interactive legal tools that help people navigate legal processes, including completing court and tribunal forms. CLEO’s Guided Pathways support users in areas such as family law, small claims, housing, social security appeals, and immigration. Erik also oversees CLEO’s legal tech research, including a Law Foundation of Ontario–funded project exploring the responsible use of AI to enhance these tools. His work includes collaborations with Canadian universities, through the Autonomy through Cyberjustice Technologies (ACT) Project and beyond, to evaluate the effectiveness of direct-to-public legal applications. Before joining CLEO, Erik practiced litigation at a community legal clinic, where he appeared before courts and tribunals and led a digital innovation initiative aimed at expanding service capacity across Ontario’s legal clinic system.


Dominique Boullier

Dominique Boullier is Professor of Sociology at Sciences Po. He holds a doctorate in Sociology from the School for Advanced Studies in the Social Sciences (EHESS, 1987), a degree in Linguistics (Rennes 2, 1991) and an HDR in Information and Communication Sciences (Bordeaux 3, 1995). He was a university professor at the University of Technology of Compiègne (UTC) and director of the Costech research unit (1997-2005). He was also the founder and director of the CNRS LUTIN User Lab (Laboratory for Digital Information Technology Uses), a joint services unit at the Cité des Sciences et de l'Industrie de la Villette (2004-2007), and director of the Laboratory of Anthropology and Sociology (LAS) at the University of Rennes 2 (2005-2008). He is often described as a “research entrepreneur” because of his experience as a business leader, lab creator, leader of multi-partner projects (ANR or European) and partner of companies in his research projects. He was executive director of the Idefi Forccast project (training in the analysis of science and technology through controversy mapping) (2012-2019) and head of digital education at Sciences Po (with Pascale Leclercq).


Hannes Westermann

Hannes Westermann is Assistant Professor of Law and Artificial Intelligence at Maastricht University and the Maastricht Law and Tech Lab. His research focuses on the use of AI and generative AI to improve access to justice. He obtained his PhD from the Université de Montréal and the Cyberjustice Laboratory, where he developed JusticeBot, an innovative online platform that provides legal information to laypeople using AI. The platform has been used over 40,000 times. He has presented his research findings at international conferences and was awarded the “Best Paper Award” at JURIX 2020 and 2023. His work has also been featured in the Financial Times and Context.news.


Summary

On October 15th, 2025, the ACT Conference resumed after lunch with the introduction of the next segment, titled “Experimenting AI Technologies to Enhance Judicial Actors’ Autonomy”, by Jane Bailey, full professor of Law at the University of Ottawa and chairperson of the panel.

The panel also included Leah Wing, Director at the National Center for Technology and Dispute Resolution (NCTDR), Senior Lecturer II at the University of Massachusetts Amherst, and Co-Founder and Vice-President of the International Council for Online Dispute Resolution (ICODR); Dominique Boullier, professor of Sociology at Sciences Po Paris; Hannes Westermann, assistant professor of Law and Artificial Intelligence at Maastricht University and the Maastricht Law and Tech Lab; and Erik Bornmann, Director of Guided Pathways at CLEO (Community Legal Education Ontario).

 

AI & Dispute Resolution: Online Dispute Resolution Standards

Leah Wing opened the discussion by highlighting that AI may expand opportunities for access to justice, but may also increase risks. For this reason, developing ODR standards that address AI has become an urgent necessity.

She argued that ODR standards incorporating AI must serve as guardrails—not aspirational guidelines but minimum requirements to ensure ethical and equitable use. AI, she noted, can enhance efficiency, reduce costs, expand communication channels, and even detect disputes before they escalate. These benefits illustrate the positive disruption technology can bring to law and society, but they also underscore the need for governance mechanisms that channel AI toward the public good.

Dr. Wing stressed that standards alone are insufficient; they must be supported by legislation, regulation, and profession-specific guidelines to form a full ecosystem of AI governance. She encouraged collaboration across disciplines and jurisdictions, pointing to examples like the Quebec Court of Appeal as models of implementation. Her central message was clear: AI can transform dispute resolution for the public good, but only if guided by robust, enforceable, and transparent standards.

She also briefly presented her recently published co-authored book, Governing Artificial Intelligence. The book addresses multiple aspects of AI regulation and explores different approaches to achieving effective governance, including the importance of international and professional standards. The authors argue that effective AI governance requires an entire ecosystem, recognizing that regulation, legislation, or standards alone are insufficient.

 

CLEO Guided Pathways: Using AI to Help Self-Represented Litigants Tell Their Stories

Erik Bornmann briefed the audience on CLEO’s Guided Pathways project, which aims to help self-represented litigants understand their legal rights, complete court forms, draft legal documents and identify next steps in filing with a court. CLEO currently offers over 103 pathways in English and 73 in French, covering seven areas of the law.

With the rise of artificial intelligence, CLEO’s team has become increasingly interested in leveraging this technology to enhance the pathways and address what the speaker called the “decision tree paradox”. This paradox reflects the tension between the need to gather all the information required to make out a claim and the need to simplify the storytelling process for users. The issue has been at the core of a three-year research project, which resulted in a paper published online last year.

Mr. Bornmann then demonstrated a guided pathway in which generative AI has been implemented. These AI-assisted pathways are not yet available to the public.

The AI can generate a narrative more closely aligned with the legal test using the information collected from the user. Features such as targeted suggestions help users fill in missing details, while highlight reviews surface AI inferences for human oversight. These tools preserve user agency and reduce unnoticed errors, though challenges remain with AI variability, occasional incorrect inferences, and user mistakes in decision-tree inputs.
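
To make this concrete, here is a minimal sketch of how answers collected through a decision tree might be turned into a draft narrative with inferences flagged for review. It assumes an OpenAI-style chat client; the model name, prompt wording, facts and legal test are invented for illustration and are not CLEO’s actual implementation.

    # Illustrative only: draft a narrative from decision-tree answers and ask
    # the model to mark anything it inferred so a human can review it.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    answers = {
        "issue": "persistent mould in the bedroom since January 2024",
        "landlord_notified": "yes, by email in February 2024",
        "repairs_done": "no",
    }
    legal_test = (
        "A rent abatement requires (1) a maintenance problem, (2) notice to "
        "the landlord, and (3) failure to repair within a reasonable time."
    )
    prompt = (
        "Using ONLY the facts below, draft a short first-person narrative "
        "addressing each element of the legal test. Wrap any detail you infer "
        "rather than find in the facts in [INFERRED: ...].\n\n"
        f"Legal test: {legal_test}\n\nFacts: {answers}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)  # draft shown to the user for review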

A major focus was on privacy and data handling. The team is working on solutions that can de‑identify and re‑identify user data in real time, enabling the use of powerful external LLMs without exposing sensitive information. This is particularly complex when dealing with unstructured narrative data, but it is seen as essential for scaling the system responsibly.
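
As a rough illustration of that idea, the sketch below replaces sensitive values with placeholders before any text leaves the local system and restores them in the model’s output. The regular expressions and placeholder scheme are illustrative assumptions; a production system would rely on a dedicated PII-detection component rather than simple pattern matching.

    # Minimal sketch of de-identification / re-identification around an
    # external LLM call; the patterns below are deliberately simplistic.
    import re

    def deidentify(text: str) -> tuple[str, dict]:
        mapping = {}
        patterns = {
            "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
            "NAME": r"\b[A-Z][a-z]+ [A-Z][a-z]+\b",
        }
        for kind, pattern in patterns.items():
            for i, value in enumerate(set(re.findall(pattern, text)), start=1):
                token = f"[{kind}_{i}]"
                mapping[token] = value
                text = text.replace(value, token)
        return text, mapping

    def reidentify(text: str, mapping: dict) -> str:
        for token, value in mapping.items():
            text = text.replace(token, value)
        return text

    narrative = "Maria Lopez emailed her landlord at owner@example.com about the mould."
    safe_text, mapping = deidentify(narrative)
    # safe_text can now be sent to an external LLM; this stands in for its answer:
    model_output = f"Summary of the situation: {safe_text}"
    print(reidentify(model_output, mapping))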

Another innovation under development is file uploading, allowing users to submit documents such as affidavits. The AI can then cross‑check structured facts against codified criteria, flag missing information, and strengthen the user’s narrative. Early results suggest that this approach could operationalize expert checklists at scale, providing coaching and surfacing critical gaps.
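
The cross-checking step can be pictured as a simple checklist evaluation, as in the sketch below; the criteria, fact keys and wording are invented examples rather than CLEO’s actual checklists or extraction pipeline.

    # Sketch of checking structured facts against codified criteria and
    # flagging what is missing before the narrative is finalized.
    from dataclasses import dataclass

    @dataclass
    class Criterion:
        key: str       # fact the document must establish
        question: str  # plain-language prompt shown to the user if missing

    CHECKLIST = [
        Criterion("date_of_incident", "When did the problem start?"),
        Criterion("notice_to_landlord", "Did you notify the landlord, and how?"),
        Criterion("remedy_sought", "What outcome are you asking the tribunal for?"),
    ]

    def find_gaps(extracted_facts: dict) -> list[str]:
        """Return prompts for criteria the uploaded document does not cover."""
        return [c.question for c in CHECKLIST if not extracted_facts.get(c.key)]

    # Facts extracted (for example, by a model) from an uploaded affidavit:
    facts = {"date_of_incident": "January 2024", "notice_to_landlord": ""}
    for gap in find_gaps(facts):
        print("Missing:", gap)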

The long-term vision is an AI “legal coach” that supports users throughout their journey. Before a pathway, it could analyze initial documents and triage users to the right process. Afterward, it could suggest clarifications, request additional documents, and guide preparation for hearings. During hearings, it could help organize evidence, draft statements, structure note-taking, and interpret proceedings.

 

Evaluation of AI Systems for Judicial Actors: A Protocol to Embed AI in Organizational Design

Dominique Boullier began by emphasizing that discussions should not focus on “AI” in the abstract, but on AI systems embedded within organizational design. His research, conducted with students before the release of ChatGPT, examined how legal professionals interact with decision‑support technologies. The emphasis was on pluralism of systems and the need to adapt them to the real processes of judicial actors, rather than treating AI as a monolithic or autonomous entity.

From fieldwork in private law firms, Mr. Boullier observed how professionals appropriate these tools and how organizational context shapes their use. He insisted that AI systems are never standalone: they are software embedded in workflows, collective processes, and institutional structures. Evaluation, therefore, must go beyond technical performance to consider integration, empowerment, and autonomy of actors. A key risk he identified is the “division of learning,” where companies providing AI systems extract expertise from professionals, leaving the latter less capable of improving their own practices.

To address this, Mr. Boullier and his team developed a grid of indicators combining quantitative and qualitative dimensions. Importantly, evaluation was participatory: stakeholders themselves assessed relevance and impact within their cultural and professional environments. The grid included mapping of stakeholders (internal and external), analysis of learning databases (sources, interoperability, updating), user experience testing, and organizational strategy. This approach highlighted the need for precise descriptions and continuous testing—practices often neglected in AI adoption.

Mr. Boullier noted that many systems initially performed well but quickly lost momentum due to lack of training, updating, and workflow integration. He argued that successful adoption requires ongoing maintenance, redesign, and organizational transformation, which is often more demanding than technical development itself.

In conclusion, Mr. Boullier cautioned against the “disembedding of calculations”—the risk that algorithmic outputs become detached from organizational and cultural contexts. While LLMs may reintroduce some semantics, they remain statistical tools that cannot provide comprehensive understanding. For judicial actors, evaluation must therefore be participatory, context‑sensitive, and aligned with organizational realities. Trust, transparency, and stakeholder involvement are essential to ensure that AI systems empower rather than disempower professionals, and that their integration strengthens rather than undermines institutional practices.

 

Teaching Law in the Era of GenAI

Hannes Westermann then concluded the panel discussion by addressing the implications and challenges arising from the advent of generative artificial intelligence in teaching law.

The speaker introduced the dilemma of shaping future lawyers who are well adapted to a reality where AI is omnipresent, while ensuring that they learn and retain the essential skills of a good lawyer. He then demonstrated how a student could use AI to write a master’s-level thesis in 20 minutes.

According to the speaker, the use of generative AI can be explained through three analogies:

  • The calculator analogy: AI is a tool, but the real high‑level thinking lies with the person using it.
  • The research assistant analogy: AI goes beyond mere calculation, acting like an entity that provides feedback and may even correct the user.
  • The forklift in a gym analogy: AI is highly effective at executing tasks, but users miss out on the benefits and learning that come from doing the work themselves.

He then summarized the guidelines regarding AI that have been implemented at Maastricht University at both university and law faculty levels. These guidelines acknowledge the undeniable permanence of AI in the legal profession’s future, while also providing guidance to students on which AI tools are permitted, for what purposes, and under what conditions. They ensure responsible use, transparency and fairness among students.

In his concluding remarks, Mr. Westermann considered whether we should move away from examinations toward a system that focuses on lifelong learning.

 

Summary written by Étienne Dussault.


Photos
