Young Researchers Panel: AI&Tech, Justice and Politics

Biographies

CHAIR


Fabien Gélinas

Professor Fabien Gélinas, Ad. E., teaches and conducts research in the areas of international dispute resolution, common law and civil law contracts, commercial law, law and technology, and legal theory. Formerly General Counsel of the ICC International Court of Arbitration, he acts as arbitrator, expert and consultant on dispute resolution and legal reform.

Professor Gélinas is a cofounder of the Montreal Cyberjustice Laboratory. He has taught at the Centre d’études diplomatiques et stratégiques de Paris (École des hautes études internationales), the Université de Paris II - Panthéon Assas, the National University of Rwanda in Butare, Trinity College Dublin, Sciences Po Paris, New York University and the National University of Singapore.

PANELISTS


Jinzhe Tan

Jinzhe Tan is a PhD student in Artificial Intelligence and Law at the Faculty of Law of the University of Montreal. His research explores the intersection of artificial intelligence and law, with a focus on improving the accessibility of law and promoting technology-assisted dispute resolution. His work also examines how artificial intelligence can help mitigate decision-making deficiencies in the judicial process, thereby contributing to a more equitable and consistent justice system.

Jinzhe has published in leading AI and legal informatics conferences, including ICAIL and JURIX. His research spans multiple dimensions of AI applications in law, from using Large Language Models to structure legal knowledge to employing AI for dispute resolution and layperson legal understanding. His contributions have been recognized internationally, with a co-authored paper receiving the Best Paper Award at JURIX 2023. Through his work, Jinzhe aims to advance the responsible integration of AI into legal systems, ensuring that technology enhances fairness, consistency, and access to justice.


Gabrielle Boily

Gabrielle Boily is pursuing a research master’s degree in psychology at Université Laval. Having completed a bachelor’s degree in law and a bachelor’s degree in psychology concurrently, she is interested in the links between these two disciplines. After a semester at Paris 1 Panthéon-Sorbonne University, she began her research career with an article on mental health courts, published after winning the Cahiers de droit essay contest. Her path then led her to the study of human-machine interaction, particularly artificial intelligence, through her work at the LEILAH Laboratory under the supervision of Professor Alexandre Marois. Her research on the impact of AI on cognitive processes and on the use of AI tools to improve access to justice has received grant support from Obvia and NSERC, attesting to its scientific and social relevance.


Sarah Sutherland

Sarah A. Sutherland is a researcher and consultant specializing in legal informatics, data strategy, and the intersections of law, technology, and information. She is principal consultant at Parallax Information Consulting, where her work focuses on integrating data-driven approaches into legal organizations' planning and operations. Her book, Legal Data and Information in Practice: How Data and the Law Interact (Routledge, 2022), examines the ways data shapes, and is shaped by, legal systems. Previously, she served as President and CEO of the Canadian Legal Information Institute (CanLII), where she oversaw the largest open-access legal information platform in Canada.

She has been recognized for her contributions to the field with the Fastcase 50 award (2022) and the Dennis Marshall Award for Excellence in Law Librarianship from the Canadian Association of Law Libraries (2023). She is currently pursuing a PhD in Law at the University of Edinburgh, with an anticipated completion date of 2027.


 

Summary

The concluding panel of the ACT conference was chaired by Fabien Gélinas (McGill University) and featured presentations by emerging researchers whose work engages with the intersections of law, AI, technology, justice, and politics. The panelists were Jinzhe Tan (Université de Montréal), Sarah Sutherland (University of Edinburgh), and Gabrielle Boily (Université Laval). Each presenter brought a different disciplinary approach to the discussion of how artificial intelligence intersects with legal decision-making, access to justice, and the predictive potential of data in law.

Jinzhe Tan opened the panel with a presentation titled “Enhancing Judicial Autonomy Using Artificial Intelligence.” He began by exploring how human decision-making functions cognitively, referencing the dual-process theory of reasoning. According to this model, human thought involves two systems: System 1, which is fast, automatic, and intuitive, and System 2, which is slow, deliberate, and effortful. AI researchers have drawn inspiration from this framework in designing systems that mimic human cognition. While traditional AI tools often replicated the fast and intuitive system, current developments—such as large language models like GPT-5—are increasingly focused on replicating the slower, more analytical System 2.

Tan then applied this cognitive framework to judicial reasoning. Ideally, judges are expected to rely on System 2 reasoning to reach decisions based on legal principles and evidence. However, as legal realists have long argued, judges are human beings and susceptible to cognitive shortcuts, especially under conditions of stress, time pressure, and information overload. These shortcuts often result in reliance on System 1, where extralegal and even irrational factors can influence outcomes. He illustrated this with the example of Justice Antonin Scalia, who once advised male lawyers not to appear in court with a ponytail, suggesting that appearance could influence judicial perception.

Tan cited numerous studies on extralegal influences in judicial decisions, including the widely cited “hungry judge effect,” which showed that judges tend to issue harsher rulings just before lunch, when they are likely hungry and fatigued. Academic research has since categorized judicial biases into three main types: cognitive, emotional, and socio-economic. This research underscores that judicial decisions are not always purely rational but are often affected by emotions and external context.

Given these issues, Tan asked whether AI might be a better decision-maker. He referenced studies comparing human decisions with algorithmic predictions in the context of pre-trial release, specifically whether defendants would flee or reoffend. The results suggested that algorithmic models could reduce crime by at least 14% and jail rates by at least 18%. This raises the provocative question: should AI replace judges?

Tan answered this question with caution. While AI systems are data-driven and less vulnerable to the kinds of biases that affect human judges, they are far from perfect. The “hungry judge” effect, for instance, has since been challenged by follow-up studies with better methodologies and larger datasets. Furthermore, most AI models are trained on biased data and reflect societal inequalities. AI, he argued, is a “distorted mirror” of human reasoning. It can help us understand our cognitive limitations, but we should not rely on it to make decisions for us.

Instead, AI should be used to reduce judges’ workloads and support better decision-making rather than replace human judgment. For example, AI can improve access to information, assist in online dispute resolution, and manage administrative tasks that overload the court system. Tan concluded that judicial autonomy must be preserved, with humans remaining at the center of legal decision-making. AI’s value lies in its ability to help us understand ourselves and improve the systems we already have, not replace them entirely.

The second speaker, Gabrielle Boily, presented her research titled “The Use of AI Tools to Improve Access to Legal Information in Quebec.” Boily’s work is situated at the intersection of access to justice, AI-human interaction, and cognitive psychology. She began by outlining a growing issue in the Quebec legal system: an increasing number of litigants are unrepresented. She argued that e-justice platforms and AI tools can help bridge the access gap, provided they are reliable, ethical, and inclusive. Her central research question was whether currently accessible AI tools in Quebec effectively support public access to legal information.

Boily used a conceptual framework informed by Canada’s 2023–2027 Department of Justice Strategic Plan, which emphasizes the need for a fair and accessible justice system. She also referenced section 2(b) of the Canadian Charter of Rights and Freedoms, which grounds the public’s right to be informed about public institutions, and section 128 of the Act respecting the Barreau du Québec, which distinguishes between legal information and legal advice—a line that AI tools often blur.

Her methodology combined a framework for evaluating AI in judicial settings with a query-based performance analysis of two tools: JusticeBot, developed by the Cyberjustice Laboratory, and ChatGPT. She focused on housing law in Quebec, using legal situations based on statistical data from the Tribunal administratif du logement’s 2023–2024 annual report. Her evaluation framework was adapted to test the tools’ effectiveness in answering specific legal queries.

The results revealed notable differences between the two systems. JusticeBot performed strongly, delivering three complete and accurate answers that were supported by citations from legal statutes and case law. It also included legal sources and examples, demonstrating a transparent and robust design that incorporates user feedback. ChatGPT, in contrast, produced two accurate responses and one inaccurate answer. While its conversational format offered clarity, it rarely provided legal references, and its tendency to give highly specific advice blurred the line between legal information and legal counsel. This is particularly problematic given the potential legal implications of relying on inaccurate or overly detailed AI-generated responses.

Boily concluded that while the diffusion of AI tools holds promise for improving access to legal information, it also carries risks. Inaccurate or misleading legal content can confuse users and slow judicial processes. Therefore, AI evaluation must remain dynamic, participatory, and responsive. Formal and adaptable evaluation processes are necessary to ensure AI systems serve the public effectively and ethically.

The final speaker, Sarah Sutherland, delivered a presentation titled “Process, Purpose, and Time: Examining the Gap Between Judicial Decision Making and Legal Prediction.” Her work sought to better understand how time and cultural context influence legal reasoning and data use in computational systems. Sutherland highlighted the historical coexistence of humans and calculating machines from 1870 to 1970. She framed the current debate about AI and legal prediction as the continuation of a long-standing relationship between human reasoning and machine computation.

Sutherland focused on the notion of prediction as a tool that connects the past and the future. Legal prediction is not just about probability—it also has significant economic implications, such as increased efficiency and better risk assessment. However, she emphasized that legal systems do not—and cannot—promise justice. Courts cannot guarantee correct outcomes, and the underlying logic of prediction is often flawed or incomplete.

She discussed the challenge of determining what data should be used to train legal AI systems. One common assumption is that judicial decisions can be used as reliable indicators of correct outcomes. However, Sutherland questioned this assumption, noting that many elements of the legal system are not recorded or available as data. For example, a significant number of legal disputes are settled through private agreements and never make it into case law. Therefore, the outcomes we can predict are only the ones visible to us, leaving much of the legal landscape in the dark.

Moreover, she pointed out that the law itself is dynamic. What is considered legally correct today may not have been the case a year ago, and may not be true a year from now. This temporal variability makes prediction inherently difficult, if not unreliable, over the long term. Sutherland’s central argument was that legal prediction tools often fail to capture the deeper cultural, temporal, and procedural complexities of the law, and that any computational model must take these factors into account if it hopes to be useful or meaningful.

The panel concluded the ACT conference by offering rich, nuanced perspectives on the promise and limitations of AI in legal contexts.

 

Summary written by Ali Ekber Cinar.



This content was last updated on 5 December 2025 at 9:55 a.m.