Keynote Address | Legal Contestability and Scientific Falsifiability in AI Decision-Making
Karim Benyekhlef has been a professor in the Faculty of Law at the Université de Montréal since 1989. He has been seconded to the Centre de recherche en droit public since 1990 and served as its Director from 2006 to 2014. Over the same period, he directed the Regroupement stratégique Droit, changements et gouvernance (Strategic Law, Change and Governance Group), which brings together some 50 researchers, and from 2009 to 2012 he was the Scientific Director of the Centre d’études et de recherches internationales de l’Université de Montréal (CÉRIUM – the Université de Montréal’s International Research and Study Centre). He is now the Director of the Cyberjustice Laboratory, which he founded in 2010. In 2015, the Cyberjustice Laboratory received the Mérite Innovation award from the Barreau du Québec. He holds the Chaire de recherche en information juridique Lexum (Lexum Research Chair on Legal Information) and serves on CÉRIUM’s scientific and advisory committees. In 2016, the Barreau du Québec awarded him the distinction of Advocatus Emeritus.
Keynote Speaker
Mireille Hildebrandt is Emeritus Professor at Vrije Universiteit Brussel (VUB), where she was appointed by the VUB Research Council on the subject of ‘Interfacing Law and Technology’. From 2019 to 2024, she was co-Director of the Research Group on Law, Science, Technology and Society studies (LSTS) at the Faculty of Law and Criminology. She is also Emeritus Professor of ‘Smart Environments, Data Protection and the Rule of Law’ at the Institute for Computing and Information Sciences (iCIS) in the Science Faculty of Radboud University Nijmegen.
Her research interests concern the implications of automated decisions, machine learning and mindless artificial agency for law and the rule of law in constitutional democracies. Hildebrandt has published 5 scientific monographs, 23 edited volumes or special issues, and over 120 chapters and articles in scientific journals and volumes. She received an ERC Advanced Grant for her project ‘Counting as a Human Being in the Era of Computational Law’ (COHUBICOL, 2019–2024).
Summary
On October 15, 2025, the ACT Conference opened with remarks from Professor Karim Benyekhlef, Director of the Cyberjustice Laboratory and coordinator of the Autonomy through Cyberjustice Technologies (ACT) project.
He began by recalling that since 2018, the ACT project has brought together a multidisciplinary community of researchers to examine how artificial intelligence reshapes the administration of justice. For Professor Benyekhlef, this collective effort has shown that understanding such a complex and uncertain phenomenon requires multiple perspectives, but that interdisciplinarity must never lead to the colonization of one discipline by another. Law, he reminded the audience, must approach AI through its own analytical and normative methods, ensuring that legal requirements are integrated upstream in the design of technological tools.
Drawing on the historian Elizabeth Eisenstein, who observed that even six centuries later we still struggle to grasp the full consequences of the printing press, he urged humility in the face of technological disruption. AI, he noted, is a pharmakon, both a remedy and a poison, capable of advancing justice while posing serious risks to its foundations.
Professor Benyekhlef also warned of the geopolitical stakes of AI regulation. Between the United States’ drive for technological dominance and China’s rapid development, Canada risks losing control over the technologies that shape its justice system. “Whoever controls the standard of the technology controls the market,” he observed, emphasizing that the development of national normative frameworks is a matter of sovereignty. Justice, he insisted, cannot depend on foreign technologies over which Canada has no control.
He concluded his address by inviting participants to approach the conference with a critical and pragmatic mindset. The goal, he said, is to move beyond the industry’s euphoric promises and examine the real capabilities of AI tools in law, distinguishing between what is possible and what merely sounds persuasive.
Following this introduction, the opening keynote of the conference was delivered by Professor Mireille Hildebrandt (Vrije Universiteit Brussel), a leading scholar at the intersection of law, philosophy, and computer science. Her address explored the parallel between scientific falsifiability and legal contestability, two principles she described as essential to both reliable knowledge and the rule of law. Just as scientific theories must remain open to falsification, the legitimacy of legal decisions depends on their contestability through institutional checks and balances.
To illustrate this, she recalled the metaphor of Odysseus and the Sirens, showing that the rule of law is not about self-binding but about organized counter-power: the capacity of institutions to resist arbitrariness. She closed the metaphor by emphasizing that, ultimately, “it’s about checks and balance.”
Professor Hildebrandt then introduced a Typology of Legal Technologies, developed within her European Research Council project Counting as a Human Being in the Era of Computational Law. This open-access resource maps dozens of AI systems used in judicial contexts, examining who deploys them, how their claims are substantiated, and whether they can truly be assessed for reliability and accountability.
She warned that the current fascination with explainable AI risks distracting legal scholars from what truly matters, which is justification. “Explaining how a system works is not the same as justifying its effects,” she argued, urging researchers to study how AI decisions can be reasoned, challenged, and justified within the legal order.
Building on Karl Popper and Charles Sanders Peirce, she called for a return to theoretical rigor and falsifiability in AI research. Scientific validity, she argued, arises not from verification metrics like accuracy or precision, but from the ability to test and falsify theories against real-world evidence. Without falsifiable theoretical frameworks, AI risks becoming a domain of belief rather than knowledge.
She concluded by linking this epistemic responsibility to legal practice itself, explaining that both science and law rely on procedures that make decisions contestable. Ensuring that AI systems used in justice remain falsifiable and open to scrutiny is, ultimately, what safeguards the rule of law in the age of artificial intelligence.
Summary written by Maryam Akhlaghi.
Photos
This content was last updated on 31 October 2025 at 10:01.