Regulating (Artificial) Intelligence in Justice: How Normative Frameworks Protect Citizens from the Risks Related to AI Use in the Judiciary

Abstract

Tools based on AI technology that support justice professionals have recently seen a growing diffusion. Artificial intelligence algorithms are beginning to assist lawyers, for instance through AI-powered legal search tools, and to support justice administrations with predictive technologies and business analytics based on the computation of Big Data. The introduction of AI tools in the justice sector raises several issues, including (1) the availability of data from courts and proceedings and the attendant concerns over the protection of privacy, and (2) the use of predictive technologies and the related questions of data protection, discriminatory biases, and transparency. Private and public actors are increasingly addressing the risks related to the use of AI by developing normative frameworks that govern the application of AI in several contexts. However, most of these frameworks are not binding and address only some of the many concerns raised by the impact of AI on justice. The paper has two objectives: first, to analyse the main challenges related to the use of AI by both lawyers and justice administrations through examples of recently developed AI tools; second, to assess a selection of the most important frameworks governing the application of AI in several contexts, developed by different types of actors ranging from international fora to private companies and national and EU parliaments. The analysis acknowledges the several risks related to the use of AI in justice; moreover, it draws attention to the lack of comprehensive and binding normative frameworks regulating AI….
