Canada Treasury Board’s Directive on Automated Decision-Making

With breakthroughs in machine learning, big data analytics, and recent advances in natural language processing, artificial intelligence (AI) techniques have real potential to improve user experience and the efficiency of service delivery, as well as to reduce the time and cost of decisions otherwise made manually.

In the history of digital transformations, AI is considered the most disruptive, not only rewiring people's day-to-day interactions but also challenging taken-for-granted legal concepts in ways that could scarcely have been anticipated.

To promote the ethical and responsible use of emerging technologies, the Canada Treasury Board's Directive on Automated Decision-Making, issued under the authority of the Financial Administration Act, took effect on November 26, 2018. It outlines the responsibilities of federal institutions using AI-automated decision systems. Supporting a host of policies in the federal public administration, the Directive aims to help those institutions better understand and better ensure an ethical and responsible implementation of AI. All federal institutions are expected to comply with its requirements no later than April 1, 2020.

The Directive applies to Automated Decision Systems used to provide external services, such as making recommendations about a particular client or determining whether an application should be approved or denied. Within the meaning of the Directive, an “Automated Decision System” includes any information technology designed to produce measurements or assessments of a particular individual’s case, whether meant to directly aid a human in their decision-making, to make an administrative decision in lieu of a human decision maker, or both.

To that end, the Directive provides for an Algorithmic Impact Assessment to be completed before any Automated Decision System used in the federal administration goes into production. The Algorithmic Impact Assessment is an interactive questionnaire designed “to help institutions better understand and mitigate the risks associated with Automated Decision Systems and to provide the appropriate governance, oversight and reporting/audit requirements”.

The Impact Assessment comprises four levels, ranging from Level I decisions, which often lead to impacts that are reversible and brief, to Level IV decisions, which often lead to impacts that are irreversible and perpetual (Appendix B). The impact levels are assessed in light of the department's business process, data, and system design decisions, with regard to the following factors (modelled in the sketch after the list):

  • the rights of individuals or communities, 
  • the health or well-being of individuals or communities, 
  • the economic interests of individuals, entities, or communities, and 
  • the ongoing sustainability of an ecosystem. 
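
As a rough illustration, the levels and factors might be captured in code as follows. This is a minimal Python sketch of my own; the class and constant names are hypothetical, not taken from the Directive, and the per-level comments paraphrase Appendix B.

```python
# Hypothetical modelling of the Directive's impact levels; names are
# illustrative, not official.
from enum import IntEnum


class ImpactLevel(IntEnum):
    """The Directive's four impact levels (paraphrasing Appendix B)."""
    LEVEL_I = 1    # impacts often reversible and brief
    LEVEL_II = 2   # impacts likely reversible and short-term
    LEVEL_III = 3  # impacts can be difficult to reverse and are ongoing
    LEVEL_IV = 4   # impacts often irreversible and perpetual


# The four assessment dimensions listed above.
ASSESSMENT_FACTORS = (
    "rights of individuals or communities",
    "health or well-being of individuals or communities",
    "economic interests of individuals, entities, or communities",
    "ongoing sustainability of an ecosystem",
)
```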

Depending on the decision's impact level, safeguard requirements escalate in terms of peer review, notice, human intervention and explanation requirements for profiling and recommendations/decisions, training, contingency planning, and approval for the system to operate (Appendix C). These safeguards have been devised in light of two overarching concerns in the implementation of ethical and responsible AI: quality assurance and transparency.

With respect to quality assurance, the testing and monitoring requirements apply across all impact levels, with a twofold objective (a sketch of such a test follows the list):

  • to ensure that training data is tested for unintended data biases and other factors that may unfairly impact the outcomes before the system goes into production;
  • to safeguard on an ongoing basis against unintentional outcomes, and to routinely test the data used by the Automated Decision System to ensure that it remains relevant, accurate, and up-to-date.
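
By way of illustration only, a pre-production test of the kind the first objective describes might compare outcome rates across groups in the training data. This Python sketch is not prescribed by the Directive; the function, column names, and tolerance threshold are all hypothetical.

```python
# Illustrative pre-production bias check: flag large gaps in approval rates
# across a grouping attribute in the training data. Hypothetical sketch only.
from collections import defaultdict


def approval_rate_gap(records, group_key="group", outcome_key="approved"):
    """Return the largest difference in approval rates between any two groups."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        approvals[record[group_key]] += bool(record[outcome_key])
    rates = {group: approvals[group] / totals[group] for group in totals}
    return max(rates.values()) - min(rates.values())


# Toy example: flag the dataset for review if the gap exceeds a chosen tolerance.
training_data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]
if approval_rate_gap(training_data) > 0.2:  # hypothetical tolerance
    print("Potential unintended bias: review the data before production.")
```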

From Level II onwards, the Automated Decision System must be reviewed by at least one expert (Levels II and III) or by two experts (Level IV); this review can include publishing the specifications of the Automated Decision System in a peer-reviewed journal. Documentation on the design and functionality of the Automated Decision System must also be provided (Levels II to IV), along with mandatory training courses (Level III) or recurring training courses (Level IV). In any case, the final approval for the system to operate must be granted by the Deputy Head (Levels II and III) or the Treasury Board (Level IV).
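
These escalating quality assurance safeguards could be summarized in a simple lookup structure. The sketch below merely re-encodes the summary above of Appendix C; the dictionary layout and key names are my own, and None marks cases where the text states no requirement.

```python
# Quality assurance safeguards by impact level, paraphrasing the summary of
# Appendix C given above; structure and key names are illustrative only.
QUALITY_ASSURANCE = {
    1: {"peer_review": None, "documentation": False,
        "training": None, "approval": None},  # no requirement stated in the text
    2: {"peer_review": "at least one expert", "documentation": True,
        "training": None, "approval": "Deputy Head"},
    3: {"peer_review": "at least one expert", "documentation": True,
        "training": "mandatory courses", "approval": "Deputy Head"},
    4: {"peer_review": "two experts", "documentation": True,
        "training": "recurring courses", "approval": "Treasury Board"},
}
```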

Another key mechanism for delivering ethical AI is being open and transparent. In the IT context, and consistent with the Government of Canada's open data initiatives, this means ensuring, where possible, that the software components of Automated Decision Systems are obtained under an open source license, and that all the source code used for Automated Decision Systems is published, unless it is classified or statutory exceptions apply, as provided under the Access to Information Act.

Transparency to citizens also means intelligibility within the decision-making process. That being said, Levels I and II decisions, involving (likely) reversible and brief to short-term impacts, require no explanation for profiling and may be rendered without direct human involvement. Levels III and IV decisions, by contrast, cannot be made without specific human intervention points during the decision-making process, and the final decision must be made by a human. The explanation requirement for recommendations or decisions ranges from a static explanation of common decision results (e.g. an FAQ) to a meaningful explanation provided upon request (Level II) or delivered with the decision rendered (Levels III and IV). The notice requirement applies in varying degrees to Levels II to IV decisions, from a plain-language notice posted on the program or service website (Levels II to IV) to on-time notice and other additional information (Levels III and IV), such as a description of the training data and the criteria used for making the decision.
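
A companion sketch for these transparency safeguards, in the same hypothetical style as the quality assurance lookup above; the key names are my own, and the values condense the paragraph above rather than quote the Directive.

```python
# Transparency safeguards by impact level, condensed from the text above;
# structure and key names are illustrative only.
TRANSPARENCY = {
    1: {"notice": None, "human_in_the_loop": False,
        "explanation": "static explanation of common results (e.g. an FAQ)"},
    2: {"notice": "plain-language notice on the program or service website",
        "human_in_the_loop": False,
        "explanation": "meaningful explanation provided upon request"},
    3: {"notice": "website notice plus on-time notice and additional details",
        "human_in_the_loop": True,
        "explanation": "meaningful explanation with the decision rendered"},
    4: {"notice": "website notice plus on-time notice and additional details",
        "human_in_the_loop": True,
        "explanation": "meaningful explanation with the decision rendered"},
}
```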

Such a risk-based approach is most welcome, considering machines' well-known tendency to replicate unintended systemic biases rather than further contextual policy objectives, and their vulnerability to data manipulation. Ultimately, the challenges posed by public agencies' use of automated decision systems revolve around public accountability and thorough reflection on the appropriateness of automation on a case-by-case basis.

Jie Zhu

This content was last updated on July 4, 2019 at 4:19 p.m.