Unlawful Distinctions?: Canadian Human Rights Law and Algorithmic Bias
Jacquelyn BURKELL and Jane BAILEY, "Unlawful Distinctions?: Canadian Human Rights Law and Algorithmic Bias", (2018) Canadian Yearbook of Human Rights 217.
Human rights legislation in Canada prohibits discrimination in the provision of goods and services to the public, in employment, and in accommodation based on certain grounds, such as race, gender, sexual orientation, and gender identity (e.g. Canadian Human Rights Act, sections 3 and 5). In so doing, it imposes equality-based restrictions on certain kinds of decision-making by public and private providers of goods, services, jobs, and accommodation. Increasingly, in order to access these resources, Canadians use and are subjects of online search engines that rely on machine-based algorithms to profile and sort users, to personalize search results and ad placement, and to understand and identify cultural categories (e.g. images of "professional hairstyles"). These algorithms not only affect access to these resources but serve to shape people's lives in quite fundamental ways. Since the algorithms by which these decisions are made are not ordinarily open to the public, and since it can be difficult to determine the data over which the analyses are carried out, it can be very difficult to determine the criteria used as the basis for the sorting or categorizing process. As a result, citizens are typically unaware whether decisions being made that affect specific individuals—for example, regarding the targeting of information (e.g. advertisements) or assessment or scoring results (e.g. credit risk score)—are based on grounds on which discrimination is prohibited by human rights legislation. However, recent findings suggest that prohibited grounds, such as race, are playing a role in determining who gets access to what information, and that these same algorithms are determining the prices charged for the goods and services purchased.1 Further, algorithms produce representations that can fundamentally affect our understandings of others and ourselves. This paper begins to explore algorithmic bias and its relationship to human rights, highlighting some of the challenges for obtaining meaningful responses to algorithmic discrimination under Canadian human rights legislation as currently framed and interpreted.