The Conversation | AI technologies — like police facial recognition — discriminate against people of colour

In this article published on the website The Conversation, Jane Bailey, Jacquelyn Burkell and Valerie Steeves highlight the racial bias of AI technologies, such as facial recognition, that discriminate against people of colour.

Facial recognition technology that is trained on and tuned to Caucasian faces systematically misidentifies and mislabels racialized individuals: numerous studies report that facial recognition technology is “flawed and biased, with significantly higher error rates when used against people of colour.” These errors undermine the individuality and humanity of racialized persons, who are more likely to be misidentified as criminals. The technology, and the identification errors it makes, reflects and further entrenches long-standing social divisions that are deeply entangled with racism, sexism, homophobia, settler-colonialism and other intersecting oppressions.

To read the full article

This content was updated on 26 November 2020 at 9:05 a.m.