How not to create a racist, sexist robot
Robots are picking up sexist and racist biases because the information used to program them comes predominantly from one homogeneous group of people, suggests a new study from Princeton University and the U.K.'s University of Bath.
Lead study author Aylin Caliskan says the findings surprised her.
"There's this common understanding that machines are supposed to be objective. But robots based on artificial intelligence (AI) and machine learning learn from historic human data and this data usually contain biases," Caliskan tells The Current's Anna Maria Tremonti.
Machine learning draws on the statistics and information fed into it, and Caliskan argues that an unprejudiced robot will only become possible once humans themselves are completely unbiased.
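A toy sketch can show what "learning bias from data" looks like in practice. The vectors below are hand-made stand-ins, not real embeddings and not from the study; they mimic how word representations trained on a skewed corpus can end up associating "doctor" more strongly with "man" than with "woman".

```python
# Toy illustration (hypothetical data): measuring word associations
# with cosine similarity, the way bias is probed in real embeddings.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hand-crafted 3-dimensional "embeddings" standing in for vectors
# learned from a biased corpus.
vectors = {
    "man":    [0.9, 0.1, 0.2],
    "woman":  [0.1, 0.9, 0.2],
    "doctor": [0.8, 0.2, 0.5],  # skewed toward "man" in this toy data
    "nurse":  [0.2, 0.8, 0.5],  # skewed toward "woman" in this toy data
}

# The learned geometry reproduces the stereotype in the data:
print(cosine(vectors["doctor"], vectors["man"])
      > cosine(vectors["doctor"], vectors["woman"]))  # True
print(cosine(vectors["nurse"], vectors["woman"])
      > cosine(vectors["nurse"], vectors["man"]))     # True
```

Nothing in the code singles out gender; the skew comes entirely from the (here, fabricated) training data, which is the point Caliskan makes.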
According to Christine Duhaime, CEO of Vancouver-based think tank Digital Finance Institute, one way to overcome the bias in AI technology is to actively diversify the industry — one that is dominated by white men.
"Where there is investment in AI there [needs to be] dialogue that takes place to say, 'look, we need to increase the numbers of women in AI so therefore if we are going to give you some funding, you need to take steps to make sure that as an organization, as a university, as a startup, whatever it is that is getting the funding, that we look at how to bring women in.'"
But for Raquel Urtasun, Canada has a lot to be proud of.
Urtasun is the Canada Research Chair in machine learning and computer vision and a co-founder of the University of Toronto's Vector Institute.
"Most of what we call AI these days has been invented in Canada. The origin of deep learning is in Toronto," she tells Tremonti.
The fact that there is bias in the technology is not new to Urtasun, but she is optimistic.
"There is an understanding in the research community that we have to be careful and we have to have a plan with respect to ethical correctness of AI systems," she tells Tremonti.
"There is an active development of research in this direction such that no matter who uses the AI system is going to be making fair predictions."
Listen to the full segment at the top of this web post.
This segment was produced by The Current's Catherine Kalbfleisch.