Clear safeguards needed around technology planned for border checkpoints
Automated systems can affect decision-making for things like immigration and refugee applications

This column is an opinion by Petra Molnar and Jamie Liew. Molnar is a lawyer and the associate director of the Refugee Law Lab at York University. Liew is an immigration lawyer and an associate professor at the University of Ottawa, Faculty of Law. For more information about CBC's Opinion section, please see the FAQ.
As the European Union lays out its long-awaited proposal for regulating the use of artificial intelligence (AI), Canada is signalling a very different approach when it comes to potentially high-risk uses of AI-driven technologies. As part of the newly released 2021 budget, the Canada Border Services Agency (CBSA) is receiving $656 million to be spent in part on technology such as facial recognition systems at the border.
According to the 2021 budget, the significant cash influx will allow the CBSA to "utilize new technologies, like facial recognition and fingerprint verification," and "develop strategies to ensure the equitable application across differences in gender, age, mobility, and race, to promote the security of all travellers."
However, it is unclear what, if any, safeguards exist when it comes to this type of technological experimentation at the border, or what "equitable application" means when we know that AI-based technologies are anything but neutral.
Facial recognition encompasses a class of automated technologies that verify or identify people and analyze behaviour based on the biometrics of their faces. As shown by recent reports and studies — as well as submissions by the Canadian Bar Association to the federal government and a recent report to the UN General Assembly by the Special Rapporteur on Discrimination — intrusive technology like facial recognition and other automated AI systems can exacerbate systemic racism and oppression, dehumanize people, and contravene various domestic and internationally protected human rights.
These technologies can make racist and sexist inferences that have far-reaching impacts in immigration settings, for example.
This was demonstrated by the European Union's recently shelved pilot project iBorderCtrl. The AI-powered lie detector deployed at the border was widely criticized for discriminating against people of colour, women, children, and people with disabilities, resulting in a court challenge.
A similar lie detection system called AVATAR has been tested at the CBSA science laboratory in Ottawa. A CBSA spokesperson said the agency "is not considering using AVATAR on real travellers in the future."
Meanwhile, in February, Canada's Office of the Privacy Commissioner found that the private company Clearview AI had engaged in unlawful mass surveillance by enabling law enforcement and commercial organizations to match billions of images of faces across its databases.
It said, "Commissioners found that this creates the risk of significant harm to individuals, the vast majority of whom have never been and will never be implicated in a crime ... These potential harms include the risk of misidentification and exposure to potential data breaches."
Why is this type of technology being celebrated and rolled out at Canadian borders?

Canada's techno-solutionist approach is in stark contrast to the EU's proposed regulation, which, while far from perfect, sets out various bans and parameters around high-risk uses of AI, including at the border and in immigration decision-making.
Meanwhile, although the federal government's 2021 mandate letter calls for the establishment of a watchdog, the CBSA operates with little transparency, under the cover of national security and border control rationales, and without meaningful mechanisms of oversight.
Technology is not neutral. It reinforces power dynamics and exacerbates the biases already inherent in the opaque, discretionary decision-making that plagues immigration and refugee applications and the border space.
Recognizing these far-reaching impacts in other areas such as policing and surveillance, more and more people are calling for a ban on facial recognition and other automated technologies. Even Hollywood has recognized the importance of engaging in these conversations, with the recent documentary Coded Bias examining the ways algorithms perpetuate race-, class-, and gender-based inequities.
Given these local and global calls for bans and regulation, why is more funding being given to high-risk technologies in Canada?
It is telling that Canada refuses to engage in meaningful conversations around the regulation and governance of technologies such as facial recognition at the border. The context of immigration and borders matters here. In an increasingly racist and anti-migrant world, border spaces have become testing grounds for the development and deployment of unregulated technologies, with very little accountability and oversight regarding the potentially far-reaching impacts on people's rights and lives.
Canada is clearly rushing to the front lines of the global AI arms race without adequately heeding these technologies' harmful manifestations.
The sharpest edges of this technology, which we have seen in refugee camps and other border spaces, may seem far removed from places like Canada's international airports. Unfortunately, the ease with which technology slips from one context to another may mean facing immigration detention or adverse security inferences based on systems that are a manifestation of systemic racism, or nothing more than snake oil.
Clarifications
- This column originally noted that an automated lie-detection system had been pilot-tested by the Canada Border Services Agency. After publication, the CBSA contacted CBC to clarify that it "has not pilot tested the deception detection system AVATAR on the travellers at the border ... this testing was solely conducted at the CBSA science laboratory in Ottawa, and not in any operational setting. The CBSA has not conducted further testing of AVATAR beyond this internal testing. Additionally, the agency is not considering using AVATAR on real travellers in the future."
May 07, 2021 3:23 PM ET