How AI Is Being Used in Canada’s Immigration Decision-Making

Published 2 April 2020 / By Roxana Akhmetova


In recent years, the Canadian government has grown increasingly reliant on artificial intelligence (AI) and automated decision-making systems in immigration and border management. Because such systems and technologies are novel in practice, their use is experimental in nature, and this experimentation risks increased discrimination against highly vulnerable populations, undermining the public image of Canada as a ‘pro-refugee’ country. If used without scrutiny and adequate protections, this technology could greatly limit the ability of immigrants and asylum seekers to defend their own rights and to access a secure livelihood.

So, what’s the right path forward?

Before exploring Canada as a case study, we first need a bit of context. Artificial intelligence and other technologies are used to supplement or replace human decision-making. These technologies rely on a variety of data inputs and draw on fields such as linguistics and statistics, using techniques including regression, rule-based systems, machine learning, and deep learning, amongst others. Automated decision-making systems use algorithms, or sets of instructions, to organize a body of data in order to achieve an outcome. In the sphere of migration, automated decision-making systems and technologies assist administrative tribunals, immigration officers, border agents, legal analysts, and other key players in the process.
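To make the idea of an algorithm as "a set of instructions" concrete, here is a minimal, entirely hypothetical sketch of a rule-based triage system: every field name, threshold, and routing label below is invented for illustration, not drawn from any real immigration system.

```python
# Hypothetical rule-based triage: a fixed set of instructions is applied
# to an application record to produce an outcome, with no human
# judgement at this stage.

def triage(application: dict) -> str:
    """Return a routing decision for a (hypothetical) application record."""
    if application.get("documents_complete") is False:
        return "request-more-information"
    if application.get("prior_refusals", 0) > 0:
        return "manual-review"      # flagged for an officer to examine
    return "standard-processing"    # proceeds automatically

print(triage({"documents_complete": True, "prior_refusals": 0}))
# standard-processing
```

Even a system this simple embeds policy choices: which fields count, and which thresholds trigger extra scrutiny, are decisions made by whoever writes the rules.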

These systems may be used for a variety of purposes and by a variety of actors: governments predicting the risk of recidivism in pre-trial detention and sentencing decisions, private employers using AI to decide who should be hired or fired based on work performance, and police units predicting crime ‘hotspots’. Algorithms can be trained to classify and generalize beyond the examples provided in a data set, so the quality of the training data greatly affects the outputs. Typically, such systems assume that the future will look like the past: any unfair biases in the training data will be reproduced in future outputs. Technology and artificial intelligence can be perceived as neutral, logical, and consistent, affording them heightened legitimacy and weakening human accountability. It doesn’t take a sci-fi fan to see why this is problematic, as all technology has the potential to inherit our discriminatory biases and be political in its decisions.
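The mechanism by which past bias is reproduced can be shown in a few lines. The sketch below uses fabricated, purely illustrative data in which one group was historically approved less often; a naive system that "learns" approval rates from that history then scores two otherwise-identical applicants differently.

```python
# Illustrative only: hypothetical past decisions as (group, outcome) pairs.
# Group B was approved less often in the past, for reasons unrelated to
# the merits of the claims.
from collections import defaultdict

history = [
    ("A", "approved"), ("A", "approved"), ("A", "approved"), ("A", "rejected"),
    ("B", "approved"), ("B", "rejected"), ("B", "rejected"), ("B", "rejected"),
]

# "Training": tally approvals per group from the historical record.
counts = defaultdict(lambda: {"approved": 0, "total": 0})
for group, outcome in history:
    counts[group]["total"] += 1
    if outcome == "approved":
        counts[group]["approved"] += 1

def predicted_approval_rate(group: str) -> float:
    c = counts[group]
    return c["approved"] / c["total"]

# Identical applicants receive different scores purely because of the
# group membership encoded in the training data.
print(predicted_approval_rate("A"))  # 0.75
print(predicted_approval_rate("B"))  # 0.25
```

Real systems use far more sophisticated models, but the underlying risk is the same: a model fitted to biased decisions faithfully reproduces the bias.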

Non-citizens already tend to lack adequate human rights protections and the resources to defend those rights. For individuals going through an immigration system, threats to stability such as extensive delays, high financial costs, interrupted work and studies, detention, prolonged family separation, and deportation are all possible. AI and automated decision-making can exacerbate these pre-existing vulnerabilities by adding risks such as bias, error, system failure, and theft of data, all of which can result in greater harm to migrants and their families. A claim rejected on an erroneous basis can expose an individual to persecution on account of their race, religion, nationality, membership in a particular social group, or political opinion, and thus to threats of torture or a risk to their life.

In Canada, all initial immigration decisions are made either by an administrative tribunal, such as the Immigration and Refugee Board, or by individual immigration officers employed by Immigration, Refugees and Citizenship Canada (IRCC) or the Canada Border Services Agency (CBSA). In 2018, it was reported that the CBSA used private third-party DNA services to establish the nationality of individuals subject to potential deportation.

This is deeply concerning, not only because of the coercive nature of the privacy invasion, but also because one’s DNA does not determine nationality and should have no bearing on one’s application. Proponents may argue that the uses of these systems and technologies on refugees and immigrants are still exploratory and their impacts not yet well known. But as more of these technologies are incorporated into the Canadian government’s mechanisms, it is imperative that their adoption be transparent, accountable, fair, and, ultimately, respectful of human rights.

Another reason to focus on this issue is the scale of potential impact. By the end of 2019, there were over 87,000 claims that had been referred on or after December 15, 2012 but had not yet been finalized. By 2019, there were over 404,000 students holding an international study permit, over 98,000 work permits issued under the Temporary Foreign Worker Program, and over 307,000 work permits issued under the International Mobility Program in Canada. Clearly, hundreds of thousands of individuals stand to be affected by automated decision-making systems and technologies.

With Canada as a case study, this piece offers two policy recommendations for improving practice around this emerging technology. First, increase collaboration between key government agencies and stakeholders, as well as academics and civilians, so as to better understand the current and potential impacts of AI and related technologies on human rights. Second, establish government-wide standards for the use of AI and automated decision systems, and encourage all government bodies using such systems to publish reports revealing how they use them.

By fostering a culture of inter-party dialogue and openness, these technologies can either be restricted in future or used in ways that benefit those most vulnerable. Such openness about how the systems work is essential if they are to be just.

About the author: Roxana Akhmetova is currently an MSc candidate in the Migration Studies programme.