How artificial intelligence could change Canada's immigration and refugee system
Originally published on November 18, 2018.
According to a new report, the federal ministry responsible for immigration has been quietly experimenting with artificial intelligence since at least 2014.
Mathieu Genest, a spokesman for Immigration Minister Ahmed Hussen, told The Canadian Press the system was being used exclusively as a "sorting mechanism" to help immigration officers quickly separate complex visitor visa applications from standard ones. Earlier this year, the A.I. system was used to process temporary resident visa applications from China and India.
The technology, Genest said, would help "process routine cases more efficiently," but immigration officers would always make the final decisions about granting and denying a visa.
But human rights experts have grave concerns about how A.I. will change the immigration system — and about what it will mean if computers someday make some decisions autonomously.
"A.I. is by no means neutral. It can be just as biased, if not more biased, as a human being," refugee lawyer Petra Molnar told The Sunday Edition's host Michael Enright.
"The use of this technology on highly vulnerable and high-risk people has great ramifications, and really, they're the last group that should be the subject of these technological experiments."
Molnar is the co-author of the report Bots at the Gate, which was produced by the International Human Rights Program at the University of Toronto Faculty of Law and The Citizen Lab.
Here is part of their conversation.
How did you first learn that Canadian immigration and refugee officials were beginning to use artificial intelligence?
In February 2018, we held a panel at the law school at the University of Toronto about the dark side of technology. We came across some public statements by Canadian government officials talking about wanting to experiment with the use of this new technology, dating back all the way to 2014.
Then in May 2018, we became aware of what's called a Request for Information — basically a tender from the government, calling for the private sector to develop "artificial intelligence solutions" for front-end immigration decision-makers, and also for determining the merits of various applications.
What really caught our eye was that in this RFI, the two applications that were singled out by name were Humanitarian and Compassionate applications and Pre-Removal Risk Assessments.
What is a Pre-Removal Risk Assessment?
Pre-Removal Risk Assessment is an application that a person can use to have the risks of hardship, or torture, or risk to their life assessed before they are removed from Canada. It's an application of last resort — similarly with the Humanitarian and Compassionate application. A person provides evidence of why they would be facing risks if they were removed, and then an officer makes a determination based on that. Most of these don't have an oral hearing. A lot of this is just written evidence.
When I saw these two applications in particular singled out, it really was quite troubling, because they're extremely discretionary. Interestingly enough, we're really only dealing with a couple of thousand applications a year, so it's curious that they were singled out.
The experiments seem to have been conducted by the government very quietly.
That's right. We have had some really fruitful discussions with government officials, and it seems that the government is moving toward using these technologies to augment, not replace, human decision-makers.
But we submitted 27 access to information requests back in April, and we have not actually received any data. So we're not totally sure what the government is or is not doing.
What makes immigration "a high-risk laboratory for experiments in automated decision-making," as you write in the report?
As immigration and human rights lawyers, we see immigration law as highly discretionary, with weak safeguards and oversight.
We already know that two human officers, looking at the exact same application with the exact same set of evidence, can make two completely different decisions. So we're not sure what this is going to look like if a decision-maker is replaced or augmented by A.I.
The sheer scale of the potential impact is quite extraordinary as well, because hundreds of thousands of people enter Canada every year through a variety of different applications. Many come from war-torn countries.
Is A.I. bias-free?
We already know that A.I. has a pretty bad track record when it comes to gender or race determinations. For example, in the States there was a big piece that came out about using A.I. for predictive policing. Perhaps not surprisingly, the results were biased against racialized people.
In terms of gender, A.I. makes really problematic assumptions as well. There was an algorithm that was trying to learn about men and women, and it would start equating man with doctor, and woman with kitchen.
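To make that kind of analogy bias concrete, here is a minimal sketch, not any system discussed in this interview. The words and 2-D vectors below are invented stand-ins for embeddings a real model would learn from co-occurrence statistics in its (often biased) training text.

```python
# Toy illustration of analogy bias in word embeddings.
# All vectors are hand-crafted for this example; a real model
# would learn comparable numbers from biased training text.
import numpy as np

emb = {
    "man":     np.array([ 1.0, 0.0]),   # first axis loosely tracks gender
    "woman":   np.array([-1.0, 0.0]),
    "doctor":  np.array([ 0.9, 1.0]),   # skews "male" in the imagined corpus
    "office":  np.array([ 0.8, 1.0]),
    "kitchen": np.array([-0.9, 1.0]),   # skews "female" in the imagined corpus
}

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest(vec, exclude):
    # Word whose vector is most similar to `vec` by cosine similarity.
    return max((w for w in emb if w not in exclude),
               key=lambda w: cosine(emb[w], vec))

# The classic analogy arithmetic: "man is to doctor as woman is to ...?"
query = emb["doctor"] - emb["man"] + emb["woman"]
print(nearest(query, exclude={"doctor", "man", "woman"}))  # prints: kitchen
```

Nothing in the arithmetic is malicious; the stereotype comes entirely from the statistics baked into the vectors, which is why the training data behind automated decisions matters so much.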
Are there other jurisdictions, other countries' immigration departments, using it?
That's something that we tried to look at in the report — to do a cross-jurisdictional analysis of what's going on internationally. We already know that mistakes have been made. In the UK, over 7,000 students were wrongfully deported based on a faulty algorithm that said that they cheated on a language acquisition test. By the time it was discovered that there was a mistake, these people had already left.
Was there no human oversight on this?
Well, we don't know. It's very difficult to determine. Then we also have to think about: what does human oversight actually mean? Is it enough to have a decision popped out by an algorithm, which a human decision-maker rubber-stamps and says, "Oh yep, I checked this over"? Or do there have to be rubrics that need to be followed for actual oversight?
Are there any decisions in the Canadian immigration system being made solely by computers today, without any kind of human intervention or oversight?
It appears not. The experimentation is used purely to "augment" human decision-making. But this is still a problem, because we don't know what oversight mechanisms are going to be in place.
We need to think about how introducing a completely different system of thinking into a really complex area is going to pan out. We're dealing with really nuanced and complex claims for protection or immigration, and the worry for us is that the nuance might be lost on these technologies — leading to serious breaches of international and domestically protected human rights.
Can you build in safeguards? Is there some way to create a structure where the civil rights of immigrants and refugees are protected?
Absolutely. We recommend that Ottawa establish an independent, arm's-length body to … review all uses of automated decision-making, because it's not just in the immigration space. We know they're experimenting with using A.I. for tax purposes, and in the health care sector. We also recommend publishing all current and future uses of A.I. by the government, for transparency purposes. We really need to know what is being experimented with.
Yuval Noah Harari says, "If the world gets into an A.I. arms race, it will almost certainly guarantee the worst possible outcome." That's kind of scary, isn't it?
Yes. We know technology travels, and it will move from country to country. Our worry is that some of these experiments being done in rights-protecting countries like Canada might then be used by other countries with weaker rights protections. An arms race in this space is quite dangerous.
It's a whole new frontier, in terms of human rights and refugee law. We already know that this is happening in some way, shape or form. So it's important that we start thinking about the ramifications of A.I. on human rights.
This Q&A has been edited for length and clarity. Click 'listen' above to hear the full interview.
Since this interview first aired last November, the federal immigration department has confirmed that it has been using automated decision-making to assist with temporary visitor permits from India and China. Petra Molnar says the Canadian government is setting up ethical standards for the use of new technology, including internal guidance documents specifically for immigration, but she believes further conversations about the use of A.I. are still required.