Immigration, Refugees and Citizenship Canada (IRCC) has rejected the permanent residence application of a Nigerian scientist, Kémy Adé, after an AI-assisted review found that the job duties she submitted did not match the Canadian work experience she claimed.

The IRCC’s decision followed a generative AI-assisted review that described Ms Adé’s job duties as “wiring and assembling control circuits, building control and robot panels, programming and troubleshooting”, tasks that bear no relation to her actual work.

Ms Adé was shocked by the refusal letter from the IRCC, the Toronto Star reported on Tuesday.
“Well, no, they didn’t. Adé is a post-doctoral research fellow and guest teacher at McMaster University and those skills are not part of her repertoire. Nor are they what she submitted in her immigration application a year ago,” the report said.
Reacting to the development, Ms Adé, a health scientist from France with a PhD in the immunology of aging from Sorbonne University, said she was totally disoriented by the decision.
“I saw this language about this job description that has nothing to do with me,” she stated. “I was disoriented how this could happen.”
However, the report noted that this was believed to be the first time the IRCC had explicitly referred to the use of generative AI to support application processing in an immigration refusal, citing a disclaimer at the bottom of the refusal letter. The disclaimer stated that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision.
Critics warned that the practice could lead to chaos and a loss of public confidence in the system.

While crediting the IRCC for disclosing the method used in the review, they raised concerns about the use of generative AI in assessing economic immigration applications, which they described as often more sophisticated and nuanced.
Zeynab Ziaie, a Toronto immigration lawyer and co-founder of AI Monitor for Immigration in Canada and Internationally, said: “Challenge is that it’s a black box. Remember how when you put stuff into ChatGPT, it hallucinates.”

“You give it a prompt and it can use its large language models to create that response for you and build on what your prompt is to give you a refusal letter. Or it could give you on the same prompt an acceptance. The challenge is it’s a black box, because you don’t know exactly how it’s going to get to its final determination,” she added.
Ms Adé’s lawyer, Luka Vukelic, also questioned the claim that “the generated content” of his client’s refusal had been verified by a human officer, saying something had gone seriously wrong with the decision.
“I cannot comprehend how any human being could make this decision. Somehow, it hallucinated my client’s job description. I would love to see what the officer saw. Something seriously went wrong here,” Mr Vukelic stated.
According to the report, Ms Adé’s refusal letter came in late February just as the IRCC published its first AI strategy, outlining how officials would use Artificial Intelligence “to boost efficiency, enhance service delivery and strengthen program integrity.”
The IRCC said officials had been using digital tools including advanced analytics and automation since 2013 to facilitate “faster and more consistent” services and assist in program integrity. These tools help flag indicators of fraud and triage applications, among other tasks.
In its response to Ms Adé’s case, the IRCC stated that the decision was made by a human officer and that generative AI played no role in the decision-making process.
Meanwhile, Ms Adé’s lawyer has requested that the IRCC reconsider the refusal, prompting the reopening of her file.