Migration and Diversity
Artificial Intelligence (AI) is increasingly being used to support various aspects of migration and mobility systems, including visa processing and decision-making; border security; settlement and support services; and return migration management. In the context of migration and mobility, the consequences of biased AI algorithms can be life-changing for visa applicants. The following text is an abridged excerpt from Chapter 11, "Artificial Intelligence, Migration and Mobility: Implications for Policy and Practice", of the World Migration Report 2022.
Biases in the AI algorithms used in visa and asylum processing
As Artificial Intelligence (AI) systems become increasingly common throughout the migration cycle, they give rise to a variety of issues and pose significant challenges for the protection of migrants’ human rights.
AI technologies are frequently used for visa and asylum processing and decision-making. A key advantage of AI systems is that they can speed up the processing of visa and asylum applications while screening for security threats and reducing irregular migration. However, AI technologies also make it possible to automate large-volume processing involving risk profiling, often with limited transparency and without the possibility of recourse.
The lack of transparency and the presence of biases in AI algorithms are widespread concerns that extend well beyond migration; bias permeates AI systems across many sectors. While humans also display biases in their decision-making independently of AI, AI systems can amplify existing human biases rather than merely encode them. AI thus has the potential to institutionalize and systematize human bias, which can ultimately lead to the discrimination and exclusion of people based on protected characteristics, including race and ethnicity.
In the context of migration and mobility, the consequences of biased AI algorithms can be life-changing. For example, visa applications could be rejected because the AI algorithms used for initial triage do not correctly recognize faces with darker skin tones and misidentify applicants. Such a scenario is not far from reality. Facial recognition technologies are considerably less accurate at recognizing darker-skinned female faces than white male faces. An investigation in the United States also found that commercially available facial recognition AI systems were more prone to misidentifying black people's faces, wrongly matching them with the faces of people who had previously been arrested by the police.
These inaccuracies in identifying darker-skinned people's faces may be caused by representation bias, stemming, for example, from a lack of diversity in the data sets used to train the AI algorithms. They may also result from historical bias, reflecting decades of preconceptions and stereotypes in society. Technology is indeed shaped by long-standing cultural and context-based perceptions of race, ethnicity and gender, and by other inequalities prevalent in society.
These illustrations are an important reminder that technology is not a neutral tool and that it can make mistakes. Decision makers should be aware of this, and they should also take into consideration the propensity of human beings to favour the suggestions presented by AI systems even when there are indications that these are mistaken, a phenomenon known as automation bias.
Questions
In what ways does the use of AI in the processing of visa applications pose challenges for the protection of migrants’ human rights?
According to the text, what are some examples of biases and stereotypes that can be encoded in AI systems? What consequence can this have for visa applicants?
What can be done to reduce these biases?
Can technology make mistakes? Find examples from the text to support your answer.