MIT Technology Review

The UK is dropping an immigration algorithm that critics say is racist

The system for processing visa applications is the target of an ongoing lawsuit by civil rights campaigners.

The news: The UK Home Office has said it will stop using an algorithm to process visa applications that critics claim is racially biased. Opponents argue that the algorithm's use of nationality to decide which applications get fast-tracked has led to a system in which "people from rich white countries get 'Speedy Boarding'; poorer people of color get pushed to the back of the queue."

Time for a redesign: The Home Office denies that its system is racially biased, and the litigation is still ongoing. Even so, it has agreed to drop the algorithm and plans to relaunch a redesigned version later this year, after conducting a full review that will look for unconscious bias. In the meantime, the UK will adopt a temporary system that does not use nationality to sort applications.


Traffic-light system: Since 2015, the UK has filtered visa applications using a traffic-light system that assigns a red, amber, or green risk rating to each applicant. Applicants rated red were more likely to be refused.
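How such a system might work: The Home Office has not published its scoring rules, so the sketch below is purely illustrative, a minimal Python example of the kind of nationality-weighted triage critics describe. All country names and lists here are hypothetical.

# Hypothetical sketch of a nationality-weighted "traffic light" triage.
# The Home Office's actual rules are not public; every list below is invented
# for illustration.

HIGH_RISK_NATIONALITIES = {"CountryA", "CountryB"}  # hypothetical
LOW_RISK_NATIONALITIES = {"CountryC", "CountryD"}   # hypothetical

def risk_rating(application: dict) -> str:
    """Assign a red/amber/green rating using nationality as an input --
    the design choice critics say builds bias into the system."""
    nationality = application["nationality"]
    if nationality in HIGH_RISK_NATIONALITIES:
        return "red"    # extra scrutiny; more likely to be refused
    if nationality in LOW_RISK_NATIONALITIES:
        return "green"  # fast-tracked ("Speedy Boarding")
    return "amber"      # default level of scrutiny

# Two otherwise identical applications diverge purely on nationality.
print(risk_rating({"nationality": "CountryC"}))  # green
print(risk_rating({"nationality": "CountryA"}))  # red

Even this toy version shows the core objection: the outcome turns on nationality alone, before any individual facts about the applicant are considered.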

Broader trend: Algorithms are known to entrench institutional biases, especially racist ones. Yet they are being used more and more to help make important decisions, from credit checks to visa applications to pretrial hearings and policing. Critics have complained that the US immigration system is racially biased too. But in most cases, unpacking exactly how these algorithms work and exposing evidence of their bias is hard because many are proprietary and their use has little public oversight. 

But criticism is growing. In the US, some police departments are suspending controversial predictive algorithms and tech companies have stopped supplying biased face recognition technology. In February a Dutch court ruled that a system that predicted how likely a person was to commit welfare or tax fraud was unlawful because it unfairly targeted minorities. The UK Home Office’s decision to review its system without waiting for a legal ruling could prove to be a milestone.
