The UK is dropping an immigration algorithm that critics say is racist

The system for processing visa applications is the target of an ongoing lawsuit by civil rights campaigners.
Photo: James Cridland / Flickr

The news: The UK Home Office has said it will stop using a visa-processing algorithm that critics claim is racially biased. Opponents argue that the algorithm’s use of nationality to decide which applications get fast-tracked has led to a system in which “people from rich white countries get ‘Speedy Boarding’; poorer people of color get pushed to the back of the queue.”

Time for a redesign: The Home Office denies that its system is racially biased, and the litigation is still ongoing. Even so, the Home Office has agreed to drop the algorithm and plans to relaunch a redesigned version later this year, after conducting a full review that will look for unconscious bias. In the meantime, the UK will adopt a temporary system that does not use nationality to sort applications.

Traffic-light system: Since 2015, the UK has filtered visa applications using a traffic-light system that assigns each applicant a red, amber, or green risk level. Applicants rated red are more likely to be refused.
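Public reporting does not detail the Home Office’s actual scoring rules, but a rule-based triage of this kind can be sketched in a few lines. Everything in the sketch below is hypothetical: the weights, the feature names (`nationality`, `prior_refusals`), and the `HIGH_RISK_NATIONALITIES` list are invented purely to show how a single nationality feature can dominate which queue an application lands in.

```python
# Hypothetical sketch of a traffic-light visa triage system.
# All rules, weights, and lists here are invented for illustration;
# the Home Office's real criteria have not been published.

HIGH_RISK_NATIONALITIES = {"CountryA", "CountryB"}  # placeholder values


def risk_level(application: dict, use_nationality: bool = True) -> str:
    """Return 'red', 'amber', or 'green' for a visa application."""
    score = 0
    if use_nationality and application.get("nationality") in HIGH_RISK_NATIONALITIES:
        score += 2  # one feature can outweigh everything else
    score += min(application.get("prior_refusals", 0), 2)  # cap the history term

    if score >= 3:
        return "red"    # more likely to be refused
    if score >= 1:
        return "amber"
    return "green"      # fast-tracked


app = {"nationality": "CountryA", "prior_refusals": 1}
print(risk_level(app))                         # 'red' with nationality in the mix
print(risk_level(app, use_nationality=False))  # 'amber' once nationality is dropped
```

The `use_nationality` flag mirrors the interim arrangement described above: under these made-up weights, the same application moves out of the red queue as soon as nationality is removed from the scoring.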

Broader trend: Algorithms are known to entrench institutional biases, especially racist ones. Yet they are increasingly used to help make important decisions, from credit checks to visa applications to pretrial hearings and policing. Critics have complained that the US immigration system is racially biased too. But in most cases it is hard to unpack exactly how these algorithms work, or to expose evidence of their bias, because many are proprietary and their use has little public oversight.

But criticism is growing. In the US, some police departments are suspending controversial predictive algorithms, and tech companies have stopped supplying biased face recognition technology. In February, a Dutch court ruled that a system that predicted how likely a person was to commit welfare or tax fraud was unlawful because it unfairly targeted minorities. The UK Home Office’s decision to review its system without waiting for a legal ruling could prove to be a milestone.
