The news: The UK Home Office has said it will stop using an algorithm to process visa applications that critics claim is racially biased. Opponents argue that the algorithm's use of nationality to decide which applications get fast-tracked has led to a system in which "people from rich white countries get 'Speedy Boarding'; poorer people of color get pushed to the back of the queue."
Time for a redesign: The Home Office denies that its system is racially biased, and litigation is still ongoing. Even so, it has agreed to drop the algorithm and plans to relaunch a redesigned version later this year, after conducting a full review that will look for unconscious bias. In the meantime, the UK will adopt a temporary system that does not use nationality to sort applications.
Traffic-light system: Since 2015, the UK has filtered visa applications using a traffic-light system that assigns each applicant a red, amber, or green risk level. Applicants rated red were more likely to be refused.
Broader trend: Algorithms are known to entrench institutional biases, especially racist ones. Yet they are being used more and more to help make important decisions, from credit checks to visa applications to pretrial hearings and policing. Critics have complained that the US immigration system is racially biased too. But in most cases, unpacking exactly how these algorithms work and exposing evidence of their bias is hard because many are proprietary and their use has little public oversight.
But criticism is growing. In the US, some police departments are suspending controversial predictive algorithms, and tech companies have stopped supplying biased face recognition technology. In February, a Dutch court ruled that a system predicting how likely a person was to commit welfare or tax fraud was unlawful because it unfairly targeted minorities. The UK Home Office's decision to review its system without waiting for a legal ruling could prove to be a milestone.