AI is learning when it should and shouldn’t defer to a human
The context: Studies show that when people and AI systems work together, they can outperform either one acting alone. Medical diagnostic systems are often reviewed by human doctors, and content moderation systems filter what they can before escalating to human moderators. But algorithms are rarely designed to optimize for this AI-to-human handover. If they were, the AI system would defer to its human counterpart only when the person could actually make a better decision.
The research: Researchers at MIT’s Computer Science and AI Laboratory (CSAIL) have now developed an AI system that performs this kind of optimization based on the strengths and weaknesses of the human collaborator. It uses two separate machine-learning models: one makes the actual decision, whether that’s diagnosing a patient or removing a social media post, and the other predicts whether the AI or the human is the better decision maker.
The latter model, which the researchers call “the rejector,” iteratively improves its predictions based on each decision maker’s track record over time. It can also take into account factors beyond performance, including a person’s time constraints or a doctor’s access to sensitive patient information not available to the AI system.
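The classifier-plus-rejector setup described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the researchers’ actual system) using scikit-learn: a synthetic task, a simulated expert whose accuracy varies by region, and a “rejector” trained on the track record of both decision makers to predict when deferring pays off.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic task: the true label is sign(x0), except in a "hard" region
# (x1 > 1.2) where it is flipped -- a pattern a linear model misses.
X = rng.normal(size=(2000, 2))
hard = X[:, 1] > 1.2
y = np.where(hard, X[:, 0] <= 0, X[:, 0] > 0).astype(int)

# Simulated expert: very accurate in the hard region, mediocre elsewhere.
human_correct = rng.random(2000) < np.where(hard, 0.95, 0.6)
human_pred = np.where(human_correct, y, 1 - y)

tr, te = slice(0, 1000), slice(1000, 2000)

# Model 1: the classifier that makes the actual decision.
clf = LogisticRegression().fit(X[tr], y[tr])

# Model 2: the "rejector" -- trained on both track records to predict,
# from the input alone, where the human beats the model.
model_correct = clf.predict(X[tr]) == y[tr]
rej = LogisticRegression().fit(
    X[tr], (human_correct[tr] & ~model_correct).astype(int)
)

# At decision time, defer to the human wherever the rejector says so.
defer = rej.predict(X[te]).astype(bool)
hybrid = np.where(defer, human_pred[te], clf.predict(X[te]))

acc_model = (clf.predict(X[te]) == y[te]).mean()
acc_hybrid = (hybrid == y[te]).mean()
print(f"model alone: {acc_model:.2f}  hybrid: {acc_hybrid:.2f}")
```

On this toy task the rejector learns to route only the hard region to the expert, so the hybrid beats both the model alone (weak in the hard region) and the expert alone (mediocre elsewhere). The real system adds the iterative updating and extra factors described above.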
The results: The researchers tested the hybrid human-AI approach in a variety of scenarios, including image recognition and hate speech detection. The AI system was able to adapt to the expert’s behavior and defer when appropriate, allowing the two decision makers to quickly reach a combined accuracy higher than that of a previous hybrid human-AI approach.
Case study: While these experiments are still relatively simple, the researchers believe such an approach could eventually be applied to complex decisions in health care and elsewhere. Consider an AI system that helps doctors prescribe the right antibiotic. While broad-spectrum antibiotics are highly effective, their overuse can lead to antibiotic resistance. Specific antibiotics, on the other hand, avoid that problem but should only be used if they have a high chance of working. Given this trade-off, the AI system could learn to adapt to doctors with different prescribing biases and correct for tendencies to over- or under-prescribe broad-spectrum antibiotics.