Who’s going to save us from bad AI?
Plus: DeepMind does math.
To receive The Algorithm in your inbox every Monday, sign up here.
Welcome to The Algorithm!
About damn time. That was the response from AI policy and ethics wonks to news last week that the Office of Science and Technology Policy, the White House’s science and technology advisory agency, had unveiled an AI Bill of Rights. The document is Biden’s vision of how the US government, technology companies, and citizens should work together to hold the AI sector accountable.
It’s a great initiative, and long overdue. The US has so far been one of the only Western nations without clear guidance on how to protect its citizens against AI harms. (As a reminder, these harms include wrongful arrests, suicides, and entire cohorts of schoolchildren being marked unjustly by an algorithm. And that’s just for starters.)
Tech companies say they want to mitigate these sorts of harms, but it’s really hard to hold them to account.
The AI Bill of Rights outlines five protections Americans should have in the AI age, including data privacy, the right to be protected from unsafe systems, and assurances that algorithms shouldn’t be discriminatory and that there will always be a human alternative. Read more about it here.
So here’s the good news: The White House has demonstrated mature thinking about different kinds of AI harms, and this should filter down to how the federal government thinks about technology risks more broadly. The EU is pressing on with regulations that ambitiously try to mitigate all AI harms. That’s great but incredibly hard to do, and it could take years before their AI law, called the AI Act, is ready. The US, on the other hand, “can tackle one problem at a time,” and individual agencies can learn to handle AI challenges as they arise, says Alex Engler, who researches AI governance at the Brookings Institution, a DC think tank.
And the bad: The AI Bill of Rights is missing some pretty important areas of harm, such as law enforcement and worker surveillance. And unlike the actual US Bill of Rights, the AI Bill of Rights is more an enthusiastic recommendation than a binding law. “Principles are frankly not enough,” says Courtney Radsch, US tech policy expert for the human rights organization Article 19. “In the absence of, for example, a national privacy law that sets some boundaries, it’s only going part of the way,” she adds.
The US is walking a tightrope. On the one hand, America doesn’t want to seem weak on the global stage when it comes to this issue. The US plays perhaps the most important role in AI harm mitigation, since most of the world’s biggest and richest AI companies are American. But that’s the problem. Globally, the US has to lobby against rules that would set limits on its tech giants, and domestically it’s loath to introduce any regulation that could potentially “hinder innovation.”
The next two years will be critical for global AI policy. If the Democrats don’t win a second term in the 2024 presidential election, it is very possible that these efforts will be abandoned. New people with new priorities might drastically change the progress made so far, or take things in a completely different direction. Nothing is set in stone.
DeepMind’s game-playing AI has beaten a 50-year-old record in computer science
They’ve done it again: AI lab DeepMind has used its board-game-playing AI AlphaZero to discover a faster way to solve a fundamental math problem in computer science, beating a record that has stood for more than 50 years.
The researchers trained a new version of AlphaZero, called AlphaTensor, to play a game in which it learned the best series of steps for solving the math problem. It was rewarded for winning the game in as few moves as possible.
Why this is a big deal: The problem, matrix multiplication, is a crucial type of calculation at the heart of many different applications, from displaying images on a screen to simulating complex physics. It is also fundamental to machine learning itself. Speeding up this calculation could have a big impact on thousands of everyday computer tasks, cutting costs and saving energy. Read more from my colleague Will Heaven here.
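To give a concrete sense of what “speeding up this calculation” means, here is a minimal Python sketch contrasting the schoolbook way of multiplying two 2×2 matrices (8 scalar multiplications) with Strassen’s 1969 shortcut (7), the kind of decades-old record AlphaTensor improved on for certain matrix sizes. This is an illustrative sketch, not DeepMind’s actual method; the function names are my own.

```python
def naive_2x2(A, B):
    """Schoolbook 2x2 matrix product: 8 scalar multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return [[a*e + b*g, a*f + b*h],
            [c*e + d*g, c*f + d*h]]

def strassen_2x2(A, B):
    """Strassen's 2x2 product: only 7 scalar multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the 7 products into the 4 entries of the result.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert naive_2x2(A, B) == strassen_2x2(A, B)  # both give [[19, 22], [43, 50]]
```

Applied recursively to large matrices, block by block, shaving even one multiplication per step compounds into real savings, which is why a lower multiplication count matters so much.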
Bits and Bytes
Google released an impressive text-to-video AI
Just a week after Meta’s text-to-video AI reveal, Google has upped the ante. The videos that its system Imagen Video produces are of much higher definition than Meta’s. But, like Meta, Google is not releasing its model into the wild, because of “social biases and stereotypes which are challenging to detect and filter.” (Google)
Google’s new AI can hear a snippet of a song—and then keep on playing
The technique, called AudioLM, generates naturalistic sounds without the need for human annotation. (MIT Technology Review)
Even after $100 billion, self-driving cars are going nowhere
What a quote from Anthony Levandowski, one of the field’s biggest stars: "Forget about profits—what’s the combined revenue of all the [AV] companies? Is it a million dollars? Maybe. I think it’s more like zero." (Bloomberg Businessweek)
Robotics companies have pledged not to weaponize their tech
Six of the largest robotics companies in the world, including Boston Dynamics, have pledged not to weaponize their robots. (Unless, of course, it is for governments’ defense purposes.)
Meanwhile, defense AI startup Anduril says it has developed loitering munitions, also known as suicide drones, and this is apparently just the start of its new weapons program. I wrote last summer about how business is booming for military AI startups. The invasion of Ukraine has prompted militaries to update their arsenals—and Silicon Valley stands to capitalize. (MIT Technology Review)
This is life in the Metaverse
A fun story about life in the Metaverse and its early adopters. This is the first Metaverse story where I could kinda see the appeal of it. (But it didn’t make me want to plug and play anytime soon.) (The New York Times)
There’s a new AI that allows you to create interiors
The model was built in five days using the open-source text-to-image model Stable Diffusion to generate snazzy interiors. It’s great to see people using the model to build new applications. On the downside, I can totally see this tech being used for Airbnb and real estate scams. (InteriorAI)
See you next time,