This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
If regulators don’t act now, the generative AI boom will concentrate Big Tech’s power even further. That’s the central argument of a new report from research institute AI Now. And it makes sense. To understand why, consider that the current AI boom depends on two things: large amounts of data, and enough computing power to process it.
Both of these resources are only really available to big companies. And although some of the most exciting applications, such as OpenAI’s chatbot ChatGPT and Stability.AI’s image-generation AI Stable Diffusion, are created by startups, they rely on deals with Big Tech that give them access to its vast data and computing resources.
“A couple of big tech firms are poised to consolidate power through AI rather than democratize it,” says Sarah Myers West, managing director of the AI Now Institute, a research nonprofit.
Right now, Big Tech has a chokehold on AI. But Myers West believes we’re actually at a watershed moment. It’s the start of a new tech hype cycle, and that means lawmakers and regulators have a unique opportunity to ensure that the next decade of AI technology is more democratic and fair.
What separates this tech boom from previous ones is that we have a better understanding of all the catastrophic ways AI can go awry. And regulators everywhere are paying close attention.
China just unveiled a draft bill on generative AI calling for more transparency and oversight, while the European Union is negotiating the AI Act, which will require tech companies to be more transparent about how generative AI systems work. It’s also planning a bill to make them liable for AI harms.
The US has traditionally been reluctant to regulate its tech sector. But that’s changing. The Biden administration is seeking input on ways to oversee AI models such as ChatGPT—for example, by requiring tech companies to produce audits and impact assessments, or by mandating that AI systems meet certain standards before they are released. It’s one of the most concrete steps the administration has taken to curb AI harms.
Meanwhile, Federal Trade Commission chair Lina Khan has also highlighted Big Tech’s advantage in data and computing power and vowed to ensure competition in the AI industry. The agency has dangled the threat of antitrust investigations and crackdowns on deceptive business practices.
This new focus on the AI sector is partly influenced by the fact that many members of the AI Now Institute, including Myers West, have spent time at the FTC.
Myers West says her stint taught her that AI regulation doesn’t have to start from a blank slate. Instead of waiting for AI-specific regulations such as the EU’s AI Act, which will take years to put into place, regulators should ramp up enforcement of existing data protection and competition laws.
Because AI as we know it today is largely dependent on massive amounts of data, data policy is also artificial-intelligence policy, says Myers West.
Case in point: ChatGPT has faced intense scrutiny from European and Canadian data protection authorities, and it has been blocked in Italy for allegedly scraping personal data off the web illegally and misusing personal data.
The call for regulation is not just coming from government officials. Something interesting has happened. After decades of fighting regulation tooth and nail, today most tech companies, including OpenAI, claim they welcome it.
The big question everyone’s still fighting over is how AI should be regulated. Though tech companies claim they support regulation, they’re still pursuing a “release first, ask questions later” approach when it comes to launching AI-powered products. They are rushing to release image- and text-generating AI models as products even though these models have major flaws: they make up nonsense, perpetuate harmful biases, infringe copyright, and contain security vulnerabilities.
The White House’s proposal to tackle AI accountability with post-launch measures such as algorithmic audits is not enough to mitigate AI harms, AI Now’s report argues. Stronger, swifter action is needed to ensure that companies first prove their models are fit for release, Myers West says.
“We should be very wary of approaches that do not put the burden on companies. There are a lot of approaches to regulation that essentially put the onus on the broader public and on regulators to root out AI-enabled harms,” she says.
And importantly, Myers West says, regulators need to take action swiftly.
“There need to be consequences for when [tech companies] violate the law.”
How AI is helping historians better understand our past
This is cool. Historians have started using machine learning to examine historical documents smudged by centuries spent in mildewed archives. They’re using these techniques to restore ancient texts, and making significant discoveries along the way.
Connecting the dots: Historians say the application of modern computer science to the distant past helps draw broader connections across the centuries than would otherwise be possible. But there is a risk that these computer programs introduce distortions of their own, slipping bias or outright falsifications into the historical record. Read more from Moira Donovan here.
Bits and bytes
Google is overhauling Search to compete with AI rivals
Threatened by Microsoft’s relative success with AI-powered Bing search, Google is building a new search engine that uses large language models, and upgrading its existing search engine with AI features. It hopes the new search engine will offer users a more personalized experience. (The New York Times)
Elon Musk has created a new AI company to rival OpenAI
Over the past few months, Musk has been trying to hire researchers to join his new AI venture, X.AI. Musk was one of OpenAI’s cofounders, but he was ousted in 2018 after a power struggle with CEO Sam Altman. Musk has accused OpenAI’s chatbot ChatGPT of being politically biased and says he wants to create “truth-seeking” AI models. What does that mean? Your guess is as good as mine. (The Wall Street Journal)
Stability.AI is at risk of going under
Stability.AI, the creator of the open-source image-generating AI model Stable Diffusion, just released a new version of the model whose results are slightly more photorealistic. But the business is in trouble. It’s burning through cash fast and struggling to generate revenue, and staff are losing faith in the CEO. (Semafor)
Meet the world’s worst AI program
Martin, the bot on Chess.com depicted as a turtleneck-wearing Bulgarian man with bushy eyebrows, a thick beard, and a slightly receding hairline, is designed to be absolutely awful at chess. While other AI bots are programmed to dazzle, Martin is a reminder that even dumb AI systems can still surprise, delight, and teach us. (The Atlantic)