
The Download: Europe vs Chinese EVs, and making AI vision less biased

Plus: Getty is confident its AI won't run into trouble with copyright

September 26, 2023

This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

Europe is working to slow down the global expansion of Chinese EVs

Earlier this month, the European Commission announced it is launching an anti-subsidy investigation into electric vehicles coming from China. 

The move has long been in the making. The rapid recent growth in popularity of Chinese-made electric vehicles in Europe has raised alarms for the domestic automobile industry on the continent. No matter how it shakes out, an official inquiry could hurt the expansion of the Chinese EV business at a critical moment. Read the full story.

—Zeyi Yang

These new tools could make AI vision systems less biased

Computer vision systems are everywhere. They help classify and tag images on social media feeds, detect objects and faces in pictures and videos, and highlight relevant elements of an image. 

However, they are riddled with biases, and they’re less accurate for images of Black or brown people and women. And there’s another problem: the current ways researchers find biases in these systems are themselves biased, sorting people into broad categories that don’t properly account for the complexity that exists among human beings. 

Two new papers by researchers at Sony and Meta propose new ways to measure biases in computer vision systems so as to more fully capture the rich diversity of humanity. Developers could use these tools to check the diversity of their data sets, helping lead to better, more diverse training data for AI. Read the full story.

—Melissa Heikkilä

Getty Images promises its new AI contains no copyrighted art

The news: Getty Images is so confident its new generative AI model is free of copyrighted content that it will cover any potential intellectual-property disputes for its customers. 

The background: The generative AI system, announced yesterday, was built by Nvidia and is trained solely on images in Getty’s image library. It does not include logos or images scraped from the internet without consent, and the company is confident that the creators of the images—and any people who appear in them—have consented to having their art used.

Why it matters: The past year has seen a boom in generative AI systems that produce images and text. But AI companies are embroiled in numerous legal battles over copyrighted content, after prominent artists and authors sued them. Read the full story.

—Melissa Heikkilä

What’s changed since the “pause AI” letter six months ago?

Last week marked six months since the Future of Life Institute (FLI), a nonprofit focusing on existential risks surrounding artificial intelligence, shared an open letter signed by famous people such as Elon Musk, Steve Wozniak, and Yoshua Bengio. 

The letter called for tech companies to “pause” the development of AI language models more powerful than OpenAI’s GPT-4 for six months—which didn’t happen, obviously. 

Melissa Heikkilä, our senior AI reporter, sat down with MIT professor Max Tegmark, the founder and president of FLI, to take stock of what has happened since, and what should happen next. Read the full story.

This story is from The Algorithm, our weekly AI newsletter. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Here’s what’s lurking inside Meta’s AI database
A whole lot of Shakespeare, erotica, and, err, horror written for children. (The Atlantic $)
+ Meta’s latest AI model is free for all. (MIT Technology Review)

2 Hollywood’s writers’ strike may be nearing its end
A tentative agreement has been reached, though AI is still a sticking point. (Insider $)
+ It’ll still take plenty of time to get your favorite shows back on air, though. (Engadget)

3 FBI agents haven’t been trained to use facial recognition properly
But that’s not stopping the bureau from using it anyway. (Wired $)
+ A TikTok account has been doxxing random targets using the tech. (404 Media)
+ The movement to limit face recognition tech might finally get a win. (MIT Technology Review)

4 Making new antibiotics is an expensive business
And plenty of companies have gone bankrupt trying to make it happen. (WSJ $)
+ The future of a US plant that makes drugs for kids is hanging in the balance. (Bloomberg $)

5 A US regulator is combing through Wall Street’s private messages
Bankers are not supposed to use WhatsApp and Signal to discuss work matters. (Reuters)

6 To live longer, we need to rid ourselves of old cells
Enter a host of enthusiastic startups ready to rise to the challenge. (Economist $)
+ Can we find ways to live beyond 100? Millionaires are betting on it. (MIT Technology Review)

7 Sea ice levels in Antarctica have hit a record low
Even experienced scientists say they’re taken aback. (WP $)
+ The Earth could be heading towards forming a grim supercontinent. (The Atlantic $)
+ Unproven tech climate interventions are overhyped. (The Verge)

8 The case against exotic cultivated meat
Tiger steaks may sound intriguing, but they’re a conservation nightmare. (Vox)
+ Lab-grown meat just reached a major milestone. Here’s what comes next. (MIT Technology Review)

9 Taping your mouth shut isn’t that beneficial
Despite what TikTok would have you believe. (The Guardian)

10 Those AI subliminal messages aren’t as sinister as you may think
They’re more likely to be used for ads than coercive mind control. (Motherboard)

Quote of the day

“Sam will never speak an untruth.”

—Barbara Fried, mother of the disgraced FTX founder Sam Bankman-Fried, insists to the New Yorker that her son is incapable of dishonesty.

The big story

How Facebook got addicted to spreading misinformation

March 2021

When the Cambridge Analytica scandal broke in March 2018, it kicked off Facebook’s largest publicity crisis to date. It compounded fears that the algorithms determining what people see were amplifying fake news and hate speech, and prompted the company to start a team with a somewhat vague directive: to examine the societal impact of the company’s algorithms.

Joaquin Quiñonero Candela was a natural pick to head it up. In his six years at Facebook, he’d created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he’d spread those algorithms across the company. Now his mandate would be to make them less harmful. However, his hands were tied, and the drive to make money came first. Read the full story.

—Karen Hao

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet 'em at me.)

+ I can’t work out whether these Little Shop of Horrors cakes are cuter than they are horrifying.
+ The story of how the MIDI musical interface came to be is fascinating.
+ Souvenirs are more than just tourist tat: they remind us about the holiday stories we want to tell about ourselves.
+ It’s time to start planning those late fall vacations.
+ Sit back and dive into the eternal quest for the Golden Owl.

Stay connected

Illustration by Rose Wong
