Google cofounder Sergey Brin warns the AI boom isn’t all good
Alphabet is going all in on AI. But in his company’s annual founders’ letter, Sergey Brin says there are also hazards that need to be addressed in this current “renaissance.”
The good: Brin pointed out that when Google launched 20 years ago, neural networks were a nearly forgotten technology that experts had given up on. Now the tech giant uses them to do everything from understanding what’s in photos to discovering exoplanets.
The bad: The downsides are the usual suspects: automation, fairness, and safety issues. To remain a leader in what it calls the “ethical evolution of the field,” Alphabet takes part in (and funds) initiatives like DeepMind Ethics & Society and the Partnership on AI.
Good luck: Finding a balance between ethical uses and money-making ventures won’t be easy. Last month, a leak revealed that Google had worked with the Pentagon on computer-vision software for drones, and thousands of employees responded by signing a letter protesting the project.