Alphabet is going all in on AI. But in his company’s annual founders’ letter, Sergey Brin says there are also hazards that need to be addressed in this current “renaissance.”
The good: Brin pointed out that when Google launched 20 years ago, neural networks were a nearly forgotten technology that experts had given up on. Now the tech giant uses them for everything from understanding what’s in photos to discovering exoplanets.
The bad: The downsides are the usual suspects: automation, fairness, and safety issues. To remain a leader in what it calls the “ethical evolution of the field,” Alphabet takes part in (and funds) initiatives like DeepMind Ethics & Society and the Partnership on AI.
Good luck: Finding a balance between ethical uses and money-making ventures won’t be easy. Last month, a leak revealed that Google worked with the Pentagon on computer vision software for drones, and thousands of employees responded by signing a letter protesting the project.