Seven Must-Read Stories from the Past Week (May 11-17)
Another chance to catch up on the most interesting and important articles from the past week on MIT Technology Review.
- Treading Carefully, Google Encourages Developers to Hack Glass
  Breaking its own restrictions, Google will show developers how to build any kind of app for Google Glass at its I/O conference.
- New Kind of LED Could Mean Better Google-Glass-Like Displays
  Micro-display LED tech could light up the next generation of face-wearable gadgets.
- It’s Time to Talk about the Burgeoning Robot Middle Class
  A prominent roboticist asks: How will a mass influx of robots affect human employment?
- How to Mine Cell-Phone Data Without Invading Your Privacy
  Researchers use phone records to build a mobility model of the Los Angeles and New York City regions with new privacy guarantees.
- What It’s Like to See Again with an Artificial Retina
  Artificial retinas give the blind only the barest sense of what’s visible, but researchers are working hard to improve that.
- Can Carbon Capture Clean Up Canada’s Oil Sands?
  Alberta will serve as a test bed for large-scale carbon capture and sequestration.
- The Algorithm That Automatically Detects Polyps in Images from Camera Pills
  Analyzing the footage from camera pills is a time-consuming task for medical professionals. Now computer scientists are attempting to automate the process.