Finding Images
Searching for images on the Internet can be hit or miss. That’s because most image searches rely on metadata (text associated with the images, such as file names or dates), and metadata can be incomplete, if it’s there at all. Software that analyzes the images themselves has been notoriously unreliable. But it could get a boost from a technology developed at the University of California, San Diego.

The technology is based on existing systems that learn to describe pictured objects in terms of features like color, texture, and lines by practicing on pictures in a database of known objects. The UCSD system adds a new twist: it assigns each image a likelihood of belonging to categories such as “sky,” “mountain,” or “people.” Then it uses those words to label parts of the pictures. The technique is 40 percent more accurate than typical content-based image-search methods, says Nuno Vasconcelos, a UCSD professor.
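The idea of scoring image regions against semantic categories and then labeling them with the most likely word can be illustrated with a toy sketch. This is not the UCSD system itself; it is a minimal illustration assuming hypothetical prototype feature vectors (mean color/texture descriptors learned from a labeled database) and a softmax over distances to turn scores into likelihoods:

```python
import numpy as np

CATEGORIES = ["sky", "mountain", "people"]

def label_regions(region_features, category_prototypes):
    """Assign each image region a likelihood over semantic categories.

    region_features: (n_regions, n_features) array of low-level
        descriptors (e.g. color and texture statistics).
    category_prototypes: (n_categories, n_features) array of mean
        feature vectors learned from a database of known objects.
    Returns (labels, likelihoods): the most likely category name for
    each region and the full likelihood matrix.
    """
    # Negative squared distance to each prototype acts as a score:
    # regions resembling a category's training examples score high.
    diffs = region_features[:, None, :] - category_prototypes[None, :, :]
    scores = -np.sum(diffs ** 2, axis=2)
    # Softmax converts scores into a likelihood over categories.
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    likelihoods = exp / exp.sum(axis=1, keepdims=True)
    labels = [CATEGORIES[i] for i in likelihoods.argmax(axis=1)]
    return labels, likelihoods

# Toy example: 2-D "features" with obvious category prototypes.
prototypes = np.array([[0.0, 1.0],    # "sky"
                       [1.0, 0.0],    # "mountain"
                       [1.0, 1.0]])   # "people"
regions = np.array([[0.1, 0.9],       # feature vector near "sky"
                    [0.9, 0.1]])      # feature vector near "mountain"
labels, likelihoods = label_regions(regions, prototypes)
```

Once each region carries a likelihood over words like "sky" or "mountain", those words can be indexed and matched directly against a text query, which is what lets a content-based system answer keyword searches.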