Finding Images
Searching for images on the Internet can be hit or miss. That’s because most image searches rely on metadata (text associated with the images, such as file names or dates), and metadata can be incomplete, if it’s there at all. Software that analyzes the images themselves has been notoriously unreliable. But it could get a boost from a technology developed at the University of California, San Diego.

The technology is based on existing systems that learn to describe pictured objects in terms of features like color, texture, and lines by practicing on pictures in a database of known objects. The UCSD system adds a new twist: it assigns each image a likelihood of belonging to categories such as “sky,” “mountain,” or “people.” Then it uses those words to label parts of the pictures. The technique is 40 percent more accurate than typical content-based image-search methods, says Nuno Vasconcelos, a UCSD professor.
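The approach the article describes, scoring an image against a set of category models and then using the winning words as labels, maps naturally onto a simple likelihood-based classifier. Below is a minimal sketch, assuming one Gaussian-mixture model per category fitted to crude color-and-texture features of image regions; the feature choice, the category list, and the model settings are illustrative assumptions, not the published UCSD system.

```python
# Sketch: label image regions by per-category likelihood.
# Assumption: each region is an H x W x 3 numpy array; each category
# ("sky", "mountain", "people", ...) has a mixture model trained on
# example regions known to depict it.
import numpy as np
from sklearn.mixture import GaussianMixture

CATEGORIES = ["sky", "mountain", "people"]  # example labels from the article

def region_features(region: np.ndarray) -> np.ndarray:
    """Describe a region by mean color plus a crude texture measure
    (per-channel standard deviation), giving a 6-dimensional vector."""
    pixels = region.reshape(-1, 3)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

def train_models(examples: dict[str, list[np.ndarray]]) -> dict[str, GaussianMixture]:
    """Fit one mixture model per category from labeled example regions."""
    models = {}
    for category, regions in examples.items():
        feats = np.stack([region_features(r) for r in regions])
        models[category] = GaussianMixture(n_components=2).fit(feats)
    return models

def label_region(region: np.ndarray, models: dict[str, GaussianMixture]) -> str:
    """Assign the category whose model gives the region's features the
    highest average log-likelihood."""
    feats = region_features(region).reshape(1, -1)
    return max(models, key=lambda c: models[c].score(feats))
```

A real system would use far richer features and many more categories, but the structure, one probabilistic model per word scored against each region, is the "likelihood of belonging to each category" idea the article describes.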