Self-Taught Software

Google’s image recognition software improves search
August 21, 2012

Source: “Building High-level Features Using Large Scale Unsupervised Learning”

Platonic ideal: This composite image represents the ideal stimulus for Google’s software to recognize a cat face.

Quoc Le et al.

International Conference on Machine Learning, Edinburgh, U.K., June 26–July 1, 2012

Results: Researchers at Google developed software, modeled on the way biological neurons interact with each other, that taught itself to distinguish objects in YouTube videos. Although it was most effective at recognizing cats and human faces, the system could recognize 3,200 items in all, a 70 percent improvement over the previous best-performing software.

Why it matters: The approach could help image recognition technology identify a much wider range of objects than it can now. That could make image search engines more powerful or help make robots better at interpreting their surroundings.

Methods: Previous image recognition software learned to recognize specific objects by being shown examples labeled by humans, such as a series of images with faces marked. Google’s system doesn’t need labeled examples and can learn from any image, which means the objects it recognizes aren’t limited to a small number of domains in which it has been trained. The software finds patterns in images and sorts them into categories of objects, in part by brute force: 1,000 computers worked together to sort through 10 million images from YouTube, harnessing much more processing power than is typical for image recognition systems.
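The core trick, learning feature detectors from unlabeled data, can be illustrated with a toy sparse autoencoder. The Python sketch below is a minimal, single-layer version of that idea; the real system was a far larger, multi-layer network trained on the 10 million YouTube images across 1,000 machines described above, and every size and hyperparameter in the sketch is an illustrative assumption rather than a value from the paper.

```python
# Toy sketch of unsupervised feature learning: a single-layer sparse
# autoencoder learns features from unlabeled "image patches" without any
# human labels. All sizes and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Unlabeled inputs: random data stands in for real 8x8 pixel patches.
n_patches, patch_dim, n_features = 1000, 64, 25
X = rng.random((n_patches, patch_dim))

W_enc = rng.normal(0.0, 0.1, (patch_dim, n_features))   # encoder weights
W_dec = rng.normal(0.0, 0.1, (n_features, patch_dim))   # decoder weights
lr, sparsity_weight = 0.1, 0.01

for epoch in range(100):
    H = sigmoid(X @ W_enc)          # hidden features
    X_hat = H @ W_dec               # reconstruction of the input
    err = X_hat - X                 # reconstruction error

    # Gradients of the reconstruction loss plus an L1 penalty that pushes
    # hidden activations toward sparsity (the "sparse" in sparse autoencoder).
    grad_W_dec = H.T @ err / n_patches
    grad_pre = ((err @ W_dec.T) + sparsity_weight * np.sign(H)) * H * (1.0 - H)
    grad_W_enc = X.T @ grad_pre / n_patches

    W_enc -= lr * grad_W_enc
    W_dec -= lr * grad_W_dec

# Each column of W_enc is a learned feature detector. In the Google system,
# some high-level units ended up responding selectively to cats and faces.
print("learned feature matrix shape:", W_enc.shape)
```

Running the sketch only demonstrates the mechanics; the notable result in the paper is that, at scale, individual high-level units became selective for complex objects such as cat faces without ever being told what a cat is.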

Next steps: Google has moved the project out of its research division and into the part of the business responsible for search. The new techniques could be used to improve speech recognition, translation, and image search technology.

Speedier Data Storage

Improved phase-change devices could replace all forms of computer memory

Source: “Breaking the Speed Limits of Phase-Change Memory”

Shi Luping et al.

Science 336: 1566–1569

Results: Researchers at the Data Storage Institute of the Agency for Science, Technology and Research in Singapore and the University of Cambridge, U.K., created a version of phase-change memory that operates an order of magnitude faster than any before, flipping from a digital 0 to a 1 in just 500 picoseconds (500 trillionths of a second). It’s approximately 1,000 times faster than the type of memory it’s meant to replace.

Why it matters: Phase-change memory is a leading candidate to replace the flash memory used in memory cards, mobile devices, and newer laptop computers, because it can store data more densely and at a faster rate. The new speed record suggests it could even be fast enough to take the place of the short-term memory in computers, known as DRAM.

Methods: Phase-change memory represents digital 1s and 0s by using an electric current to flip a metallic alloy between crystalline and disordered forms. Crystal growth speeds up at higher temperatures, so the researchers used a weak electric field to preheat the memory cells, enabling them to become crystalline more quickly when necessary. Tests that repeated the process 10,000 times showed that the new approach did not reduce the performance of a phase-change memory cell over time. Although the preheating technique means the memory consumes more energy, the researchers say it doesn’t use much more than a conventional design.
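A rough way to see why preheating helps: crystallization is a thermally activated process, so even a modest rise in the cell’s starting temperature shortens the time it takes to crystallize. The short Python sketch below works through that arithmetic with a simple Arrhenius-style model; the activation energy and temperatures are assumed, illustrative values, not numbers reported in the paper.

```python
# Rough, illustrative model of why preheating speeds up crystallization.
# The characteristic crystallization time falls roughly as exp(Ea / (k_B * T))
# when the cell temperature T rises. All numbers below are assumptions chosen
# only to illustrate the effect; they are not values from the paper.
import math

k_B = 8.617e-5            # Boltzmann constant in eV/K
E_a = 2.0                 # assumed activation energy for crystallization, eV

def relative_crystallization_time(temp_kelvin):
    """Crystallization time up to a constant prefactor (Arrhenius form)."""
    return math.exp(E_a / (k_B * temp_kelvin))

T_unheated = 400.0        # assumed cell temperature during a normal write pulse, K
T_preheated = 417.0       # assumed cell temperature with the weak preheating field, K

speedup = (relative_crystallization_time(T_unheated)
           / relative_crystallization_time(T_preheated))
print(f"illustrative switching speedup from preheating: ~{speedup:.0f}x")
```

With these assumed values the model gives roughly an order-of-magnitude reduction in switching time, in line with the kind of gain the paper reports, though the real numbers depend on the specific alloy and cell design.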

Next steps: The researchers intend to investigate whether changes to the phase-change material or to the way the cells are preheated will deliver even greater speed increases.
