
Google’s First Mobile Chip Will Turbocharge Image Processing

October 17, 2017

And it’s already inside its latest smartphone, the Pixel 2. The Verge reports that Google’s new Pixel Visual Core chipset is designed to make image processing faster and smoother. It has eight processor cores that are meant to make HDR+ image processing—which increases dynamic range, reduces noise, and improves colors in pictures—five times faster than the same operations performed on the Pixel 2’s main CPU, while using just a tenth as much energy. It’s not clear why Google didn’t announce the chip when it launched the phone earlier this month.

Perhaps more interesting, though, is what the silicon could be used for in the future. Google tells Ars Technica that the Pixel Visual Core is designed "to handle the most challenging imaging and machine learning applications," and that more applications for the hardware will be made available over time. That, along with the impressive overall speed and efficiency, suggests that Google may have designed the system in order to give resource-heavy machine-learning tasks, like image recognition or AI-powered picture retouching, a shot in the arm.

Currently, many AI features on smartphones have to be outsourced to algorithms running on the cloud. But specialized chips for mobile devices—an increasing trend, with dedicated AI chips also appearing in Apple’s iPhone X and Huawei’s new Mate 10—and smart ways to shrink down AI algorithms will make it possible to do more intelligent processing right on the devices. That will make mobile AI not only less laggy but more secure.
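The article doesn't say which techniques are meant by "smart ways to shrink down AI algorithms," but one common approach is post-training quantization: storing a model's 32-bit floating-point weights as 8-bit integers, cutting memory and bandwidth roughly fourfold at a small cost in precision. The sketch below is a minimal illustration of the idea, not Google's actual pipeline:

```python
# Minimal sketch of post-training quantization: map float weights to
# signed 8-bit integers plus a shared scale factor.

def quantize(weights, num_bits=8):
    """Map float weights to signed integers plus a scale factor."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for int8
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / qmax if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized form."""
    return [v * scale for v in q]

weights = [0.42, -1.3, 0.07, 0.9, -0.55]   # stand-in for model weights
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Rounding error is bounded by half the scale factor per weight.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Each quantized value fits in a single byte, and the reconstruction error per weight is at most half the scale factor, which is why quantized networks usually lose little accuracy while running much faster on mobile hardware.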

