Artificial intelligence

Why Facebook wants to design its own AI chips

April 19, 2018

By following the lead of Google and Apple, the company could build processors that speed up its many AI algorithms or even power new hardware products.

The news: Bloomberg reports that Facebook is hoping to hire someone to build out an “end-to-end” chip development organization at the firm. The job listing specifically mentions application-specific integrated circuits (ASICs), which are built to perform very particular tasks, such as facial recognition, as efficiently as possible. It also mentions system-on-a-chip (SoC) hardware, which is often used in mobile products or small devices—the likes of which Facebook could put inside its Oculus VR headset or a (currently delayed) smart speaker.

Why it matters: Chips are a multibillion-dollar business, and cutting out a middleman like Intel saves a lot of money. Plus, as Moore’s Law grinds to a halt, it’s getting harder and harder to find speed gains in general-purpose chips—so designing new ones for very specific, in-house tasks helps firms boost performance. Facebook will be hoping for both cost savings and speed improvements.

Joining the ranks: More and more tech companies are designing their own chips, threatening the business of companies like Qualcomm, Intel, and Nvidia. Google is building AI chips to power its data centers and open-source software. Apple keeps developing new chips to run its mobile products. And Microsoft just announced its latest chip, built for IoT products.

