
Tiny AI

April 2, 2020
Julia Dufossé


  • Why it matters: Our devices no longer need to talk to the cloud for us to benefit from the latest AI-driven features.
  • Key players: Google, IBM, Apple, Amazon
  • Availability: Now

We can now run powerful AI algorithms on our phones.

AI has a problem: in the quest to build more powerful algorithms, researchers are using ever greater amounts of data and computing power, and relying on centralized cloud services. This not only generates alarming amounts of carbon emissions but also limits the speed and privacy of AI applications.

But a countertrend of tiny AI is changing that. Tech giants and academic researchers are working on new algorithms to shrink existing deep-learning models without losing their capabilities. Meanwhile, an emerging generation of specialized AI chips promises to pack more computational power into tighter physical spaces and to train and run AI on far less energy.
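For readers curious what "shrinking" a model looks like in practice, here is a minimal sketch using post-training quantization, one common compression technique, via TensorFlow Lite. The toy model and its layer sizes are placeholders chosen for illustration, and none of the companies named above have said this particular recipe is what powers their on-device features.

```python
import tensorflow as tf

# Build a small stand-in Keras model (a placeholder for whatever model
# you actually want to shrink; the layer sizes here are arbitrary).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Baseline: convert to TensorFlow Lite with no compression.
baseline = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Compressed: the same conversion with post-training dynamic-range
# quantization, which stores weights as 8-bit integers instead of
# 32-bit floats and typically shrinks the file roughly 4x.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized = converter.convert()

print(f"float32 model:    {len(baseline) / 1024:.1f} KB")
print(f"quantized model:  {len(quantized) / 1024:.1f} KB")

# The quantized .tflite file can be bundled with a mobile app and run
# entirely on-device, with no round trip to a cloud server.
with open("model_quantized.tflite", "wb") as f:
    f.write(quantized)
```

Quantization is only one approach; pruning away redundant weights and distilling a large model into a smaller "student" model are other common ways to trade a little accuracy for a much smaller, faster network.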

These advances are just starting to become available to consumers. Last May, Google announced that it can now run Google Assistant on users’ phones without sending requests to a remote server. As of iOS 13, Apple runs Siri’s speech recognition capabilities and its QuickType keyboard locally on the iPhone. IBM and Amazon now also offer developer platforms for making and deploying tiny AI.

All this could bring about many benefits. Existing services like voice assistants, autocorrect, and digital cameras will get better and faster without having to ping the cloud every time they need access to a deep-learning model. Tiny AI will also make new applications possible, like mobile-based medical-image analysis or self-driving cars with faster reaction times. Finally, localized AI is better for privacy, since your data no longer needs to leave your device to improve a service or a feature.

But as the benefits of AI become distributed, so will all its challenges. It could become harder to combat surveillance systems or deepfake videos, for example, and discriminatory algorithms could also proliferate. Researchers, engineers, and policymakers need to work together now to develop technical and policy checks on these potential harms.
