Sponsored

On-Device Processing and AI Go Hand-in-Hand

As on-device processing becomes more powerful, and AI grows more prevalent, our future will increasingly be defined by the convergence of these two game-changing trends

In partnership with Qualcomm

Whether operating an autonomous vehicle, using facial recognition to access your bank account, or keeping your device safe from fast-changing security threats, artificial intelligence (AI) is playing a bigger role in our lives. In the past, the power of the cloud was required for this processing, but edge devices, such as smartphones and drones, are now equipped to run compute-intensive AI operations. In fact, in many cases, the edge device is the preferred platform for running AI-powered applications.

The reality is that more AI applications are emerging today than most of us realize. When a device is equipped with AI, it can vastly expand and improve our lives, whether by capturing sharper images and videos, communicating with us more naturally, or perceiving the environment and autonomously navigating us to our destination safely.

“AI is an umbrella term designed to encompass anything that helps a device replicate the human brain,” says Gary Brotman, director of product management for Qualcomm. “Machine learning (ML) is a broad class of techniques and algorithms to solve the problems that make AI possible. The class we focus on is deep learning (DL) and recurrent neural networks (RNN), running on the actual device.”

Significant improvements in AI algorithms and on-device processing, two crucial ingredients for making AI ubiquitous, are leading to more seamless and compelling user experiences. This is particularly true as AI-based functionality moves to vehicles, household devices, and Internet of Things (IoT) sensors. Enhanced perceptive and cognitive capabilities, enabled by the many technologies under the AI umbrella, such as ML, DL, and RNNs, can now run on modern edge devices.

For example, on-device AI can improve image recognition and advanced image processing, such as producing bokeh effects (a soft out-of-focus background) and style transfers. AI-equipped devices can also learn to recognize keywords and voices, improving their response to the consumer and aiding in foreign language translation.

In addition, AI can help devices and apps become more aware of user preferences and surroundings, understand intent, and respond in contextually relevant ways. “AI on your device results in a more contextually rich experience,” says Brotman. “And over time your device will be able to predict and have a deeper understanding of what you’re going to do next.”

AI in Your Hand

On-device AI has several significant benefits. “The first is performance. Processing on the device is just faster—no roundtrip to the cloud,” says Brotman. “Privacy is next. People are comfortable sharing some personal data, but not all of it. And the third is reliability. Mobile networks are pervasive, but there’s no guarantee you’ll always have a connection.”

Performance: Running AI algorithms on the device—independent of the cloud—can greatly improve response time and efficiency, as data doesn’t need to be transferred between the cloud and the device. This is important because mobile AI capabilities tend to be time-sensitive for user experience and decision making. 

“AI apps tend to be real-time and mission-critical,” says Jeff Gehlhaar, vice president of technology for Qualcomm. “Many AI-use cases that enhance an experience can’t afford latency.” 

An autonomous vehicle that needs to apply its brakes, for example, can’t afford even a millisecond of latency that might result from cloud processing. Decisions must be made in a split second for the vehicle to operate safely.

In terms of user experience, a natural voice user interface can tolerate only so much latency. Users expect immediate responses from a natural language speech interface, and network delays quickly lead to a poor experience.
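The latency gap described above can be sketched with a toy simulation. The figures are illustrative assumptions, not measurements: a hypothetical 5 ms of model compute and an 80 ms network round-trip to a cloud endpoint.

```python
import time

NETWORK_RTT_S = 0.080  # assumed cloud round-trip time (illustrative, not measured)

def run_inference(compute_s: float = 0.005) -> None:
    """Stand-in for a small model's compute cost (hypothetical 5 ms)."""
    time.sleep(compute_s)

def on_device_latency() -> float:
    """Inference runs locally: latency is just the compute time."""
    start = time.perf_counter()
    run_inference()
    return time.perf_counter() - start

def cloud_latency() -> float:
    """Inference in the cloud: the network round-trip is added on top."""
    start = time.perf_counter()
    time.sleep(NETWORK_RTT_S)  # request and response travel time
    run_inference()
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"on-device: {on_device_latency() * 1000:.1f} ms")
    print(f"via cloud: {cloud_latency() * 1000:.1f} ms")
```

Under these assumptions the cloud path is always slower by the full round-trip time, which is the margin a braking decision or a voice response cannot afford.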

Privacy and security: Keeping your data on the device helps protect your privacy, and AI itself powers biometric authentication using voice, fingerprint, iris, and facial recognition. “Using your face to unlock a device is becoming commonplace,” says Brotman. “And 3D facial recognition is emerging to provide a higher degree of authenticity for enabling mobile payments.”

On-device processing of AI applications can also increase both device and data security by maintaining a watchful eye for aberrant behavior. “AI can help detect malware and anomalous behavior,” says Gehlhaar. “We can train the neural network to see how bad actors behave. And it can detect those bad behaviors, like asking, ‘Why is my camera application opening my contact database?’”
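A minimal sketch of that idea, with a toy allow-list standing in for a trained neural network and hypothetical (app, resource) access events:

```python
# Illustrative sketch only (not Qualcomm's implementation): flag behavior
# never seen during normal operation, e.g. a camera app reading contacts.

def learn_baseline(events):
    """Record which (app, resource) accesses occurred during normal use."""
    return set(events)

def is_anomalous(baseline, event):
    """An access pattern outside the learned baseline is suspicious."""
    return event not in baseline

# Hypothetical "normal behavior" log used to build the baseline.
normal = [("camera", "lens"), ("camera", "storage"),
          ("mail", "contacts"), ("mail", "network")]
baseline = learn_baseline(normal)

print(is_anomalous(baseline, ("camera", "contacts")))  # True: suspicious
print(is_anomalous(baseline, ("mail", "contacts")))    # False: normal
```

A real system would learn statistical patterns rather than an exact set, but the principle is the same: model normal behavior, then flag deviations locally on the device.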

Reliability: Even in the most advanced areas of the world, mobile network coverage is not ubiquitous. When it comes to certain AI-driven capabilities, however, there’s no room for error. Autonomous vehicles simply can’t afford to experience a dropped wireless signal, such as might occur upon entering a tunnel or parking garage. On-device processing, in addition to other redundancy features, will always be a requirement for mission-critical usages like autonomous driving.

Bringing AI to Edge Devices

While these AI functions can now run on the device, the cloud still has a role, particularly as a complement to on-device processing. AI apps still rely on cloud platforms to manage big data and to “train” the neural network models that drive AI inference.
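The division of labor described here, training in the cloud and inference on the device, can be sketched in miniature. The model, data, and serialization format below are all hypothetical; only the compact trained weights would ship to the device.

```python
import json
import math

def train_in_cloud(samples):
    """'Cloud' side: fit a one-feature logistic model by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(500):
        for x, y in samples:
            p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability
            w -= 0.1 * (p - y) * x                # gradient step on weight
            b -= 0.1 * (p - y)                    # gradient step on bias
    return json.dumps({"w": w, "b": b})           # small artifact to ship

def infer_on_device(weights_json, x):
    """'Device' side: load shipped weights and classify with no network call."""
    m = json.loads(weights_json)
    return 1 / (1 + math.exp(-(m["w"] * x + m["b"]))) > 0.5

# Hypothetical labeled data: class 0 below x=2, class 1 above.
weights = train_in_cloud([(0.0, 0), (1.0, 0), (3.0, 1), (4.0, 1)])
print(infer_on_device(weights, 3.5))  # True
print(infer_on_device(weights, 0.5))  # False
```

The heavy, data-hungry training step stays in the cloud; the device receives only the finished weights and runs fast, private, offline-capable inference.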

Edge devices themselves must also be equipped to effectively run AI workloads. For example, the processing must happen within the platform’s constraints, including power consumption and thermal limits. Application processors with diverse processing engines are particularly well-suited for efficiently running AI tasks. The Qualcomm Snapdragon Mobile Platform, for example, is equipped with three separate processing engines—a central processing unit (CPU), graphics processing unit (GPU), and digital signal processor (DSP) with vector processing capabilities—all of which play key roles in on-device AI.

“With heterogeneous computing, there are a variety of different engines within the chip to most efficiently process a given task,” says Pat Lawlor, technical marketing staff manager at Qualcomm. “The CPU, GPU, and DSP have different strengths and weaknesses, and they can work together or separately, depending on the AI task. They complement each other, and the AI tasks run on the appropriate engines for high performance at low power.”
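The routing idea can be illustrated with a toy dispatcher. The engine names and task categories below are simplified assumptions for illustration, not the Snapdragon scheduler:

```python
# Toy sketch of heterogeneous dispatch: send each AI task to the engine
# best suited to it. Strengths listed here are illustrative assumptions.
ENGINE_STRENGTHS = {
    "cpu": {"control_flow", "sparse_ops"},          # flexible, general-purpose
    "gpu": {"matrix_multiply", "convolution"},      # massively parallel float math
    "dsp": {"vector_ops", "quantized_inference"},   # low-power vector math
}

def pick_engine(task_kind: str) -> str:
    """Choose the first engine whose strengths cover the task; default to CPU."""
    for engine, strengths in ENGINE_STRENGTHS.items():
        if task_kind in strengths:
            return engine
    return "cpu"

print(pick_engine("convolution"))          # gpu
print(pick_engine("quantized_inference"))  # dsp
print(pick_engine("parsing"))              # cpu (fallback)
```

A real scheduler would also weigh power budget, thermal headroom, and current load, but the principle is the same: match each workload to the engine that runs it most efficiently.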

The increased processing power built into the chipsets of modern edge devices helps them handle this intense processing. For example, the Qualcomm Hexagon 685 DSP, Adreno 630 GPU, and Kryo 385 CPU in the Snapdragon 845 can deliver up to two to three times faster AI processing than the previous generation. The Hexagon DSP was originally designed for vector math-intensive workloads like audio processing and continues to be enhanced to address AI workloads, such as accelerating neural networks during AI inference.
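The vector math at the heart of that workload boils down to multiply-accumulate operations over quantized values. A toy scalar version, purely to illustrate the kind of arithmetic a vector DSP performs across many lanes at once (the scale factor and values are arbitrary):

```python
# Illustrative only: the multiply-accumulate work a vector engine speeds up
# during neural-network inference, shown here as a quantized dot product.

def quantize(values, scale=127.0):
    """Map floats in [-1, 1] to int8-range integers, as quantized models use."""
    return [round(v * scale) for v in values]

def dot_int(a, b):
    """Integer multiply-accumulate: one lane of a vector engine's work."""
    return sum(x * y for x, y in zip(a, b))

weights = quantize([0.5, -0.25, 0.75])      # hypothetical layer weights
activations = quantize([0.2, 0.4, -0.1])    # hypothetical inputs
print(dot_int(weights, activations))
```

Integer math like this is far cheaper in power than floating point, which is one reason quantized inference maps so well onto a DSP.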

What’s Next for Mobile AI?

Mobile AI is a rapidly growing market. With continual advancements in neural networks, DL algorithms, and hardware design, we will see vast improvements in accuracy and speed, plus new, immersive user experiences.

In the broader universe of mobility, 5G wireless networks are also on the horizon. “AI will improve and augment 5G and vice versa,” says Brotman. “5G will enable devices to more freely communicate with each other to share data and share context.” With this development, we will experience a fully connected universe of intelligent edge devices, facilitating more personalized, real-time user experiences.

Our lives today are made richer by the capabilities of our devices, and our future will increasingly be improved by the advancements being made in AI. The convergence of these two powerful trends is already shaping experiences in our personal and business lives.

To learn more about on-device AI, visit qualcomm.com/artificial-intelligence.