Mobile Call Quality Gets a Long-Overdue Upgrade
Wireless companies and a few ambitious startups are racing to make your cell-phone calls better.
About two-thirds of adult Americans own smartphones, and calls are still one of the primary ways we use them to communicate.
While apps have turned smartphones into digital Swiss Army knives that can do everything from tracking your heartbeat to hailing you a cab, the phone part of the smartphone hasn’t gotten much better over the years.
We’re accustomed to mobile phone calls that sound muffled and choppy, making it hard to hear exactly what others are saying and how they’re saying it. It’s one of the reasons why we increasingly use other methods, like texting, e-mail, and instant messaging, to get points across to a group or keep a record of a discussion.
And yet calling is still one of the most common things we do with our phones: a recent weeklong Pew Internet survey that asked people how they’d used their phones in the previous hour found that 92 percent of smartphone owners made voice or video calls at least once over the course of the survey, making it second in popularity only to texting.
In hopes of making these calls better-sounding and much more useful, companies ranging from big wireless carriers to scrappy startups are now working on improvements like network upgrades, better microphones, and apps.
Why, exactly, do cell-phone calls often sound crappy? Jerry Gibson, a professor of electrical and computer engineering at the University of California, Santa Barbara, who studies wireless networks, says that at the network level there are a number of reasons. You simply might not have good signal reception, and even if you do, the base station closest to you is probably considering all kinds of factors—the load on the cell site, the time of day, and network use projections—in order to decide how to allocate bandwidth for your call.
To improve sound quality, wireless carriers have been upgrading their networks to support technologies often referred to as HD voice or wideband audio. While old telephone and cell-phone networks cut out some of the high and low frequencies in voice calls, wideband audio includes a wider range of frequencies to make calls sound better, letting you hear more high and low tones. In the United States, Verizon Wireless, AT&T, and T-Mobile use VoLTE, or voice over LTE, to send audio atop their fast LTE networks. Sprint currently offers advanced voice service over its 3G network, though Ron Marquardt, Sprint’s vice president of technology innovation and architecture, says a move to VoLTE is inevitable.
Most customers wouldn’t know it, though. These wideband technologies currently don’t work from one network to another, so if you’re a Verizon customer and your friend is on AT&T, calls you make to each other won’t use the new technology. (AT&T and Verizon have said they’re working on making their VoLTE connections interoperable some time in 2015.)
Also, in order for you to hear the difference, your phone has to support the network technology and include the proper audio codec—software that compresses audio so it can be sent across the wireless network and decompresses audio that it receives. Since mobile networks were initially built with the limitations of traditional telephone networks in mind, cell phones were made with narrowband codecs. A growing number of phones use wideband codecs that support higher-resolution voice calls—more recent iPhones and Android smartphones from companies like Samsung, LG, and Motorola among them—but many of us still carry phones that don’t support them.
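To see why the wider band matters, it helps to count what actually gets through. The sketch below is an illustration, not any codec's real implementation: it uses the commonly cited passbands for narrowband telephony (roughly 300–3,400 Hz) and wideband "HD voice" (roughly 50–7,000 Hz) and checks how many harmonics of a typical speaking voice fall inside each.

```python
# Illustrative only: compare how much of a voice's harmonic series
# survives a narrowband vs. a wideband telephone passband.
# The frequency ranges are the commonly cited figures, not pulled
# from any specific codec implementation.

NARROWBAND = (300.0, 3400.0)   # Hz, traditional telephone band
WIDEBAND = (50.0, 7000.0)      # Hz, "HD voice" (e.g., AMR-WB) band

def surviving_harmonics(fundamental_hz, band, max_hz=8000.0):
    """Return the harmonics of a voice that fall inside a passband."""
    low, high = band
    harmonics = []
    f = fundamental_hz
    while f <= max_hz:
        if low <= f <= high:
            harmonics.append(f)
        f += fundamental_hz
    return harmonics

# A typical adult male fundamental is around 120 Hz.
nb = surviving_harmonics(120.0, NARROWBAND)
wb = surviving_harmonics(120.0, WIDEBAND)
print(f"narrowband keeps {len(nb)} harmonics, topping out at {max(nb):.0f} Hz")
print(f"wideband keeps {len(wb)} harmonics, topping out at {max(wb):.0f} Hz")
```

The narrowband case drops both the fundamental itself and everything above about 3,400 Hz, which is why old-style calls sound thin and muffled; the wideband case keeps more than twice as many harmonics at both ends of the spectrum.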
Beyond codecs, some startups think there are other ways smartphones can be improved so that calls sound a lot better.
Cypher, a startup based in Draper, Utah, is working on algorithms that can better recognize and isolate the sounds of human speech. Many smartphones have multiple microphones, and John Yoon, Cypher’s vice president of product, says the company analyzes the signals received from the microphones in small slices—comparing them with each other and also with an existing set of characteristics that define what human speech sounds like, such as the way the sound of your voice dies out rather than stopping suddenly when you finish speaking. Cypher determines what noise to filter and what speech to let listeners hear, and it uses what Yoon describes as a “musical noise filter” to smooth everything out.
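Cypher's actual algorithms are proprietary, but the general multi-microphone idea Yoon describes can be sketched in toy form. The code below is pure invention for illustration: it splits audio into short slices and passes through only the slices where a primary (mouth-facing) microphone is clearly louder than a reference microphone, a crude stand-in for deciding "this slice is probably speech."

```python
# A toy sketch of multi-microphone speech gating -- NOT Cypher's
# algorithm, just an illustration of the general approach the
# article describes: compare short slices from two mics and
# suppress the slices that look like noise rather than speech.

def frame_energy(frame):
    """Average signal energy of one short slice of samples."""
    return sum(s * s for s in frame) / len(frame)

def gate_speech(primary, reference, frame_len=4, ratio=2.0):
    """Zero out slices where the primary mic isn't clearly louder
    than the reference mic (treated here as 'probably just noise')."""
    out = []
    for i in range(0, len(primary), frame_len):
        p = primary[i:i + frame_len]
        r = reference[i:i + frame_len]
        if frame_energy(p) > ratio * frame_energy(r):
            out.extend(p)                # keep: likely speech
        else:
            out.extend([0.0] * len(p))   # suppress: likely noise
    return out

# Primary mic: a loud "speech" burst, then quiet; reference mic: steady hum.
primary   = [0.9, 1.0, 0.8, 0.9,  0.1, 0.1, 0.1, 0.1]
reference = [0.2, 0.2, 0.2, 0.2,  0.2, 0.2, 0.2, 0.2]
print(gate_speech(primary, reference))
```

A real system would work on frequency-domain slices, smooth the transitions (hence Yoon's "musical noise filter"), and compare against learned models of speech rather than a fixed loudness ratio, but the core move is the same: use the extra microphones to decide what to keep and what to throw away.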
Yoon says Cypher is talking to handset makers about getting its software into smartphones, which he expects to happen this year.
Efforts to improve the phone call aren’t just focusing on sound quality; some app makers are also trying to make the experience of being on a call more useful and fun.
Talko, for instance, focuses on group calls that can be recorded, tagged, and bookmarked (see “An App That Actually Wants You to Talk on Your Phone”). Another app, Yallo, can record calls and search for words and phrases, annotate a call with a note to let the recipient know why you want to talk, and automatically reconnect calls that drop.
“One thing that drove us to it is the realization that every application on our phone has been touched—has been modernized—except for the core of the phone, which is the phone call,” says Tal Elyashiv, Yallo’s founder and CEO.
The company, which is based in Israel, rolled out its first app for Android smartphones in April, and Elyashiv expects an iPhone app to follow within the next two months.
Elyashiv says Yallo aims to bring the kind of personalization we see in text-focused apps to voice calls. While e-mailing and texting can be great for many things, he says, they can’t compare to talking when it comes to things like resolving conflicts or making complex decisions.
“It does not work as well with text as it does with voice,” he says.