If we’ve learned anything from covid-19, it’s the extent to which our lives are enmeshed with those of the people around us. We interact constantly, spreading our germs and picking up theirs. That’s why exposure notifications—using your phone to tell you if you’ve crossed paths with an infected person—seemed so promising.
Why it matters:
Covid exposure notifications didn’t live up to the hype. But there’s still a lot to learn from their rollout.
Technology offered a way to automate time-honored contact tracing efforts in which public health investigators ask patients to retrace their footsteps in order to deduce where they got infected. Did they interact with a clerk at the store, a classroom of children, a thousand passengers on a cruise ship? Apps meant disease sleuths wouldn’t have to rely on an individual’s memory, and they could ease strain on the authorities monitoring an outbreak.
That idea sparked a remarkable wave of development and cooperation. Some programmers had systems up and running in weeks, open-sourcing their code and sharing it freely so that countries as far apart as Canada and Mongolia could essentially use the same system. Meanwhile Apple and Google, rivals in almost every usual respect, collaborated on a system that worked on smartphones and kept health data anonymous and private. By January, MIT Technology Review was tracking 77 exposure notification apps being used by governments around the world.
Like many things meant to slow the pandemic, however, digital contact tracing hasn’t yielded the lifesaving results we needed. In fact, it barely made a dent. Why?
A challenge too great
In many countries, limiting the spread of covid simply seemed too hard a problem for contact tracing to solve. Slow action, mixed messages, mismanagement, and neglect all played a part: despite lockdowns, travel restrictions, and mask mandates, the virus kept infecting people. It didn’t matter whether you were riding on a bus, gathering for dinner, or toasting at the White House.
Exposure notifications also suffered from mistrust and a lack of clear messaging. Some people didn’t believe their own government’s warnings about the virus. Others were all too conscious of Silicon Valley’s checkered reputation when it came to privacy. At a time when people’s relationship with technology was so fraught already, companies that weren’t even involved in exposure notifications, such as Facebook, may have indirectly deterred their adoption.
What if this had happened when everyone was happier with tech companies? “I think about that all the time,” says Julie Samuels, who helped lead the team that built New York state’s app. “The pendulum swung the other way.”
Privacy wasn’t just an abstract concern. For groups, like Black Americans, with good reasons to distrust the authorities—reasons based on personal experiences or historical harms—handing information over to the government for contact tracing could be a nonstarter.
In hindsight, a bigger push to earn trust appears to have been the crucial missing element: notifications become more effective as more people opt in, and higher adoption rates required a foundation of trust to be built first. The strength or weakness of that foundation affects us all, not just those who opt out.
“Viruses are not that selective,” says Stephanie Mayfield, who directs the US covid response for the nonprofit Resolve to Save Lives. “If we don’t look out and take care of each other, we all pay a price.”
Even when privacy protections were put in the foreground, as with Apple and Google’s system, that created other problems. The system isn’t tied to your identity and doesn’t track your location; instead, it uses Bluetooth to anonymously ping nearby phones running the same app. But with this technique, turning a positive result into an alert is so complex that public health experts weren’t able to learn much about where clusters were forming or how the disease was spreading.
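The core idea behind that anonymous design can be illustrated with a toy sketch. This is not the actual Apple/Google protocol (which derives rotating identifiers cryptographically from periodically changing keys); it is a simplified, hypothetical model showing why the server never learns who met whom, and why that same property leaves health authorities with so little epidemiological detail:

```python
import secrets

class Phone:
    """Toy model of a privacy-preserving exposure notification app."""

    def __init__(self):
        self.my_ids = []        # random identifiers this phone has broadcast
        self.heard_ids = set()  # identifiers picked up from nearby phones

    def broadcast_id(self):
        # A fresh random identifier; nothing in it links back to the owner,
        # their location, or any previous identifier.
        rid = secrets.token_hex(16)
        self.my_ids.append(rid)
        return rid

    def hear(self, rid):
        # Nearby phones record each other's identifiers over Bluetooth.
        self.heard_ids.add(rid)

    def check_exposure(self, published_ids):
        # When someone tests positive, their identifiers are published.
        # Matching happens locally on each phone, so no central party
        # learns the contact graph -- which is also why public health
        # experts got so little cluster data out of these systems.
        return bool(self.heard_ids & set(published_ids))

# Two phones cross paths; a third never does.
alice, bob, carol = Phone(), Phone(), Phone()
bob.hear(alice.broadcast_id())

# Alice tests positive and her identifiers are published.
print(bob.check_exposure(alice.my_ids))    # True: Bob is alerted
print(carol.check_exposure(alice.my_ids))  # False: Carol never met Alice
```

The trade-off the article describes falls out of the last method: because matching is purely local and identifiers are unlinkable, an alert carries no information about where or in what cluster the exposure happened.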
Privacy concerns aside, there were other practical questions about exposure notifications. Did the people at highest risk own the smartphones required to run the apps? How would the services operate across state or international borders? And was there enough testing in the first place?
Nobody building these systems thought they would be a silver bullet, but the struggle was a stark reminder of how technology can fail to solve a problem even when its creators have the best intentions.
Contact tracing works best as part of what experts sometimes call the Swiss cheese model, which involves layering several strategies. One method may have holes, but many combined can form a solid block.
Do this right, and “you could almost stop a pandemic in its tracks,” says Rajeev Venkayya, who was part of the US team that helped design the George W. Bush administration’s plan to deal with future pandemics.
For covid, the appropriate layers would include comprehensive testing, effective contact tracing, and social distancing—but with few of those layers in place, the virus ran wild. And once the spread is rampant, contact tracing simply isn’t enough.
The promise ahead
Despite its shortcomings, digital contact tracing may still have a future. The arrival of multiple vaccines gives hope that case numbers will drop to manageable levels. At that point, Venkayya says, “having all the tools that we can at our disposal—including robust testing and tracing—will be really important. You are just trying to keep up and to limit the damage that’s being done.”
In the US, as the Biden administration gets up to speed, federal or national solutions (like pushing for nationwide use of contact tracing apps) may be part of the answer—along with monitoring tools like Bluetooth beacons, tracking bracelets, and QR codes that you scan to enter a cafe or workplace.
But the most important takeaways from our global experiment with exposure notifications may be less about the technology and more about how to implement it. The glitchy rollout has made it clear that introducing innovations—for this pandemic or the next—will require us to build trust, increase access and equity, and consider technology’s place in complex systems.
Progress, of course, is about looking ahead. But as contact tracing reminds us, it’s just as important to retrace our steps.