
Google says it’s too easy for hackers to find new security flaws

Attackers are exploiting the same types of software vulnerabilities over and over again, because companies often miss the forest for the trees.

In December 2018, researchers at Google detected a group of hackers with their sights set on Microsoft’s Internet Explorer. Even though new development on the browser had been shut down two years earlier, it remains so widely installed that anyone who finds a way to hack it has a potential open door to billions of computers.

The hackers were hunting for, and finding, previously unknown flaws, known as zero-day vulnerabilities.


Soon after the hackers were spotted, the researchers saw one of their exploits being used in the wild. Microsoft issued a patch and fixed the flaw, sort of. In September 2019, another, similar vulnerability was found being exploited by the same hacking group.


More discoveries in November 2019, January 2020, and April 2020 added up to at least five zero-day vulnerabilities from the same bug class being exploited in short order. Microsoft issued multiple security updates: some failed to actually fix the vulnerability being targeted, while others were so narrow that the hackers needed to change only a line or two of their code to make the exploit work again.

This saga is emblematic of a much bigger problem in cybersecurity, according to new research from Maddie Stone, a security researcher at Google: that it’s far too easy for hackers to keep exploiting insidious zero-days because companies are not doing a good job of permanently shutting down flaws and loopholes.

The research by Stone, who is part of a Google security team known as Project Zero, spotlights multiple examples of this in action, including problems that Google itself has had with its popular Chrome browser.

“What we saw cuts across the industry: Incomplete patches are making it easier for attackers to exploit users with zero-days,” Stone said on Tuesday at the security conference Enigma. “We’re not requiring attackers to come up with all new bug classes, develop brand new exploitation, look at code that has never been researched before. We’re allowing the reuse of lots of different vulnerabilities that we previously knew about.”

Low hanging fruit

Project Zero operates inside Google as a unique and sometimes controversial team that is dedicated entirely to hunting the enigmatic zero-day flaws. These bugs are coveted by hackers of all stripes, and more highly prized than ever before—not necessarily because they are getting harder to develop, but because, in our hyperconnected world, they’re more powerful.

Over its six-year lifespan, Google’s team has publicly tracked over 150 major zero-day bugs, and in 2020 Stone’s team documented 24 zero-days that were being exploited—a quarter of which were extremely similar to previously disclosed vulnerabilities. Three were incompletely patched, which meant that it took just a few tweaks to the hacker’s code for the attack to continue working. Many such attacks, she says, involve basic mistakes and “low hanging fruit.”


For hackers, “it’s not hard,” Stone said. “Once you understand a single one of those bugs, you could then just change a few lines and continue to have working zero-days.”

Why aren’t they being fixed? Most of the security teams at software companies have limited time and resources, she suggests, and if their priorities and incentives are flawed, they check only that they’ve fixed the very specific vulnerability in front of them instead of addressing the bigger problems at the root of many vulnerabilities.

Other researchers confirm that this is a common problem.

“In the worst case, a couple of zero-days that I discovered were an issue of the vendor fixing something on one line of code and, on literally the next line of code, the exact same type of vulnerability was still present and they didn’t bother to fix it,” says John Simpson, a vulnerability researcher at the cybersecurity firm Trend Micro. “We can all talk till we’re blue in the face but if organizations don’t have the right structure to do more than fix the precise bug reported to them, you get such a wide range of patch quality.” 
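The pattern Simpson describes is easy to picture in code. The sketch below is purely hypothetical and not drawn from any vendor’s product: a length check has been bolted onto the copy that was reported, while the nearly identical copy a few lines later still trusts attacker-controlled input.

```c
/* Hypothetical illustration of an incomplete patch (not any vendor's code):
 * the first copy is now bounds-checked, but the nearly identical copy right
 * below it keeps the same bug, so a trivially different exploit still works. */
#include <stdint.h>
#include <string.h>

#define NAME_MAX_LEN 32

struct record {
    char name[NAME_MAX_LEN];
    char alias[NAME_MAX_LEN];
};

void parse_record(struct record *out,
                  const uint8_t *name, size_t name_len,
                  const uint8_t *alias, size_t alias_len)
{
    /* The "patch": a length check added after the bug was reported here... */
    if (name_len > NAME_MAX_LEN)
        return;
    memcpy(out->name, name, name_len);

    /* ...but the same attacker-controlled length is still trusted here,
     * overflowing the struct exactly as before. */
    memcpy(out->alias, alias, alias_len);
}
```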

A big part of changing this comes down to time and money: giving engineers more space to investigate new security vulnerabilities, find the root cause, and fix the deeper issues that often surface in individual vulnerabilities. They should also be given time for variant analysis, Stone said: looking for the same vulnerability in different places, or for other vulnerabilities in the same blocks of code.
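A root-cause fix, by contrast, removes the underlying pattern everywhere it occurs, which is the kind of sibling bug variant analysis is meant to surface. Continuing the same hypothetical example from above, one way to do that is to route every copy through a single checked helper:

```c
/* Continuing the hypothetical example: a root-cause fix enforces the length
 * invariant in one helper, so every call site is covered and variant analysis
 * has nothing left to find. */
#include <stdint.h>
#include <string.h>

#define NAME_MAX_LEN 32

struct record {
    char name[NAME_MAX_LEN];
    char alias[NAME_MAX_LEN];
};

/* One place that rejects oversized input, instead of ad hoc checks per call site. */
static int copy_bounded(char *dst, size_t dst_len, const uint8_t *src, size_t src_len)
{
    if (src_len > dst_len)
        return -1;
    memcpy(dst, src, src_len);
    return 0;
}

int parse_record(struct record *out,
                 const uint8_t *name, size_t name_len,
                 const uint8_t *alias, size_t alias_len)
{
    if (copy_bounded(out->name, sizeof out->name, name, name_len) != 0)
        return -1;
    if (copy_bounded(out->alias, sizeof out->alias, alias, alias_len) != 0)
        return -1;
    return 0;
}
```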

Different fruit altogether

Some are already trying different approaches. Apple, for example, has managed to fix some of the iPhone’s most serious security risks by rooting out vulnerabilities at a deeper level.

In 2019 another Google Project Zero researcher, Natalie Silvanovich, made headlines when she presented critical zero-click, zero-day bugs in Apple’s iMessage. These flaws allowed an attacker to take over a person’s entire phone without the victim doing anything at all: no link to click, no message to open. (In December 2020, new research found a hacking campaign against journalists exploiting another zero-click, zero-day attack against iMessage.)

Instead of narrowly patching the specific vulnerabilities, the company went into the guts of iMessage to address the fundamental, structural problems that hackers were exploiting. Although Apple never said anything about the specific nature of these changes, announcing only a set of improvements with its iOS 14 software update, Project Zero’s Samuel Groß recently dissected iOS and iMessage closely and deduced what had taken place.


The app is now isolated from the rest of the phone by a feature called BlastDoor, which is written in the Swift language and makes it harder for hackers to access iMessage’s memory.

Apple also altered the architecture of iOS so that it’s more difficult to access the phone’s shared cache—a signature of some of the most high-profile iPhone hacks in recent years.

Finally, Apple blocked hackers from trying “brute force” attacks over and over in rapid succession. New throttling features mean that exploits that might have once taken minutes can now take hours or days to complete, making them much less enticing for hackers. 
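Apple hasn’t published the details of these throttling features, but the basic idea, making each failed attempt cost more time than the last, is simple to sketch. The toy example below (not Apple’s code) shows how exponential backoff turns a rapid brute-force loop into a multi-day wait:

```c
/* Hypothetical sketch of exploit throttling via exponential backoff:
 * after each failed attempt the enforced wait doubles, so a brute-force
 * loop that once took minutes now stretches into days. */
#include <stdio.h>

int main(void)
{
    double delay_seconds = 1.0;   /* wait imposed after the first failure */
    double total_seconds = 0.0;

    for (int failures = 1; failures <= 20; failures++) {
        total_seconds += delay_seconds;
        delay_seconds *= 2.0;     /* double the wait on every failure */
    }

    /* 20 failed attempts already cost roughly 12 days of enforced waiting. */
    printf("time spent waiting after 20 failures: %.1f hours\n",
           total_seconds / 3600.0);
    return 0;
}
```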

“It’s great to see Apple putting aside the resources for these kinds of large refactorings to improve end users’ security,” Groß wrote. “These changes also highlight the value of offensive security work: not just single bugs were fixed, but instead structural improvements were made based on insights gained from exploit development work.”

The consequences of hacks grow as we become more and more connected, which means it’s more important than ever for tech companies to invest in fixing the major cybersecurity problems that give birth to entire families of vulnerabilities and exploits.

“A piece of advice to their higher ups is invest, invest, invest,” Stone explained. “Give your engineers time to fully investigate the root cause of vulnerabilities and patch that, give them leeway to do variant analysis, reward work in reducing technical debt, focus on systemic fixes.”
