Defending Laptops from Zombie Attacks
Intel is developing more-accurate ways to tell when a machine has been infected.
Researchers at Intel have developed laptop-based security software that adjusts to the way an individual uses the Internet, providing a more dynamic and personalized approach to detecting malicious activity. The software is aimed at corporations that pass out laptops and mobile devices to employees, since IT departments usually install the same one-size-fits-all security software on all their hardware. The homogeneous security approach is quick and easy, says Nina Taft, a researcher at Intel Research Berkeley, but because standard software doesn’t take into account different people’s patterns of computer use, it can produce false positives and entirely miss some attacks.
“One reason security breaches are so rampant is that most of our machines look the same,” says Taft. They have the same operating systems, same applications, same protocols, and same Internet traffic thresholds in the security settings, she says. “When a hacker breaks into one machine, he can break into all of them … We’re trying to inject diversity into computers.”
The type of security software deployed by most IT departments has a component that looks at Internet traffic coming in and out of a computer. When traffic exceeds a preset threshold, the software suggests that the computer is infected. It might, for instance, have been recruited as part of a “botnet,” in which it is remotely controlled by a malicious computer that instructs it to communicate with other infected machines. (Much spam is sent from botnets.) Some people, however, habitually send out large amounts of information, which can trigger the security alarm, while others who stay well below the threshold can unknowingly harbor malicious activity.
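The weakness of a single preset threshold can be sketched in a few lines of Python. The threshold value and traffic figures below are hypothetical, chosen only to illustrate how a fixed cutoff misjudges both heavy legitimate users and quiet infected machines:

```python
# One-size-fits-all check: flag any machine whose outbound traffic
# exceeds a single preset threshold (value is illustrative).
FIXED_THRESHOLD_KB = 500  # same cutoff for every machine, in KB/min

def is_suspicious(outbound_kb_per_min: float) -> bool:
    """Flag traffic that exceeds the preset threshold."""
    return outbound_kb_per_min > FIXED_THRESHOLD_KB

# A heavy but legitimate user trips the alarm (false positive)...
print(is_suspicious(750))   # True
# ...while a quiet, infected machine stays under it (missed attack).
print(is_suspicious(120))   # False
```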
As part of a project called Proteus, Intel researchers have developed several algorithms that can make more nuanced judgments. One algorithm uses standard statistical and machine-learning techniques to monitor a person’s Internet use and create individualized traffic thresholds. A second algorithm gauges how people’s Internet use changes throughout the day. Taft has found that people’s habits are significantly different when they use company laptops to log in to networks other than the company’s. “Ninety percent of people have quite a different behavior when they’re at work than when they’re at home,” she says. Tying different traffic thresholds to different location profiles could improve security software’s ability to detect compromised machines.
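One simple way to realize the idea of individualized, per-location thresholds is to learn a separate traffic baseline for each network a user logs in to and flag only deviations from that baseline. The sketch below is an assumption-laden illustration, not Intel's actual algorithm: the profile names, sample figures, and the mean-plus-three-standard-deviations rule are all placeholders for whatever statistical model Proteus uses.

```python
import statistics

def learn_threshold(samples_kb, k=3.0):
    """Set the alarm threshold at mean + k standard deviations
    of this user's observed outbound traffic (KB/min)."""
    mean = statistics.fmean(samples_kb)
    std = statistics.pstdev(samples_kb)
    return mean + k * std

# Separate baselines for the work and home networks (hypothetical data).
profiles = {
    "work": learn_threshold([400, 450, 420, 480, 430]),
    "home": learn_threshold([80, 60, 90, 70, 100]),
}

def is_anomalous(location: str, outbound_kb: float) -> bool:
    """Judge traffic against the profile for the current network."""
    return outbound_kb > profiles[location]

# 300 KB/min is unremarkable at work but anomalous on the home network.
print(is_anomalous("work", 300))  # False
print(is_anomalous("home", 300))  # True
```

Because each machine ends up with its own set of thresholds, the same burst of traffic can be benign on one network and a red flag on another, which is exactly the diversity Taft describes.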
“I think the basic takeaway is, if you can be really precise in capturing user behavior, you can make the work of the attackers much harder,” Taft says. In order to successfully infect a machine that maintained a number of different usage profiles, a malicious hacker would need to know when each applied and what its traffic threshold was. “You limit the range of possibilities they have for succeeding,” Taft says.
A third set of Proteus algorithms uses the same behavioral principles to examine communication between laptops and other machines on the Internet. Botnets are coordinated by a central host with which each infected machine communicates. One way to detect botnets is to eavesdrop on these communications. “We developed algorithms that check for this calling-home activity with some regularity,” Taft says. Infected machines usually call home at 6-, 12-, or 24-hour intervals. Taft’s team has shown that by listening for periodic calls to the same location, the software can determine whether a machine has been recruited by any of three different botnets, including Storm, a pervasive network that controls hundreds of thousands, and possibly millions, of machines worldwide.
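The call-home pattern Taft describes can be illustrated by checking whether contacts to a single remote host recur at near-regular intervals. This is a minimal sketch under stated assumptions, not the Proteus detector itself: the candidate periods, jitter tolerance, and minimum number of repeats are all illustrative parameters.

```python
def looks_periodic(timestamps_h, periods_h=(6, 12, 24), tolerance=0.1):
    """Return True if the gaps between successive contacts to one host
    all cluster within `tolerance` of a known beaconing period (hours)."""
    gaps = [b - a for a, b in zip(timestamps_h, timestamps_h[1:])]
    if len(gaps) < 3:  # need several repeats before calling it regular
        return False
    for period in periods_h:
        if all(abs(g - period) <= tolerance * period for g in gaps):
            return True
    return False

# Beacons roughly every 12 hours, with small jitter: flagged.
print(looks_periodic([0.0, 12.1, 23.9, 36.0, 48.2]))  # True
# Ordinary, irregular visits to the same site: not flagged.
print(looks_periodic([0.0, 1.5, 9.0, 30.0, 31.2]))    # False
```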
Taft says that the idea of using behavioral data to make security software more accurate is not new, but that for the most part its application has been limited to routers that monitor network activity. Proteus is the first such system designed for laptops.
Taft isn’t yet sure how the final version of Proteus will affect the performance of the device it runs on. Initially, while the software is learning a user’s behavior, it will run constantly in the background, she says; after that, its activity drops to a much lower level. One possibility might be to hardwire Proteus into a computer’s circuitry. “Intel is interested in getting as much [security] into hardware as possible,” Taft says. “It’s a good use of [processing] cores, and when things are in hardware, they’re harder to tamper with.”
Nick Feamster, a professor of computer science at the Georgia Institute of Technology, says that the behavioral approach to security hasn’t been applied to laptops in the past because there wasn’t an automated way of developing personalized rules. But behavioral botnet protection is “very well suited for machine learning,” he says.
So far, the researchers have tested the system with 350 people and are in the middle of discussions with Intel’s IT department to do a wider deployment. In the end, however, Proteus won’t be enough to keep all computers safe all the time, according to Taft. “There are so many different ways to break in,” she says. “One will need many security checks on a computer.”