U.S. Military, Businesses Seek Better Defenses on the Inside

Research projects at the Pentagon highlight the need to prevent data theft that happens within an organization’s walls.

For most of the history of the Internet, companies and government agencies have split networks into two categories: internal, trusted systems and external, untrusted ones. The most common approach to security has been to erect a wall that treats data and communications as potentially dangerous if they come from outside and safe if they come from within.

Yet some of the most serious breaches, such as the massive handover of U.S. State Department cables to WikiLeaks late last year, come from corporate and government insiders. Even if they mean no harm, insiders can present security risks: several major data breaches have occurred after attackers tricked employees into downloading malicious software that took hold inside the organization’s firewall.

“In the early 2000s, you would see a lot of organizations focus on outsiders exclusively,” says Joji Montelibano, who leads the insider-threat technical team at the Software Engineering Institute’s CERT program at Carnegie Mellon University. “With the prevalence of information technology everywhere now, the ways an insider can harm an organization have increased dramatically.”

In hopes of counteracting the trend, the Defense Advanced Research Projects Agency (DARPA)—the research arm of the U.S. military—has called for research that would improve the government’s ability to identify threats from within. DARPA is taking a two-pronged approach: last August, an agency project named Cyber Insider Threat (CINDER) called for proposals for better systems to detect attackers who have already compromised a network. Two months later, DARPA launched Anomaly Detection at Multiple Scales (ADAMS), to detect insiders just before or after they go rogue.

The proposed ADAMS technology will likely model typical user behavior and alert managers when a user is acting off-profile. Such a system, for example, could have caught Bradley Manning, the U.S. intelligence analyst who is alleged to have leaked the diplomatic cables, by warning officials that Manning had suddenly accessed thousands of cables from his computer.
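The kind of off-profile alerting described above can be sketched very simply: compare a user's activity today against their own historical baseline and flag large deviations. This is only an illustrative toy, not DARPA's design; the function name and the z-score threshold are assumptions.

```python
import statistics

# Hypothetical sketch of profile-based anomaly detection: flag a user
# whose daily document-access count deviates sharply from their own
# history. Names and the threshold are illustrative, not ADAMS's design.
def is_anomalous(history, today, z_threshold=3.0):
    """Return True if today's count is far outside the user's baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_threshold

# A user who normally views ~20 documents a day suddenly pulls thousands.
baseline = [18, 22, 19, 25, 21, 17, 23]
print(is_anomalous(baseline, 5000))  # a spike like this exceeds the threshold
print(is_anomalous(baseline, 24))    # ordinary day-to-day variation does not
```

Real systems would model many signals at once (time of day, file types, network destinations), but the core idea is the same: the baseline is per-user, not global.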

“If I’m trying to get information out of my company, I’m probably going to start at the simplest level and work my way up—I would try to e-mail it to myself, I would try to post it to a website, or upload the file to a peer-to-peer network,” says Daniel Guido, a consultant with iSec Partners, who frequently tests firms’ security to identify potential weaknesses. “They are going to approach exfiltrating information outside the company in a very particular way, and if you think like they do, you will be much more effective” as a defender.
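A defender thinking like Guido's hypothetical insider might start by filtering for the simplest exfiltration path he names: mailing files to a personal account. The sketch below is invented for illustration; the field names, domain list, and size threshold are all assumptions, not any vendor's rules.

```python
# Illustrative outbound-mail filter for the simplest exfiltration path
# described above: e-mailing data to yourself. Domains and the size
# threshold are hypothetical examples, not a real product's policy.
PERSONAL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com"}

def suspicious_outbound(sender, recipient, attachment_mb):
    """Flag mail to a personal account carrying a sizable attachment."""
    domain = recipient.split("@")[-1]
    return domain in PERSONAL_DOMAINS and attachment_mb > 10

print(suspicious_outbound("j.doe@corp.example", "jdoe99@gmail.com", 250))
print(suspicious_outbound("j.doe@corp.example", "partner@corp.example", 250))
```

As Guido suggests, an insider blocked here would move up to the next channel (web upload, peer-to-peer), so each path needs its own check.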

The problem is difficult, though, if the systems attempt to take in many variables, says Malek Ben Salem, a graduate student and computer-science researcher at Columbia University. She has been trying to model search behavior in order to detect when an attacker is going beyond the normal scope of his job or impersonating someone with legitimate access. Because attackers might not know a file system or other aspects of a corporate network as well as a legitimate employee does, they tend to search more extensively. In experiments, Ben Salem says, her model has detected 100 percent of masqueraders with a false-positive rate of only 1 percent.
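Ben Salem's observation can be caricatured in a few lines: a masquerader unfamiliar with the file system wanders into many directories the account's legitimate owner never visits. This is a simplified, hypothetical sketch, not her actual model, and the breadth threshold is an assumption.

```python
# Simplified, hypothetical sketch of the masquerader-detection idea:
# an impostor searches far more broadly than the account's real owner.
# The threshold and paths are invented; this is not Ben Salem's model.
def looks_like_masquerader(session_dirs, usual_dirs, max_novel=5):
    """Flag a session that wanders into many directories the user never visits."""
    novel = set(session_dirs) - set(usual_dirs)
    return len(novel) > max_novel

usual = {"/home/alice/reports", "/home/alice/mail"}
wide_search = ["/etc", "/var/log", "/home/bob", "/srv/db",
               "/opt", "/root", "/tmp", "/usr/share"]
print(looks_like_masquerader(wide_search, usual))       # broad, unfamiliar exploration
print(looks_like_masquerader(["/home/alice/mail"], usual))  # routine access
```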

The CINDER project looks for activity in a system that suggests an attack launched from the inside. For instance, a worm like Stuxnet, which is believed to have damaged Iran’s nuclear program, could be detected by looking for the changes it has made to system files and network disks.
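Detecting changes to system files is, at its simplest, a file-integrity check: record cryptographic hashes of critical files, then flag any that later differ. The sketch below is a minimal version of that idea under stated assumptions; it is not CINDER's design, and the paths and workflow are illustrative.

```python
import hashlib
import os
import tempfile

# Minimal file-integrity sketch in the spirit of detecting the file
# changes a worm leaves behind. Not DARPA's design; paths are examples.
def snapshot(paths):
    """Map each path to the SHA-256 hash of its contents."""
    result = {}
    for p in paths:
        with open(p, "rb") as f:
            result[p] = hashlib.sha256(f.read()).hexdigest()
    return result

def changed_files(baseline, current):
    """Return the paths whose hashes differ from the recorded baseline."""
    return [p for p, h in current.items() if baseline.get(p) != h]

# Demo: baseline a file, tamper with it, and detect the change.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".cfg") as f:
    f.write("original contents")
    path = f.name
before = snapshot([path])
with open(path, "w") as f:
    f.write("tampered contents")
print(changed_files(before, snapshot([path])))  # the modified file is reported
os.unlink(path)
```

Production integrity monitors add tamper-resistant storage for the baseline and watch for changes in real time rather than on demand.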

“CINDER will attempt to address some of the flaws in current detection systems by modeling the adversary mission—not by attempting to monitor a person or their particular traits—and by beginning with the assumption that a given system has already been compromised,” Peiter “Mudge” Zatko, the manager in charge of the program at DARPA, said when the project was announced.

Increasingly, companies that sell security products are adding features that may help detect insider attacks. For example, firewalls and other security systems have been fortified with software that scans for encrypted e-mail. Some security companies advocate deploying decoy files that no employee should ever access, and alerting managers when they are accessed. Coupling decoy files with current research into modeling legitimate user behavior could detect a wide variety of attacks, says Columbia’s Ben Salem.
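The decoy-file idea is mechanically simple: seed paths that no legitimate workflow touches, then alert whenever one shows up in an access log. The log format, file names, and user names below are invented for illustration; this is a hedged sketch of the concept, not any vendor's product.

```python
# Hedged sketch of the decoy-file technique described above. The decoy
# paths and the "user path" log format are invented for illustration.
DECOYS = {
    "/shared/finance/passwords_backup.xlsx",
    "/shared/hr/all_salaries.csv",
}

def decoy_hits(access_log_lines):
    """Return (user, path) pairs for any access to a decoy file."""
    hits = []
    for line in access_log_lines:
        user, path = line.split(" ", 1)
        if path in DECOYS:
            hits.append((user, path))
    return hits

log = [
    "alice /shared/finance/q3_report.xlsx",
    "mallory /shared/finance/passwords_backup.xlsx",
]
print(decoy_hits(log))  # only mallory's access trips the alarm
```

Because no one should ever open a decoy, any hit is high-signal, which is what makes the technique attractive to pair with noisier behavioral models.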

However, insider attacks cannot be thwarted by just creating a better network appliance, says SEI CERT’s Montelibano. Better policies and security measures are important as well, such as allowing only approved applications to be run inside a network and limiting e-mail attachments.

“The big finding of our research is that insider threats are not just a technical problem,” he says. “What we still see is organizations throwing technology at the problem. But our research reveals that by and large, insiders’ technical activity is preceded by observable behavioral activity.”
