Data breaches like the one that hit the Pentagon’s e-mail system this week often start when one person makes a simple mistake like opening a phishing message. But the computer security industry is mostly built on tools that probe, patch, or scrutinize software rather than human errors.
Laura Bell, CEO of SafeStack, a security company in Auckland, New Zealand, thinks she has a way to address that discrepancy. She’s developing a kind of security scanner for people, in the form of software called Ava. It sends people targeted e-mails or social-media messages to see how good they are at resisting the scams that lead to dangerous breaches.
“If I’m the attacker, I’m going after the people,” says Bell, who presented Ava at the Black Hat computer security conference Thursday. “People are the path of least resistance, and we have to do something about it.”
Ava takes in data from corporate IT systems to map out the permissions that employees have and assess how frequently they communicate with each other. It also looks for employees’ social-media profiles and the connections between them, which can highlight key relationships that might be valuable to an attacker.
Ava can then be used to send phishing-style messages to employees to test how they respond. There might be a message from a senior executive asking a junior employee for a password, for example, or one from a distant coworker dropping the name of a friend and asking for a work document to be shared via Facebook.
The security industry does have some established ways to try to rein in what are called social-engineering attacks. Security training has become standard at many large organizations, and some companies occasionally stage phishing attacks to drive home the risks of fake e-mail. But Bell says the continual stream of breaches caused by human slip-ups shows that education doesn't work. Meanwhile, staged phishing tests remain rare, and the companies that do run them generally treat them as one-off, manual exercises, she says.
Ava is intended to let organizations probe communication patterns and key relationships continually, says Bell—resulting in something more like an automated defense system such as a firewall. That could make it possible to track changes in a company’s level of human vulnerability over time, perhaps uncovering relationships to project deadlines or training events, she says.
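Tracking "human vulnerability over time," as described above, could be as simple as computing the share of employees who respond to each round of simulated messages. The campaign data below is invented for illustration; nothing here reflects real test results.

```python
# Hypothetical record of repeated test campaigns:
# (month, messages sent, employees who responded to the lure)
campaigns = [
    ("2015-05", 40, 12),
    ("2015-06", 40, 9),
    ("2015-07", 40, 5),
]

def response_rate(sent, responded):
    """Fraction of tested employees who fell for the simulated message."""
    return responded / sent

for month, sent, responded in campaigns:
    print(f"{month}: {response_rate(sent, responded):.0%} responded")
```

A falling rate across campaigns would suggest training or awareness is taking hold; a spike might correlate with deadlines or staff turnover, which is the kind of pattern Bell suggests continual testing could surface.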
However, Ava is still a work in progress. Bell has tested the software with a few small public- and private-sector organizations in New Zealand, and the team working on the software has grown. Now a newly formed ethics and privacy board is considering the legal and privacy issues that surround intentionally tricking people.