Look out, human hackers. Pentagon research agency DARPA says people are too slow at finding and fixing security bugs and wants to see smart software take over the task.
The agency released details today of a contest that will put that idea to the test at the annual DEF CON hacking conference in Las Vegas next month. Seven teams from academia and industry will pit high-powered computers provided by the agency against one another. Each team’s system must run a suite of software developed by DARPA for the event. Contestants win points by finding and triggering bugs in software run by competitors while defending their own software.
Mike Walker, the DARPA program manager leading the Cyber Grand Challenge project, claims the approach could make the world safer.
“The comprehension and reaction to unknown flaws is entirely manual today,” he said in a briefing Wednesday. “We want to build autonomous systems that can arrive at their own insights about flaws [and] make their own decisions about when to release a patch.”
When malicious hackers find a new flaw in a piece of commonly used software, they can typically exploit it for a year before it is fixed, Walker said. “We want to bring that response down to minutes or seconds. Hopefully we ignite a revolution where we eventually have a machine that can compete with top experts.”
The seven competing teams were selected last summer after a simpler, preliminary contest. Each team was given $750,000 and access to a high-performance computer with 1,000 processor cores and 16 terabytes of memory.
In next month’s final contest, teams must sit back and watch as the software they have developed competes against that of the other contestants without any human intervention. The winning team will take home $2 million and be invited to compete against human hackers in DEF CON’s annual capture-the-flag contest.
Walker doesn’t expect the automated hacker to do very well against humans, but the software doesn’t have to be able to hold its own in a matchup with elite hackers to be useful. Anything that helps the U.S. military find flaws in its software faster would benefit national security, he said.
He played down suggestions that technology developed for the Cyber Grand Challenge could be used maliciously in the real world. Not only is it unclear whether techniques developed for the contest would work on real software, but DARPA is committed to encouraging wide use of such software, said Walker. Teams are required to release all their code as open source.
“If technology is democratized, then we don’t believe that nefarious misuse will be feasible, because the bugs that will be found will already have been patched,” he said.