It might be the least spectacular show to ever grace a Las Vegas stage.
Several hundred people packed into a casino ballroom Thursday to see seven powerful, wardrobe-sized computers lit up with blinking LEDs. Invisibly, each machine spent hours trying to attack software running on the other computers on stage, while also defending itself against incoming attacks.
The abstruse contest and its $2 million top prize could lead to the Internet and computers in general becoming much more secure. The Cyber Grand Challenge was staged at the DEF CON hacking conference by the Pentagon’s Defense Advanced Research Projects Agency to spur the invention of software able to automatically spot, test, and fix security flaws. That could revolutionize the fight against problems like criminals exploiting security weaknesses to steal millions of consumer records each year.
The crowd at the Paris casino cheered as software built by security company ForAllSecure was declared the provisional winner late on Thursday. Final confirmation of the result will come Friday morning after DARPA has analyzed data logs from the contest.
David Brumley, the Carnegie Mellon University professor who cofounded ForAllSecure, said that he believed the result proved it was feasible for software to autonomously solve some security problems. “We look at this as the first step,” he said.
ForAllSecure and the other six teams each had to develop a “bot” to run on a high-powered, water-cooled computer and look after a collection of almost one hundred programs created for the contest. Those programs were loosely modeled on packages that might be found on a Web server and intentionally designed with security flaws.
Each team’s bot earned points for fixing flaws in programs in its care, keeping them running, and probing the programs of other teams to identify unfixed vulnerabilities. Bots didn’t get to see the programs they had to look after before the contest.
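The incentives described above can be sketched in a few lines. This is a toy illustration of the trade-offs, not DARPA’s actual scoring formula, and the function name and weights are invented for the example:

```python
# Toy sketch of the contest's incentives (NOT DARPA's real formula):
# a bot is rewarded for keeping its programs up and functional, for
# patching flaws in its own programs, and for proving flaws in rivals'.

def round_score(availability: float, flaws_patched: int,
                rival_flaws_proven: int) -> float:
    # availability: fraction of the round the bot's programs stayed
    # running and responsive. A heavy-handed patch that slows a program
    # down drags this factor toward zero, so fixes must be cheap as
    # well as correct.
    defense = 1 + flaws_patched
    offense = 1 + rival_flaws_proven
    return availability * defense * offense

# A bot that patches aggressively but crashes its services scores
# worse than one that patches less and keeps everything running:
round_score(0.2, 5, 0)  # 1.2
round_score(1.0, 2, 0)  # 3.0
```

The multiplicative form captures why, as described below, an overly complex patch can backfire: it taxes availability across the board.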
Many security companies, researchers, and criminal hackers use tools that automate the process of analyzing software for potential vulnerabilities. But the work of crafting and deploying patches for security flaws still falls solely on humans, said Mike Walker, the DARPA program manager who led work on the contest. “The reason it’s a ‘grand challenge’ is that none of those capabilities has been automated,” he said.
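The already-automated half of that pipeline is typified by fuzzing: throwing randomized inputs at a program until it misbehaves. The sketch below shows the idea in miniature; `parse_record` is a hypothetical buggy target invented for the example, not one of the contest programs:

```python
import random

def parse_record(data: bytes) -> int:
    # Hypothetical target with a planted bug: inputs whose first byte
    # is 0xFF trigger a crash, standing in for a memory-safety flaw.
    if data and data[0] == 0xFF:
        raise IndexError("out-of-bounds read")
    return len(data)

def fuzz(target, trials: int = 10_000, seed: int = 0):
    # Feed the target short random byte strings and record any input
    # that makes it crash -- each crash is a candidate vulnerability.
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

crashes = fuzz(parse_record)
```

Finding the crashing inputs is the easy, mechanized part; turning each crash into a tested, deployed patch is the step the Cyber Grand Challenge set out to automate.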
Walker acknowledged that some of the automated capabilities on display could eventually be used to attack computer networks as well as defend them. But he claimed that because DARPA held the contest in the open, and is publishing all the data from it, the technology will tilt toward offering protection. “All computer security tools are dual use,” he said. “The difference between offensive and defensive use is often openness.”
Peter Lee, who leads Microsoft’s research efforts and attended Thursday’s event, said that being able to automatically generate fixes for security flaws would help software companies make their products much safer. “It would be a very big deal for something like Windows or Office,” he said.
Tim Bryant, from defense contractor Raytheon, said he expected to put technology developed for its Rubeus bot to work with customers of the company’s security business. Complex systems such as power networks, which rely on many kinds of hardware, software, and computers, could benefit from better ways to spot and remediate vulnerabilities, he said. “I think we’ll be talking to critical infrastructure companies, this could really help them,” said Bryant.
Six of the dozens of flaws the bots had to deal with were based on real vulnerabilities that have caused havoc online, which may also point to the technology’s practical potential.
The teams collectively fixed five of the classic flaws, which included the bug exploited by the Morris worm that crippled the Internet in 1988 and the Heartbleed bug, revealed in 2014, which could break the encryption protecting online transactions. The Jima bot, from the University of Idaho, even found and fixed a flaw that hadn’t been intentionally included in the contest programs but had occurred by accident.
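Heartbleed gives a feel for what the bots had to repair. The flaw let a client claim its message was longer than it really was, tricking the server into echoing back adjacent memory. The sketch below is a simplified Python analogue of that C bug and of the bounds check a repair bot must synthesize; the function names and parameters are invented for illustration:

```python
def heartbeat_unpatched(payload: bytes, claimed_len: int,
                        memory: bytes) -> bytes:
    # BUG (Heartbleed-style): the server trusts claimed_len, so a short
    # payload with a large claimed length leaks adjacent memory.
    return (payload + memory)[:claimed_len]

def heartbeat_patched(payload: bytes, claimed_len: int,
                      memory: bytes) -> bytes:
    # FIX: the kind of bounds check a repair bot must synthesize --
    # never echo back more bytes than were actually received.
    if claimed_len > len(payload):
        raise ValueError("heartbeat length exceeds payload")
    return payload[:claimed_len]

# A 2-byte payload claiming to be 10 bytes long leaks server memory:
heartbeat_unpatched(b"hi", 10, b"SECRETDATA")  # b'hiSECRETDA'
```

The patched version rejects the malformed request outright, which is why the contest also graded bots on keeping programs functional: a fix that broke legitimate heartbeats would cost points.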
However, the contest also showed some limitations of security bots. The winner, Mayhem, experienced a crash that rendered it unable to generate new patches or probe other teams for a time. One fix deployed by Raytheon’s Rubeus bot worked but was too complex, and sucked away computing power from other programs running on the same computer.
The winning bot will compete again on Friday in DEF CON’s annual competition for human hackers, known as a Capture the Flag, or CTF. A hacker known as Gynophage, one of the CTF organizers, said that from what he had seen of the bots, the winner shouldn’t be a pushover. “It could absolutely be in the top five, but ultimately we think it will be overcome by the adaptability of humans,” he said.
Walker of DARPA believes the bot has a chance of leading for a short time at the start of Friday's contest, since software can act faster than a person. Longer term, he envisages bots that fix and protect software working alongside humans, not replacing them altogether.
“We see the future of network defense more as a partnership,” said Walker. DARPA is considering holding a second hacking contest in which people and bots work together.