What Has Technology Fixed Since 9/11?
Not much, as it turns out. And it’s helpless against the widespread threat of bureaucracy.
As we hit the 10-year anniversary of the September 11 attacks, we’re confronted with a sobering “report card” from the heads of the original September 11 Commission. In spite of a doubling of the intelligence budget since 2001 to $80 billion, the creation or reorganization of some 263 government organizations, and the formation of the $50 billion Department of Homeland Security, the government has largely fallen short, the new report notes. The report states that while some progress has been made, “some major September 11 Commission recommendations remain unfulfilled, leaving the U.S. not as safe as we could or should be.”
Technology has, in some ways, been a particular disappointment with regard to security, in that no real breakthrough technologies spurred by massive government R&D investments have emerged. “We’ve mostly seen the deployment of off-the-shelf equipment,” says James Jay Carafano, director of the Center for Foreign Policy Studies at the Heritage Foundation. “Improved intelligence and counterterrorism investigation have been more important than new technologies.”
The ability to detect explosives and weapons at airports is one area the report singles out as having moved too slowly. It points out that the technology still “lacks reliability and lags in its capability to automatically identify concealed weapons and explosives.” The situation has been mitigated to some extent by the deployment of about 500 full-body scanners at some 78 airports in the U.S., with another 275 slated to come online within a year. But new standards for the sensitivity of these and other explosive-detection devices set in 2010 still haven’t been met, and many airport systems still operate at decade-old performance levels.
What’s more, the new scanners created a sharp privacy debate over the way they reveal body parts, while the so-called “puffer machines”—used to sense traces of explosive chemicals—proved embarrassingly ineffective because they were easily gummed up by contaminants in the environment.
Those sorts of controversies and failures, notes Rob Strayer, director of the National Security Preparedness Group at the Bipartisan Policy Center, could make it harder to fund and deploy a new generation of systems. What’s more, says Strayer, the situation was avoidable: the defense community failed to recognize that, unlike technology supplied to the armed forces, technology that directly affects the public shouldn’t be rolled out until it has been proven effective and tolerable. “The Department of Homeland Security is too quick to deploy devices that haven’t been fully field-tested,” he says. “They’re under a lot of pressure to solve these problems quickly.”
Results seem to be better in the area of “data mining,” or sifting through vast databases of information to pull out insights about who might be planning an attack. “There’s been a lot of progress over the past 10 years in mining large volumes of social media and other data,” says Rohini Srihari, a computer science researcher at the State University of New York at Buffalo, and founder of database-mining-technology firm Janya, which has been involved in defense work. There are now supercomputers crunching away at data posted to Facebook, Twitter, and countless websites and blogs, in multiple languages, all to find links between people, places, and events that could represent security threats. Challenges still remain in content analysis, notes Srihari—that is, in understanding context, metaphors, local jargon, and other complexities of language that can trip up a computer. But headway is being made here, as it is in understanding the emotional content of language. Such an understanding can allow analysts to zero in on posts from people who may be expressing intense anger while discussing sensitive potential targets such as airports or other crowded public places.
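To make the content-analysis challenge concrete, here is a deliberately naive sketch of the kind of filtering Srihari describes: flagging posts that pair angry language with mentions of sensitive locations. The word lists and matching logic are invented for illustration; real systems rely on trained language models, entity linking, and far richer context, precisely because crude keyword matching trips over the metaphors and jargon the article mentions.

```python
# Toy illustration only: flag posts that combine angry language with
# sensitive-location keywords. Word lists here are invented; production
# systems use statistical language models, not hand-built sets.

ANGER_WORDS = {"furious", "hate", "destroy", "revenge"}
SENSITIVE_TERMS = {"airport", "subway", "stadium"}

def flag_post(text: str) -> bool:
    """Return True if the post contains both an anger word and a sensitive term."""
    words = set(text.lower().split())
    return bool(words & ANGER_WORDS) and bool(words & SENSITIVE_TERMS)

posts = [
    "I hate waiting at the airport",     # flagged, though clearly innocuous
    "great game at the stadium today",   # not flagged
]
flagged = [p for p in posts if flag_post(p)]
```

Note that the first post is ordinary traveler frustration, yet the sketch flags it anyway, which is exactly the context problem that keeps human analysts in the loop.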
The biggest challenge to data mining may be finding the data that isn’t publicly posted. “The sort of people who carry out a September 11 attack aren’t posting about it on Twitter,” says Srihari. What’s more, notes security researcher Richard Bloom at Embry-Riddle Aeronautical University, false positive and false negative results in anti-terror data mining can be highly damaging. “Even if we’re missing less than a percent, we’re letting bad guys get away,” he says. “And falsely tagging people as bad guys has huge implications for society.”
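The arithmetic behind Bloom’s worry is worth spelling out. In the back-of-the-envelope sketch below, every number is invented for illustration, but it shows how, when real threats are rare, even a tiny false-positive rate buries the genuine hits under a mountain of false alarms:

```python
# Base-rate illustration of the false-positive problem. All figures are
# hypothetical, chosen only to show the shape of the math.

population = 300_000_000      # people screened
true_threats = 3_000          # actual threats among them
sensitivity = 0.99            # fraction of real threats correctly flagged
false_positive_rate = 0.001   # fraction of innocents wrongly flagged (0.1%)

caught = true_threats * sensitivity                        # ~2,970
missed = true_threats - caught                             # ~30 slip through
false_alarms = (population - true_threats) * false_positive_rate  # ~300,000

# Precision: of everyone flagged, what fraction is an actual threat?
precision = caught / (caught + false_alarms)               # under 1%
```

With these assumed numbers, a system that catches 99 percent of real threats still lets about 30 through, while wrongly tagging roughly 300,000 innocent people, so fewer than one in a hundred of those flagged is an actual threat. That is the double bind Bloom describes.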
Smarter Video Surveillance
Another hope for technology is that of hooking up video cameras to computers capable of recognizing suspicious behavior in airports and elsewhere. But such systems, while under development, are still far too inaccurate to add much security, notes Bloom. He adds that the technology still takes a back seat to a far more costly but far more effective technique pioneered by the Israelis and now being adopted at U.S. airports: briefly but rigorously questioning passengers, with a skilled eye for behavioral anomalies that merit more intense scrutiny. “It’s not as simple as spotting people who seem stressed out,” says Bloom. “Most people who are stressed out aren’t terrorists.”
Our borders remain far too porous as well, notes the report. Networks of sensors deployed at borders and coastlines have so far fallen short of hopes for them, says Carafano. “We’re still working on getting sensors that are sensitive enough and on techniques for interpreting the data from them,” he notes. We remain stuck with a particular vulnerability to bioweapons, he adds, where technology has done much more to add to the threat than help us guard against it. “Every time we come up with a way to deliver a medicine more effectively, we’re creating the potential for engineering more deadly bioweapons,” he says. “We really need more cutting-edge biotech countermeasures.”
At least we can’t blame technology for the continued failure—noted in the report—to link up the communications networks used by different public-safety agencies in different localities. That’s strictly a planning problem, says Strayer. “It’s just a matter of getting agreements between senior public safety officials,” he says. “Until now, they’ve been putting in systems that tend to meet their own needs, instead of worrying about how to integrate them with everyone else’s.”
There’s no technological solution yet for bureaucracy and self-interest.