World Without Walls

When everything that can be recorded is recorded, our means of protecting privacy must fundamentally change.
October 25, 2011

People have long worried that technology is destroying privacy. Today, the lament focuses on Facebook; but as far back as 1890, Louis Brandeis, a future Supreme Court justice, and his associate Samuel Warren were decrying the unprecedented assault on privacy by the new media of their day: tabloid newspapers and cheap photography. The two Boston lawyers were defending what they called a “principle as old as the common law”; their article, “The Right to Privacy,” was probably the most influential law-review article ever written.1

But Brandeis and Warren had it backward. When they laid the foundations of modern privacy law, they were inventing something strikingly new: a generalized right “to be let alone” that was unmentioned in the Constitution or the Bill of Rights. In the 18th century, publicity was such an exception to the norm that privacy didn’t need to be named, much less legally protected. Along with the miles of fields and forest that kept neighbors out of one another’s business, a very simple artifact—walls—effectively frustrated snooping eyes. Keeping secrets was sufficiently easy that Americans felt they controlled the extent to which their activities were private. With the establishment of a national postal service in the late 18th century, that began to change. As an increasing amount of private information began to circulate routinely through a public infrastructure rather than being transmitted by private couriers, traditional privacy technologies began to fail. It was no longer enough to close letters with wax and an individualized seal, for example; once a letter or package was in the mail, a person could do almost nothing to ensure that no one but its intended recipient would open it. So in 1878 the Supreme Court ruled, for the first time, that the Fourth Amendment (which guards against unreasonable search and seizure) protects the contents of a letter as if it were still in the sender’s home, establishing a legal right to privacy in correspondence. When Brandeis and Warren proposed a new tort—the invasion of privacy—they were seeking to further extend legal rights to protect a person’s sense of uniqueness, independence, integrity, and dignity from the depredations of emerging technologies.2

We see this pattern repeating itself again and again, with technologies ranging from the postal service to e-mail. As old techniques for protecting proprietary information lag behind new technologies of information circulation, the law seeks to restore the status quo ante, making difficult and expensive what has become easy and cheap—in effect, seeking to do the work that was once done, silently, by walls, closed doors, or sealing wax.

This legal model is inadequate for an age of networked information. Brandeis and Warren were concerned with the kind of privacy that could be afforded by walls: even where no actual walls protected activities from being seen or heard, the idea of walls informed the legal concept of a reasonable expectation of privacy. It still does. Opening a closed or locked door, listening through a wall, breaking the seal of a letter, and tapping a private phone all require a warrant. Today police cannot use thermal imaging to penetrate the sanctity of the bedroom without a warrant, because the law protects the thing that’s notionally closed away, making opaque the barriers through which modern technology can see. But this is also the deficiency of contemporary privacy law: only information that is walled off is protected, and contemporary threats to privacy increasingly come from a kind of information flow for which the paradigm of walls is not merely insufficient but beside the point.

In the last 50 years, the sheer density of the information environment has reached and surpassed the point at which privacy might be maintained by walls. And a legal system built on a presumption of information scarcity has no chance of protecting privacy when personal information is ubiquitous. We shouldn’t worry about particular technologies of broadcast or snooping—for instance, the way Facebook trumpets our personal information or deep packet inspection allows governments to trawl through oceans of Internet data. The most important change is not the particular technologies but, rather, the increase in the number of pathways through which information flows.

In the 1980s, Roger Clarke coined the term “dataveillance” to describe the kind of surveillance that becomes possible as we move from a world in which personal information is rare and expensive to one overflowing with data. The idea of data mining, for example, makes no sense in an information-scarce environment: the proverbial tree that falls when no one is watching does not get recorded, and it never becomes data. Today, more and more personal information is recorded, and collecting, standardizing, processing, and monetizing vast pools of it has become big business. The private data broker Acxiom, for example, has an average of 1,500 items of data on the 96 percent of Americans currently in its databases.3 Along with networked computing, a decline in data-storage costs means it is easier to record everything that can be recorded—and indefinitely store and circulate everything—than to sort through it to determine what is and what isn’t worth keeping.4 The baseline of our information environment has become a tendency toward total availability and total recall.

When we talk about privacy and surveillance, it is impossible to avoid mentioning 1984, George Orwell’s dystopian account of a world without walls, where television watches you and microphones record every sound above a low whisper. But Orwell said nothing about dataveillance. And while the Fourth Amendment guarantees protection from the kinds of governmental invasions that tend to concern Americans the most (reading our mail, searching our homes), when we think about the problem this way we tend to overlook the kinds of knowledge discovery that don’t require anyone to break into anything.

Data mining as criminal investigation is a good example. Investigating a suspect’s known associates is an ancient tactic in policing, but it costs money, time, and effort, and it’s legally complicated; investigations tend to be constrained by a high threshold of initial suspicion. But as the amount of widely available data rises, another kind of search becomes possible. Instead of starting from a subject of suspicion and placing that person within a map tracing patterns of behavior and networks of associates, it becomes feasible to begin with the whole map and derive the subjects of suspicion from the patterns one finds. Pattern-based data mining, in other words, works in reverse from a subject-based search: instead of starting from known or strongly suspected criminal associations, the data miner attempts to divine individuals who match a data profile, drawing them out of a sea of dots like the pattern in a color-blindness test. Dataveillance draws powerful inferences about people and their associates from deep and rich—and often publicly available—records of otherwise routine behavior.5 Automated systems monitor the environment to match the profiles of particular users to pattern signatures associated with criminal behavior, using algorithms to track and analyze anomalies or deviations from what someone, somewhere, has deemed normal.
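The inversion described above can be made concrete with a toy sketch. Everything here—the dataset, the field names, the threshold—is invented for illustration; real pattern-based systems match far richer profiles, but the logic is the same: the subject of suspicion falls out of the data rather than preceding it.

```python
# Toy sketch of pattern-based data mining: rather than starting from a
# named suspect and gathering data about them, start from the whole
# dataset and flag whoever deviates from a statistical norm. All names
# and numbers are hypothetical.
from statistics import mean, stdev

# Hypothetical "transactional" records: person -> late-night calls
# logged in a month (the kind of attribute a pen register captures).
call_counts = {
    "alice": 3, "bob": 5, "carol": 4, "dave": 2,
    "erin": 41,  # the anomaly the pattern search will surface
    "frank": 6, "grace": 3,
}

def flag_anomalies(records, z_threshold=3.0):
    """Return the keys whose values deviate from the mean by more than
    z_threshold standard deviations -- the subject emerges from the
    data, instead of the data being gathered about a prior subject."""
    values = list(records.values())
    mu, sigma = mean(values), stdev(values)
    return [k for k, v in records.items() if abs(v - mu) / sigma > z_threshold]

print(flag_anomalies(call_counts, z_threshold=2.0))  # flags "erin"
```

The point of the sketch is that no one named erin was ever under suspicion; a profile of “normal” behavior did the selecting.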

Existing privacy protections are largely irrelevant to this kind of surveillance. The Fourth Amendment protects the content of a conversation—what is said—but not the record of who called whom, when, and for how long. Because pen registers and trap-and-trace devices (which record which numbers you dial, who calls you, and how long you talk) do not trigger the same Fourth Amendment privacy protections that a wiretap would, telecommunications companies are not barred from amassing and selling vast databases of call data. The Supreme Court has ruled that once information about call “attributes” passes into the possession of a third-party carrier like a telecommunications company, it effectively becomes the property of that third party, which may collect, store, and circulate it as it pleases.

Before it was easy to store, index, and access such information, the privacy implications were minimal. But as the government’s network analysts turn their attention to the databases of telecommunications information collected by companies like Acxiom, or simply provided by telecommunications companies themselves,6 the basic process of criminal investigation is being turned on its head. It becomes more practical—and legally less complicated—to fish in an ocean of easily available information about everybody than to target specific suspicious individuals.

Law enforcement has long been interested in “intelligence-led policing,”7 but pattern-based data mining and predictive network analysis gained considerable momentum after the 9/11 Commission blamed intelligence-sharing breakdowns and a failure to “connect the dots” for the attacks of September 11, 2001. Intelligence agencies had been keeping secrets from each other. The more that was known about Osama bin Laden and the threat posed by al Qaeda, for instance, the higher the level of secrecy classification rose on that information, and this had the effect, ironically, of preventing it from being shared within the intelligence services.8 As the United States has reorganized, expanded, and integrated its intelligence, law-enforcement, counterterrorism, and homeland-security agencies in the last 10 years, these agencies have been tasked with predicting and preventing terrorism. And so, as federal and state agencies have begun to develop the tools and processes that would make predictive policing possible, they have sought not only to take advantage of ongoing trends in the broader information environment but also to shape and expand that environment.

The United States has not built a centralized spying agency. In 2002, Congress ordered the creation of an “effective all-source terrorism information fusion center” that would “implement and fully utilize data mining and other advanced analytical tools.”9 The same year, the Defense Advanced Research Projects Agency (DARPA) announced a “Total Information Awareness” research program that would develop new technologies for data collection, data mining, and privacy protection. The TIA program was to have been managed by a centralized agency called the Information Awareness Office, but a bipartisan backlash led to the IAO’s termination in 2003. Along with the scope of its ambition, its logo was a public-relations disaster: the all-seeing eye and pyramid from the dollar bill, along with the Latin phrase Scientia Est Potentia (“Knowledge Is Power”).

Yet the goals of TIA, and even some of its research projects, were not abandoned when the Information Awareness Office10 was shuttered. As Siobhan Gorman reported in the Wall Street Journal, many of TIA’s component parts were quietly reconstituted elsewhere in DARPA or in the secretive National Security Agency.11 Steve Aftergood of the Federation of American Scientists called the defunding and reassembly of some of TIA’s projects a “shell game,” while the American Civil Liberties Union has questioned whether the government was attempting to replace “an unpopular Big Brother initiative with a lot of Little Brothers.”12

At the same time, an initiative called the Information Sharing Environment (ISE) was undertaken to create a decentralized network for information aggregation and distribution. In contrast to the Information Awareness Office, the Information Sharing Environment is less an organization than an interagency “approach.”13 As the ISE’s most recent report to Congress put it, it is “analogous to the interstate highway system”: “The ISE represents the structure and ‘rules of the road’—including commonly understood road signs, traffic lights, and speed limits—that allow information traffic to move securely, smoothly, and predictably … If built properly, everyone can use the roads.”

The office of the program manager of the Information Sharing Environment is small and unimposing, with only about a dozen full-time employees; most of the work is done by private contractors, whose number is much larger. But as Kshemendra Paul, the current program manager, emphasized last year, the ISE mandate is not to “pour the concrete” itself, but to coördinate the “data-centric” infrastructure through which information is shared among mission partners, a category that includes federal, state, and local agencies, the private sector, and international allies.14

Most of what the Information Sharing Environment does is in the realm of standardization: it works to develop, coördinate, and expand across all levels of government the indexes and mechanisms of interoperability that allow agencies to share with one another. For example, the Nationwide Suspicious Activity Reporting Initiative expanded and standardized a project, originally developed by the Los Angeles Police Department, to report, tag, and circulate “observed behavior that may be indicative of intelligence gathering, or preoperational planning related to terrorism, criminal, or other illicit intention.” As a result, a Suspicious Activity Report filed by one agency can now be shared (and be mutually comprehensible and indexable) across all levels of government. This is an example of the “data-centric” approach, in which a variety of “Rosetta Stones,” as Kshemendra Paul calls interoperability standards, allow data to move as freely as possible among authorized agencies. At the same time, to ensure that information is shared only with those tasked to use it, a working group within the Information Sharing Environment has been developing a program called “Simplified Sign-On,” which would allow users of one sensitive law-enforcement database to access all others within a single government-wide system.

Within the Information Sharing Environment, a new kind of information-sharing site has evolved since 2006: the “fusion center.” A fusion center is a node of intelligence dissemination, where representatives of federal, state, and local government gather to share and “fuse” intelligence both with one another and with representatives of private companies and foreign governments.15 Each fusion center is different in structure and in scope. So far, there are 72 of them,16 the majority under the authority of state governments. The underlying concept is both expansive and simple: to facilitate information sharing both by “co-locating” representatives of partnering agencies under one roof and by using those partnerships to connect and circulate information of potential interest, such as Suspicious Activity Reports.

To accomplish their goals, fusion centers take what is called an “all crimes, all hazards” approach, which not only is “flexible enough for use in all emergencies” but is by design as open-ended as possible.17 As John Pistole, a former deputy director of the Federal Bureau of Investigation, put it, “We never know when something that seems typical may be connected to something treacherous.” So fusion centers put a premium on total data sharing, a mind-set in which more is always better.18 Rather than sorting, processing, and classifying information, they connect existing repositories of information almost indiscriminately and with a single-minded institutional logic. Meanwhile, terms like “threat” and “information” come to be defined more and more loosely. As a 2007 Manhattan Institute white paper noted, “Consistent with the all-hazards approach of many fusion centers, the term ‘threat’ indicates any natural or man-made occurrence that has the possibility of negatively affecting the citizenry, property, or government functions of a given jurisdiction.”19

The 2010 Fusion Center Privacy Policy Template explicitly states that information is to be kept only for appropriate uses, but the systemic logic of fusion points in the opposite direction. Fusion centers are specifically designed, after all, to circumvent restrictions on information sharing—to replace a system of “need to know” with a system of “need to share,” as the 9/11 Commission put it. Moreover, since pattern-based network analysis starts from the big picture, it makes no sense to limit the available data to what is already deemed suspicious. Instead, fusion centers work to give all partnering agencies and entities something as close as possible to universal access to all information in the system. Anything that might prove relevant is to be made available to any who might use it. This means, in practice, that lines drawn to separate different areas of concern (and define areas of specific regulatory oversight) become blurred as a matter of institutional necessity.

In a 2011 article, Danielle Citron and Frank Pasquale argued that fusion centers pose a legal and privacy conundrum because they operate “at the seams of state and federal laws” for the sake of circumventing traditional accountability measures.20 An oversight apparatus that assumes there are solid walls between discrete agencies cannot regulate them. The Privacy Act of 1974, for example, specifically regulates what kinds of information federal agencies are allowed to keep. But state agencies like fusion centers—though they are staffed, funded, and directed by the federal government and fully integrated into it—operate outside that jurisdiction and circulate information that the FBI can therefore access without officially possessing.

This loophole is built into the network’s structural logic at every level. While law enforcement and intelligence agencies are required to keep personally identifiable data only in the demonstrated interest of criminal investigation, the Information Sharing Environment is designed to let agencies and agents access otherwise inaccessible information in databases held by third parties. Moreover, the list of information that can legitimately be accessed, and therefore circulated, is almost endless. As Robert O’Harrow Jr. has reported in the Washington Post, fusion centers collect traffic tickets, property records, identity-theft reports, driver’s license listings, immigration records, tax information, public-health data, criminal-justice sources, car-rental records, credit reports, postal and shipping records, utility bills, gaming data, insurance claims, data-broker dossiers, and more.21 The information age makes the amount of broadly accessible information incomprehensibly massive, and the Information Sharing Environment aims to circulate it within the government with as little friction as possible.

In addition, partnering with private industry has long been a priority. Information Sharing Environment guidelines specify that “critical infrastructure” is a prime terrorist target and mandate that because “the private sector owns and operates an estimated 85 percent of infrastructure and resources that are critical,” agencies should “develop effective and efficient information sharing partnerships with private sector entities.” In practice, this has meant incorporating representatives of the private sector into fusion-center operations at the highest level. In a case seen to be exemplary for other fusion centers, a Boeing analyst is employed full time at the Washington Joint Analytical Center, where Boeing trades its “mature intelligence capabilities” for “real-time access to information from the fusion center.”22 Another Information Sharing Environment partnership, an FBI initiative called InfraGard, is intended to facilitate the sharing of information and intelligence between private business and the intelligence community. According to its website, it now has more than 45,000 members, including representatives of 350 of the Fortune 500 companies.23 But there is almost no oversight. The Critical Infrastructure Protection Act of 2002 specifically exempted information provided by the private sector from the Freedom of Information Act disclosure requirements, and InfraGard’s website notes that all its communications with the FBI and Homeland Security fall under “trade secrets” exemptions.

Jay Stanley of the ACLU has argued that there should be no “ ‘business class’ in law enforcement” and that “handing out ‘goodies’ to corporations in return for folding them into its domestic surveillance machinery” carries risks.24 But can this kind of quid pro quo be avoided when the point of the network is to facilitate information flow? According to a Congressional Research Service report,25 fusion-center leaders often feel that expanding the mission beyond counterterrorism is the best way to get the private sector and local law enforcement to buy in. To acquire data that might prove useful in counterterrorism investigations, in other words, the mission of the Information Sharing Environment had to creep. But while it’s not difficult to speculate about what sorts of bargains must be struck between government agencies hungry for information and third parties who hold that information, the broad rubric of counterterrorism cloaks the details of fusion-center activities in secrecy even as the structural logic of the centers places them outside traditional modes of government oversight.

It is difficult to know what kinds of information circulate through the Information Sharing Environment, since most fusion centers do not own or store records but simply provide access that allows information to be disseminated among partners. When the New Mexico ACLU filed an open-records request on New Mexico’s All Source Intelligence Center, for example, the lack of a “material product” meant there were no records to open.26 But as Citron and Pasquale have pointed out, “a critical mass of abuses and failures at fusion centers over the past few years makes it impossible to accept [privacy] assurances at face value.”27

More disturbing, Bruce Fein, a former associate attorney general in the Reagan administration, has gone so far as to argue that fusion centers conceive the business of gathering and sharing intelligence as “synonymous with monitoring and disparaging political dissent and association protected by the First Amendment.”28 Several examples have been well publicized—for instance, a Virginia fusion center named historically black colleges as potential terrorist hubs29—and at least one fusion-center official has confirmed that the boundaries between what’s considered dissent and what’s considered terrorism are blurring. After police officers in Oakland, California, used wooden bullets, concussion grenades, and tear gas to break up an otherwise peaceful antiwar protest at the Port of Oakland in 2003, it was revealed that a California fusion center, the California Anti-Terrorism Information Center, had led the police to expect Black Bloc anarchists to make a violent effort to shut down the port. The spokesman for the center argued, “If you have a group protesting a war where the cause that’s being fought against is international terrorism, you have terrorism at the protest. You can almost argue that protest is a terrorist act.”30

Kim Taipale, a specialist in security technology, has argued that the defunding of the Information Awareness Office may prove in retrospect to have been a Pyrrhic victory for civil liberties, since the visibility of its ambitions and scale also made it accountable.31 Americans are bound to be suspicious of domestic spy agencies that pattern themselves after all-seeing eyes floating disembodied above pyramids, but the Information Awareness Office was comparatively open about its existence and purpose, and it strove to focus its mission in response to oversight concerns (changing the name of its program to Terrorist Information Awareness, for example). In contrast, the Information Sharing Environment has broadened the focus and name of its Common Terrorism Information Sharing Standards to Common Information Sharing Standards.

In one sense, the Information Sharing Environment is a medium tending toward unobstructed transmission; it is like an ocean that conducts whale songs for hundreds of miles. But in another sense, the ISE has created a very private pool of publicly circulating information. Simplified Sign-On, for example, gives those who qualify total access to “sensitive but unclassified” information—but it gives it only to them, and with only internal oversight on how that information is used. The problem is not simply that private information is now semi-public but that the information is invisible to anyone outside organizations that “need to share.”

Citron and Pasquale have suggested that if technology is part of the problem, it can also be part of the solution—that network accountability can render total information sharing harmless. Rather than futilely attempting to reinforce the walls that keep information private, publicly regulating how information is used can mitigate the trends that caused the problem in the first place. Immutable audit logs of fusion-center activity would not impede information sharing, but they would make it possible to oversee whom that information was shared with and what was done with it. In fact, it was John Poindexter, the director of the Total Information Awareness program, who first suggested this method of oversight, though even today many fusion centers have no audit trail at all.32 Standardization and interoperability might also provide means of regulating what kinds of data could be kept: the same technological standards that make information available to users can facilitate oversight of how they use it.
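A minimal sketch of what an immutable audit log could look like: a hash chain, where each entry’s digest commits to the entry before it, so that altering history after the fact is detectable. The record fields and function names here are invented for illustration, not drawn from any actual fusion-center system.

```python
# Sketch of a tamper-evident ("immutable") audit log. Each record's
# digest covers both the entry and the previous record's digest, so
# editing or deleting any past entry breaks verification from that
# point onward. Fields are hypothetical.
import hashlib
import json

def append(log, entry):
    """Append an entry, chaining its digest to the previous record."""
    prev = log[-1]["digest"] if log else "0" * 64
    payload = json.dumps(entry, sort_keys=True)  # canonical form
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"entry": entry, "digest": digest})

def verify(log):
    """Recompute the chain; any tampered record makes this False."""
    prev = "0" * 64
    for record in log:
        payload = json.dumps(record["entry"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != record["digest"]:
            return False
        prev = record["digest"]
    return True

log = []
append(log, {"who": "analyst17", "accessed": "suspicious-activity-report-442"})
append(log, {"who": "analyst09", "accessed": "call-records-batch-7"})
assert verify(log)
log[0]["entry"]["who"] = "someone-else"   # tampering with history...
assert not verify(log)                    # ...is detectable
```

Such a log does nothing to slow sharing down—appending is cheap—but it gives an overseer a record that the overseen cannot quietly rewrite, which is the property the audit-trail proposal turns on.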

What sustained the traditional idea of privacy was confidence that some information was private because it was never recorded. That expectation is outmoded. Today, everything may be recorded and then examined for meanings such as subversive intent. Because the era of information scarcity is over, so too is the particular sense of privacy that it sustained. But if we remember that privacy has always been an emerging right—always a declaration of what society found reasonable to expect and could legally enforce—then defending a new kind of privacy becomes a technical problem for technologists and jurists. We must first decide what we want.

Privacy has a surprising resilience: always being killed, it never quite dies. Contemporary information technologies are placing intolerable burdens upon the capacity of individuals and groups to seclude themselves. If privacy is to survive in a new era, we will need new countervailing technologies and new kinds of laws.

Aaron Bady teaches about privacy, publicity, and literature at the University of California, Berkeley, and writes about the same on his blog, Zunguzungu.

[1] Louis D. Brandeis and Samuel D. Warren, “The Right to Privacy,” Harvard Law Review 4, no. 5, December 15, 1890.

[2] Forty years later, as a Supreme Court justice, Brandeis argued in a famous dissent that privacy was implicitly protected by the Fourth and Fifth Amendments: “The makers of our Constitution understood the need to secure conditions favorable to the pursuit of happiness, and the protections guaranteed by this are much broader in scope, and include the right to life and an inviolate personality—the right to be left alone—the most comprehensive of rights and the right most valued by civilized men. The principle underlying the Fourth and Fifth Amendments is protection against invasions of the sanctities of a man’s home and privacies of life. This is a recognition of the significance of man’s spiritual nature, his feelings, and his intellect. Every violation of the right to privacy must be deemed a violation of the Fourth Amendment. Now, as time works, subtler and more far-reaching means of invading privacy will become available to the government. The progress of science in furnishing the government with the means of espionage is not likely to stop with wiretapping. Advances in the psychic and related sciences may bring means of exploring beliefs, thoughts and emotions. It does not matter if the target of government intrusion is a confirmed criminal. If the government becomes a lawbreaker, it breeds contempt for law. It is also immaterial where the physical connection of the wiretap takes place. No federal official is authorized to commit a crime on behalf of the government.” Olmstead v. U.S., 277 U.S. 438 (1928).

[3] Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You (Penguin 2011), 43.

[4] Patricia L. Bellia, “The Memory Gap in Surveillance Law,” Chicago Law Review 75, no. 1 (2008), describing trends that “make indefinite data retention feasible for businesses and individuals alike.”

[5] See, for example, Nimrod Kozlovski’s “Designing Accountable Online Policing” in Cybercrime: Digital Cops in a Networked Environment (New York University Press 2006), describing how “investigators increasingly focus on ‘noncontent’ data such as traffic data and automated system logs, enabling them to create maps of associations, and to visualize non-trivial connections among events.”

[6] Siobhan Gorman, “NSA’s Domestic Spying Grows as Agency Sweeps Up Data,” Wall Street Journal, March 10, 2008: “According to current and former intelligence officials, the spy agency now monitors huge volumes of records of domestic emails and Internet searches as well as bank transfers, credit-card transactions, travel and telephone records. The NSA receives this so-called ‘transactional’ data from other agencies or private companies, and its sophisticated software programs analyze the various transactions for suspicious patterns. Two former officials familiar with the data-sifting efforts said they work by starting with some sort of lead, like a phone number or Internet address. In partnership with the FBI, the systems then can track all domestic and foreign transactions of people associated with that item—and then the people who associated with them, and so on, casting a gradually wider net. An intelligence official described more of a rapid-response effect: If a person suspected of terrorist connections is believed to be in a U.S. city—for instance, Detroit, a community with a high concentration of Muslim Americans—the government’s spy systems may be directed to collect and analyze all electronic communications into and out of the city.”

[7] This term is so omnipresent that people rarely bother to even define it; for example, the National Criminal Intelligence Sharing Plan, which calls on local police to develop intelligence functions, uses it 30 times without ever saying what is meant.

[8] See, for example, Dana Priest and William Arkin’s description of this dynamic in Top Secret America (Little, Brown 2011). As The 9/11 Commission Report declared, “The concern about security vastly complicated information sharing. Information was compartmented in order to protect it against exposure to skilled and technologically sophisticated adversaries.”

[9] Senate Select Committee on Intelligence and House Permanent Select Committee on Intelligence, Joint Inquiry into Intelligence Community Activities Before and After the Terrorist Attacks of September 11, 2001, 107th Congress, 2nd sess., 2002, S. Rep. 107-351, H. Rep. 107-792.

[10] Operation TIPS (Terrorism Information and Prevention System) was a Justice Department plan to create “a nationwide program giving millions of American truckers, letter carriers, train conductors, ship captains, utility employees, and others a formal way to report suspicious terrorist activity.” Section 880 of the 2002 Homeland Security Act specifically prohibited its implementation.

[11] As Gorman reports (Wall Street Journal, March 10, 2008), “ ‘When it got taken apart, it didn’t get thrown away,’ says a former top government official familiar with the TIA program. Two current officials also said the NSA’s current combination of programs now largely mirrors the former TIA project. But the NSA offers less privacy protection.” In 2004, the Associated Press reported that much of the same research—and many of the same researchers—from the TIA project were now working in the Advanced Research and Development Activity office, which has since been enfolded into the Intelligence Advanced Research Projects Activity, a research office under the authority of the Director of National Intelligence (Associated Press, “US Still Mining Terror Data,” February 23, 2004). In The Watchers: The Rise of America’s Surveillance State (Penguin 2011), Shane Harris reports that two of the most important components of the TIA program were moved to ARDA: the Information Awareness Prototype System (which was renamed Basketball) and Genoa II (which was renamed TopSail).

[12] Michael J. Sniffen, “Controversial Terror Research Ongoing,” Associated Press, February 24, 2004; “What Is the Matrix? ACLU Seeks Answers on New State-Run Surveillance Program,” ACLU press release, October 3, 2003.

[13] The word “approach” comes from the 2004 Intelligence Reform and Terrorism Prevention Act (which mandated the ISE’s creation) and is often cited, as in the 2008 GAO report on the ISE.

[14] Building Beyond the Foundation: Accelerating the Delivery of the Information Sharing Environment, as prepared for delivery by Kshemendra Paul, PM-ISE, October 5, 2010.

[15] Fusion Center Guidelines: Developing and Sharing Information and Intelligence in a New Era, U.S. Department of Justice and U.S. Department of Homeland Security, 2006.

[16] Janet Napolitano, Hearing on Understanding the Homeland Threat Landscape—Considerations for the 112th Congress, February 9, 2011.

[17] Todd Masse, Siobhan O’Neil, and John Rollins, Fusion Centers: Issues and Options for Congress, Congressional Research Service report for Congress, July 6, 2007.

[18] John S. Pistole, remarks at National Fusion Center conference, March 7, 2007.

[19] John Rollins and Timothy Connors, State Fusion Center Processes and Procedures: Best Practices and Recommendations, Policing Terrorism Report 2, September 2007.

[20] Danielle Citron and Frank Pasquale, “Network Accountability for the Domestic Intelligence Apparatus,” Hastings Law Journal 62 (2011): 1441–1493.

[21] Robert O’Harrow Jr., “Centers Tap Into Personal Databases,” Washington Post, April 2, 2008. O’Harrow quoted Major Steven G. O’Donnell, the deputy superintendent of the Rhode Island State Police, as saying, “There is never ever enough information when it comes to terrorism … That’s what post-9/11 is about.”

[22] Alice Lipowicz, “Boeing to Staff FBI Fusion Center,” Washington Technology, June 1, 2007.

[23] http://www.infragard.net/

[24] Matthew Rothschild, “The FBI Deputizes Business,” The Progressive, March 2008.

[25] Todd Masse, Siobhan O’Neil, and John Rollins, Fusion Centers: Issues and Options for Congress, Congressional Research Service report for Congress, July 6, 2007.

[26] Hilary Hylton, “Fusion Centers: Giving Cops Too Much Information?” Time, March 9, 2009.

[27] For example, William E. Gibson, “US Accused of Spying on Those Who Disagree with Bush Policies,” South Florida Sun-Sentinel, January 20, 2006; Matthew Rothschild, “Rumsfeld Spies on Quakers and Grannies,” The Progressive online, December 17, 2005; Douglas Birch, “NSA Used City Police to Track Peace Activists,” Baltimore Sun, January 13, 2006; “FBI Targets School of the Americas Watch Activists,” Truthout press release, May 9, 2006.

[28] The Future of Fusion Centers: Potential Promise and Dangers: Hearing before Subcommittee on Intelligence, Information Sharing, and Terrorism Risk Assessment of the House Committee on Homeland Security, 111th Cong., 1st sess., April 1, 2009.

[29] The Virginia Fusion Center’s 2009 Virginia Terrorism Threat Assessment asserted that Richmond’s “diversity” and the presence of historically black universities “contributes to the continued presence of race-based extremist groups,” stating as well that “University-based students groups are recognized as a radicalization node for almost every type of extremist group.” The document was leaked and first reported by Stephen Webster for The Raw Story (“Fusion Center Declares Nation’s Oldest Universities Possible Terror Threat,” April 6, 2009).

[30] Ian Hoffman, Sean Holstege, and Josh Richman, “State Monitored War Protesters,” Oakland Tribune, May 18, 2003.

[31] K. A. Taipale, “Data Mining and Domestic Security: Connecting the Dots to Make Sense of Data,” Columbia Science and Technology Law Review 2, December 2003.

[32] As Shane Harris reports in The Watchers (109), Poindexter “proposed an ‘immutable audit trail,’ a master record of every analyst who had used the TIA system, what data they’d touched, and what they’d done with it … to spot suspicious patterns of use … Poindexter wanted to use the TIA to watch the watchers.”
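
The “immutable audit trail” Poindexter proposed amounts to an append-only log in which each record of analyst activity is cryptographically chained to the one before it, so that any after-the-fact alteration becomes detectable. The sketch below is purely illustrative, not TIA’s actual design; the class, field names, and use of SHA-256 hash chaining are assumptions chosen to show the general idea.

```python
import hashlib
import json

class AuditTrail:
    """A minimal, hypothetical sketch of a tamper-evident audit log.

    Each entry records which analyst touched which data and what they
    did with it. Entries are hash-chained: every record's hash covers
    the previous record's hash, so editing any past entry invalidates
    the rest of the chain.
    """

    GENESIS = "0" * 64  # placeholder hash preceding the first entry

    def __init__(self):
        self.entries = []  # list of (record_json, chain_hash) pairs

    def append(self, analyst, data_touched, action):
        """Add a new immutable record of analyst activity."""
        prev_hash = self.entries[-1][1] if self.entries else self.GENESIS
        record = json.dumps(
            {"analyst": analyst, "data": data_touched, "action": action},
            sort_keys=True,
        )
        chain_hash = hashlib.sha256((prev_hash + record).encode()).hexdigest()
        self.entries.append((record, chain_hash))

    def verify(self):
        """Recompute the whole chain; False means an entry was altered."""
        prev_hash = self.GENESIS
        for record, stored_hash in self.entries:
            expected = hashlib.sha256((prev_hash + record).encode()).hexdigest()
            if expected != stored_hash:
                return False
            prev_hash = expected
        return True
```

An auditor could then run `verify()` over the log (or mine the entries for suspicious patterns of use) with some assurance that the watchers had not quietly rewritten their own history.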
