Beyond these programs, there are all the data-mining applications currently employed in the private sector for purposes like detecting credit card fraud or predicting health risks for insurance. All the information thus generated goes into databases that, given sufficient government motivation or merely the normal momentum of future history, may sooner or later be accessible to the authorities.
How should data-mining technologies like TIA be regulated in a democracy? It makes little sense to insist on rigid interpretations of FISA. This is not only because, when Congress passed the law 30 years ago, terrorist threats on al Qaeda’s scale did not yet exist and technology had not gone so far toward giving unprecedented destructive power to small groups and even individuals. It is also because today’s changed technological context invalidates FISA’s basic assumptions.
In an essay to be published next month in the New York University Review of Law and Security, titled “Whispering Wires and Warrantless Wiretaps: Data Mining and Foreign Intelligence Surveillance,” K. Taipale, executive director of the Center for Advanced Studies in Science and Technology Policy, points out that in 1978, when FISA was drafted, it made sense to speak exclusively about intercepting a targeted communication, where there were usually two known ends and a dedicated communication channel that could be wiretapped.
With today’s networks, however, data and increasingly voice communications are broken into discrete packets. Intercepting such communications requires that filters be deployed at various communication nodes to scan all passing traffic with the hope of finding and extracting the packets of interest and reassembling them. Thus, even targeting a specific message from a known sender today generally requires scanning and filtering the entire communication flow in which it’s embedded. Given that situation, FISA is clearly inadequate because, Taipale argues, were it to be “applied strictly according to its terms prior to any ‘electronic surveillance’ of foreign communication flows passing through the U.S. or where there is a substantial likelihood of intercepting U.S. persons, then no automated monitoring of any kind could occur.”
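The point about packet-switched networks can be sketched in code. This is a deliberately simplified, hypothetical illustration (the flow identifiers and packet structure are invented), but it shows why even a narrowly targeted interception must scan every passing packet:

```python
# Hypothetical, simplified sketch: extracting one targeted message
# from a packet-switched flow requires inspecting ALL passing traffic.
from dataclasses import dataclass

@dataclass
class Packet:
    flow_id: str   # which communication this packet belongs to
    seq: int       # position within the reassembled message
    payload: str

def intercept(traffic, target_flow):
    """Scan every packet, keep only those in the target flow,
    then reassemble the message in sequence order."""
    matched = [p for p in traffic if p.flow_id == target_flow]
    return "".join(p.payload for p in sorted(matched, key=lambda p: p.seq))

traffic = [
    Packet("flow-B", 0, "unrelated "),
    Packet("flow-A", 1, "world"),
    Packet("flow-B", 1, "chatter"),
    Packet("flow-A", 0, "hello "),
]
print(intercept(traffic, "flow-A"))  # hello world
```

Note that the filter necessarily touches the "flow-B" packets too, even though they are discarded; that incidental scanning of unrelated traffic is exactly what a strict reading of FISA would prohibit.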
Taipale proposes not that FISA should be discarded, but that it should be modified to allow for the electronic surveillance equivalent of a Terry stop – under U.S. law, the brief “stop and frisk” of a person by a law enforcement officer based on the legal standard of reasonable suspicion. In the context of automated data mining, this would mean that if, after further monitoring, suspicion turned out to be unjustified, the monitoring would be discontinued. If, on the other hand, continued suspicion was reasonable, monitoring would continue and at a certain point be escalated, with human agents called in to decide whether a suspicious individual’s identity should be determined and a FISA warrant sought.
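The decision logic of such an “electronic Terry stop” might be sketched as a simple state machine. The thresholds below are invented for illustration; Taipale’s essay describes the principle, not an implementation:

```python
# Hypothetical sketch of Terry-stop-style automated monitoring:
# monitoring continues only while reasonable suspicion persists, and
# escalates to human review before any identity is determined.
# The threshold values are assumptions, purely illustrative.
DROP, CONTINUE, ESCALATE = "drop", "continue", "escalate_to_human_review"

def next_step(suspicion_score, rounds_monitored,
              reasonable=0.5, escalate_after=3):
    if suspicion_score < reasonable:
        return DROP        # suspicion unjustified: discontinue monitoring
    if rounds_monitored >= escalate_after:
        return ESCALATE    # humans decide on identity and a FISA warrant
    return CONTINUE        # keep automated, anonymous monitoring

print(next_step(0.2, 1))   # drop
print(next_step(0.8, 1))   # continue
print(next_step(0.8, 4))   # escalate_to_human_review
```

The key property is that no path leads directly from automated scanning to identification: the only exit besides dropping the case is a handoff to human judgment.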
To attempt to maintain FISA and the rest of our current laws about privacy without modifications to address today’s changed technological context, Taipale insists, amounts to a kind of absolutism that is ultimately self-defeating. For example, one of the technologies in the original TIA project, the Genisys Privacy Protection program, was intended to enable greater access to data for security purposes while simultaneously protecting individuals’ privacy: analysts would work with anonymized transaction data, and identity would be exposed only if evidence and appropriate authorization were obtained for further investigation. Ironically, Genisys was the one technology that definitely had its funding terminated and was not continued by another government agency after the public outcry over TIA.
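The Genisys idea of separating analysis from identification can be illustrated with a toy identity-escrow sketch. This is my simplified reading of the concept, not the actual DARPA design; the class and its methods are hypothetical:

```python
# Toy illustration of identity escrow (not the real Genisys design):
# analysts see only opaque pseudonyms; the pseudonym-to-identity
# mapping sits behind an explicit authorization check.
import hashlib

class IdentityEscrow:
    def __init__(self):
        self._mapping = {}  # pseudonym -> real identity, held separately

    def anonymize(self, name):
        """Return an opaque token analysts can correlate on."""
        token = hashlib.sha256(name.encode()).hexdigest()[:12]
        self._mapping[token] = name
        return token

    def reveal(self, token, authorized=False):
        """Expose identity only with appropriate authorization."""
        if not authorized:
            raise PermissionError("authorization required to unmask identity")
        return self._mapping[token]

escrow = IdentityEscrow()
record = {"who": escrow.anonymize("Alice Example"), "amount": 9500}
# Analysts mine records like this without ever seeing a name...
print(escrow.reveal(record["who"], authorized=True))  # Alice Example
```

The design choice matters: pattern analysis and unmasking are distinct operations with distinct gatekeepers, which is precisely the privacy safeguard that was lost when Genisys, alone among the TIA programs, was defunded.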
Home page image is available under GNU Free Documentation License 1.2. Caption: Original logo of the now-defunct Total Information Awareness Office, which drew much criticism for its “spooky” images.