In its legal fight against the FBI over iPhone security, Apple has made just about every argument it can credibly make. Two of them are particularly important. It has made a very good argument that the FBI cannot use an ancient federal statute called the All Writs Act to force it to create custom firmware to unlock the San Bernardino shooter’s iPhone. (In a separate case, a federal judge in New York agreed with Apple on Monday that the act could not be used to compel the company to unlock a drug trafficker's iPhone.) Apple has also argued that “the First Amendment prohibits the government from compelling Apple to make code.” The idea here is that computer code is a kind of speech, and that coercing Apple to create code would be forcing the company to produce speech, in violation of the First Amendment.
Apple's First Amendment argument is fascinating and seductive, particularly for those who sympathize with Apple’s stand against the feds. Apple has told the court that “under well-settled law, computer code is treated as speech within the meaning of the First Amendment.” Unfortunately, it’s wrong about that: the Supreme Court has never accepted that code is protected like speech. More important, the idea that Code = Speech is dangerous and must be rejected.
On its face, the idea that Code = Speech has a lot of appeal. Speech is made up of words, while code at bottom is made up of numbers, but they are both communications of a sort, even if code can often be understood only by computers. First Amendment law has embraced arguments of this sort in the past. Some lower courts have even suggested that Code = Speech. More successfully, critics of campaign finance regulations have argued that “money is speech,” a strategy that resulted in the much-criticized Citizens United ruling in 2010. The Supreme Court’s equation of money with speech has made regulation of money-driven political corruption very difficult. If courts were to accept the simple proposition that “Code = Speech,” regulation of our digital society would become very difficult as well, because so much of our society depends on computer code to function. Rather than accepting these simple but dangerous shortcuts, in a democracy, we need to make decisions about our information and technology policy through the political process rather than through constitutional litigation.
The problem with both “Money = Speech” and “Code = Speech” is that they address the wrong question. It’s common for people, and even courts, to ask whether a particular human activity (flag burning? nude dancing?) is “speech,” as if the determination that something is “speech” is the real question. But it’s not. The right question to ask under the law is whether the government is regulating something within the category of expression protected by the First Amendment, which protects the “Freedom of Speech, or of the Press.” That’s a wordy thing to say, which is why a lot of First Amendment lawyers (including some members of the Supreme Court) condense it down to “is X speech?”
But that is a mistake for several reasons. First, the First Amendment doesn’t actually give us the right to speak freely. Instead, it stops the government from regulating in ways that violate our freedom of speech. That’s an important distinction, because it focuses our attention on what the government is actually doing (censoring newspapers? banning pornography?) rather than on the importance (or metaphysical “speechiness”) of a human activity.
Second, asking about “speechiness”—the central question of asking whether code is speech—makes little sense. Under the law, the “speechiness” of an activity bears only a tenuous relationship to whether the First Amendment actually protects it. Of course, the First Amendment protects lots of things that are made up of words (e.g., singing, newspapers, books, and e-mails), and doesn’t protect lots of things that aren’t made up of words (e.g., speeding and murder). But lots of things that aren’t composed of words are also protected by the First Amendment against government regulation (e.g., ballet, nude dancing, photography). At the same time, there are lots of things we do with words that don’t receive protection (e.g., insider trading, asking someone to murder your spouse, sexual harassment). What matters, in the end, isn’t the metaphysics of “speechiness,” but whether a government regulation of an activity threatens the traditional values of free expression—political dissent, art, philosophy, and the practices of self-government.
Where does this leave us, then, when we’re considering the regulation of code by the government? The right question to ask is whether the government’s regulation of a particular kind of code (just like regulations of spending, or speaking, or writing) threatens the values of free expression. Some regulations of code will undoubtedly implicate the First Amendment. Regulations of the expressive outputs of code, like the content of websites or video games, have already been recognized by the Supreme Court as justifying full First Amendment treatment. It’s also important to recognize that as we do more and more things with code, there will be more ways that the government can threaten dissent, art, self-government, and the pursuit of knowledge.
But on the other hand, and critically, there are many things that humans will do with code that will have nothing to do with the First Amendment (e.g., launching denial of service attacks and writing computer viruses). Code = Speech is a fallacy because it would needlessly treat writing the code for a malicious virus as equivalent to writing an editorial in the New York Times. Similarly, if companies use algorithms to discriminate on the basis of race or sex, wrapping those algorithms with the same constitutional protection we give to political novels would needlessly complicate civil rights law in the digital age. It’s easy to argue that Code = Speech, but accepting that argument would create a mess, and an avoidable one at that. It’s harder to look at what the government is trying to do, and harder to figure out whether this is in conflict with the values the First Amendment protects, but that’s the way the law works. The hard way is also far preferable to giving tech companies whose businesses run on code a free pass from the kinds of meaningful regulation we’ve imposed upon other companies since the New Deal.
To be fair to Apple, there is one narrow way in which the order sought by the FBI could be seen as a violation of the First Amendment. iPhone technology is complicated, but if the FBI is seeking to have Apple trick the iPhone into accepting a software update by falsely promising that the software is legitimate, the situation might change a little. From this perspective, Apple would be forced to lie to the phone (and by extension its user). This compelled lying would occur notwithstanding the relationship of trust between Apple and its customers on which the security of our digital age depends. From this narrow perspective, making Apple write and disseminate this particular code—making it lie to one of its customers—could be seen as a kind of compelled speech, and that could violate the First Amendment. But this wouldn’t be the case merely because (as Apple assumes) Code = Speech and the FBI was compelling the creation of code. Instead, it would be offensive to the First Amendment because the particular act being compelled—authenticating a security update as true when it was false—would be compelled false communication in a relationship of trust. The law on this narrow question is underdeveloped, but it could allow Apple to win on free speech grounds.
However, Apple doesn’t make this narrower argument clearly in its brief. Nor does it need to. Apple’s arguments under the All Writs Act are strong, and the court should accept them. In so doing, it would avoid the seductive trap of the Code = Speech fallacy. The digital age requires us to have the flexibility to regulate code, just as we have long needed the flexibility to regulate workplace safety and discrimination. The decisions about what kinds of regulation are appropriate (including what kinds of technical assistance companies can be forced to supply under court orders) should be questions we put to the political process. The future of our democracy is not threatened by these kinds of policy debates. But if we were to accept the fantasy that Code = Speech, we would be putting our ability to regulate our fast-changing digital society in peril.
Neil Richards, a law professor at Washington University in St. Louis, is the author of Intellectual Privacy: Rethinking Civil Liberties in the Digital Age.