I have a troll. Writing as @zdzisiekm, or “Gus,” or under other names, he has commented on stories on TechnologyReview.com 6,386 times and counting as of April 2017. As trolls go, he is unfailingly polite, and he doesn’t violate our site’s terms of service. Instead, he is reflexively, tendentiously wrong about a single topic, again and again. Gus is angry about our reporting on global warming and renewable energy technologies. His objections are notionally scientific, but they have a strongly ideological flavor.
Four years ago, commenting on “Climate Change: The Moral Choices,” @zdzisiekm characteristically wrote, “Having studied the relevant science literature quite extensively and in depth—I read hundreds of papers on the subject—there is no real ‘climate change threat.’ It’s all trumped up—the actual published peer-reviewed science is clear on this … This is because in some countries [economists] are so keen to switch the economy away from fossil fuels, they’ll go with any lie …”
Over our long association, Gus hasn’t changed. Last January, after reading “What’s at Stake as Trump Takes Aim at Clean Energy Research,” he remarked, “None of the solutions fostered by the American Left a.k.a. the Democrats are affordable, safe, or … reliable. Adding intermittent energy sources to the grid has one effect only: it increases the cost of energy … As to safety, ask millions of bats and birds killed, blinded and fried in flight by windmills and solar installations. Ask people inconvenienced by the incessant annoying noise made by the windmills. Neither have these technologies created jobs … other than in China.”
It’s personal for @zdzisiekm; our interactions feel intimate and overheated. He has often denigrated my judgment and disparaged my qualifications. “This is really not your field,” he recently wrote me in an e-mail.
I know who Gus is, because I tracked him down. We ask readers to provide some personal information before they can comment, and he wasn’t hard to find. My troll is a sixtysomething technical advisor to the IT department of a large public university in the Midwest. He has not one but two PhDs—in electrical engineering and physics. He writes good research about computer architecture and bad poetry about cats. (I agreed not to use @zdzisiekm’s real name for this story. “I know you know who I am,” he said, “but I cherish my anonymity, and I don’t want people to throw bricks at my window or dent my car.”)
When I asked Gus why he wastes so much time and spirit commenting on our site, he replied, “It doesn’t take much of my time at all. I’ve got a personal database that I can quickly search for specific articles on various subjects of which I have, by now, tens of thousands.” This is true. Like many trolls, @zdzisiekm cuts and pastes the same memes into many comments. He especially likes a post that begins, “All global warming seen since 1880 has been less than the natural centennial global temperature variability,” followed by a cherry-picked list of papers from obscure journals with little or no peer review, meant to leave the impression that there is scientific debate about the causes of climate change.
Quizzed about his motives, Gus answered: “These are contentious and partisan issues. Let’s not kid ourselves that they are not. This is precisely why I would expect balance in reporting on these topics, especially of TR. I suggested in the past that it may be a good idea to publish opposite views, side by side, as WSJ does sometimes. If TR did so, why, there’d be less reason for me to comment. In contrast, TR has been rather biased in its climate and energy articles.” I tried to explain that we can’t publish the “opposite views”—that climate is not affected by industrial emissions, and that if global warming did turn out to be real, humans could effectively respond when it became a problem—because those views are not true. To no avail: @zdzisiekm is a hard scientist; I am an ignorant editor.
We receive comments similar in their scorn, if different in politics, from readers who believe we publish the “PROPAGANDA AND LIES” of Monsanto and other creators of genetically modified organisms, or who are convinced we suppress the truth about the “filthy and unsound practice of vaccination” and its links to autism. What all such readers share is a conspiracist point of view: they think the scientific or economic consensus is in some way a hoax; that journalists and academics are gatekeepers who enforce a dangerous orthodoxy, often for personal gain or party benefit; and that honest commenters must demonstrate that the Opposition cannot be silenced. Not all commenters on TechnologyReview.com are like this, but in recent years those who are have become more aggrieved, and they have discouraged other readers from commenting.
Our unhappy experiences with comments are common to most publishers. During the U.S. elections of 2016, when commenters were especially intemperate (whether sincerely, because they had been paid to post, or because they were not humans at all but bots), the problem grew acute. Comment sections are now the digital spaces publishers have ceded to trolls, cranks, and conspiracy theorists of all kinds. Why do commenters do it? In “Conspiracy Theories,” the legal scholars Cass Sunstein and Adrian Vermeule attribute conspiracist thinking to feelings of impotence: such theories are “especially likely to appeal to people who are cynical about politics, who have lower self-esteem, and who are generally defiant of authority.” Commenting makes them feel less powerless and irritable. But why should a publisher put up with Gus or any troll? Why indulge them? What’s in it for me?
Monomaniacal and grumpy impulses
While conceding that individual comments mostly have little value, defenders of commenting adduce three benefits to the activity. They argue that comments are the digital homologue of letters to the editor, and can be of intrinsic interest; that they are a way of “listening to your users,” providing vital feedback about what are, in the end, products; and that comments serve business interests by goosing various measurements of reader engagement such as time on site or return visits, which in turn improve the performance of ads or the likelihood of selling subscriptions or memberships. In reality, the fraction of publishers’ audiences who comment is so small and unrepresentative that only the first argument is valid, and then only on moderated sites with more or less knowledgeable readers, responding to quality journalism and information.
The reasons why publishers turn off comments are telling: since 2014, Vice, Recode, Reuters, Popular Science, The Week, Mic, The Verge, USA Today’s FTW, and many other sites have shuttered comments because they were too much work for little return. When NPR.org disabled commenting last August, the managing editor, Scott Montgomery, provided a quantitative rationale: “Far less than 1% of [a monthly audience of 25 to 35 million unique visitors] is commenting, and the number of regular comment participants is even smaller. Only 2,600 people have posted at least one comment in each of the last three months—0.003% of the 79.8 million NPR.org users who visited the site during that period.” The ratio of commenting to reading on TechnologyReview.com is similar to National Public Radio’s: in 2016, about 3,000 people commented on stories, out of 21,205,603 users of the site, making up just 0.014 percent of our total traffic. More anecdotally, those who did comment were more like @zdzisiekm than our larger audience: older, more monomaniacal, and grumpier.
In short, commenters aren’t representative, and they’re not numerous enough to meaningfully improve engagement. Worse, their comments demand constant pruning or deletion by dedicated staff or companies that specialize in beating back trolls, lest publishers acquiesce to nonsense or worse. Jonathan Smith, Vice.com’s editor in chief, was more blunt than the civic-minded Montgomery when he explained why he was done with comments: “We don’t have the time or desire to continue monitoring that crap moving forward.”
Those sites that remain committed to comments have generally followed a limited number of strategies. Smaller publishers that disabled commenting on their own sites are reconciled to the fact that discussions moved to Facebook, Twitter, and Instagram. Recode’s Kara Swisher said, “Things have changed; you have to change with them. Social media is just a better place to engage a smart audience that’s not trolling. We got into a lot of trouble in our comments on different stories—attacks on our writers, just stupid things; it wasn’t smart.” Comments in social media are sometimes more civil, because many people use their real identities, which discourages trollish impulses. Larger publishers that choose to preserve on-site comments, including the New York Times, the Guardian, and the Washington Post, often constrain the problem by limiting either the number of stories with comments, the amount of time readers have to comment, or both. For instance, only 10 percent of stories on NYTimes.com have comments, and commenting is typically closed after 24 hours. Limiting the number of comments makes it possible for moderators to approve, reject, promote, or demote the best or the worst.
Technologists, as they will, have offered technological solutions to the problem of comments. The Coral Project, a collaboration between the Washington Post, the New York Times, and the Mozilla and Knight Foundations, provides open-source tools for newsrooms that want to build better commenting systems, including “Ask” and “Talk” functions. Perspective, created by Google’s Counter-Abuse Technology Team and Jigsaw, a technology incubator at Alphabet that addresses challenges to free speech, uses machine learning to score how much any comment might tend to degrade or enhance a conversation. Civil Comments forces communities to rate a comment before it is posted. Finally, in an interesting experiment, NRKbeta, the technology site of the Norwegian public broadcaster, requires would-be commenters to prove they have understood a story by answering three multiple-choice questions before they can comment.
As for us, last year I grew so wildly dispirited at how MIT Technology Review’s stories had become part of America’s endless, arid culture wars (and so frustrated with @zdzisiekm and a half-dozen other commenters) that we disabled commenting for four months in order to reimagine how we could host more enlightened and enlightening conversations. We, too, accepted that the most active commentary on our stories now occurred in social media, but we felt there was still a role for on-site comments. (Indeed, the two platforms can cross-fertilize each other in fruitful ways.) We believed that good comments could adorn and improve our journalism. But we suffered no illusions that commenters were representative of our broader readership or that comments served any direct business purpose. Building on Disqus and the Ask function in the Coral Project, our new strategy borrows widely from the solutions described above, and it is still a work in progress.
We decided, in imitation of the New York Times, that readers would comment on only a few stories and then only for a while. Stories that might repay good commentary, such as our major features, essays, and reviews, would have comments, but those that might inflame partisan wrangling would not. We would choose to think of comments, whenever possible, as integral to the story: we wondered if we could construct whole stories around comments, or seed a conversation by inviting our smartest, most informed sources to comment. No one was doing this precisely, but some of the expert commentary at Ars Technica and The Information inspired us. We wanted readers to vote comments up and down, as readers once did in Gawker’s Kinja. We knew that writers, Web producers, and the social media and community editor would have to be heavily involved in curating the comments; like the Economist, we wouldn’t launch a thread and walk away.
Finally, and most controversially, we decided that we wouldn’t hesitate to censor comments or ban readers if they debased the site. That is, even if comments were politely expressed and relevant, and otherwise met our commenting guidelines, we felt we should be free to suppress their authors if they trolled us, posted bullshit, hijacked a thread, or contradicted known evidence. Screw @zdzisiekm and his gang, unless they behave as heirs to the tradition of civilized commentary. There is no inherent right to comment unless readers conform to various duties and responsibilities.
How is our commenting strategy working? Gus has responded well to the new regime, although his mind remains unchanged. He still comments nearly every day, but he says, “On my side, I’ve learned to comment with more precision and less, let’s say, personal involvement.” He argues less aggressively and more honestly, and he cuts and pastes less and links to defensible research more. Recently, he thanked me “for being reasonable about the whole business of commenting.” We even have a bet: “If global temps drop all by themselves by 2030,” he says, “you owe me a dinner at a restaurant of my choice; otherwise I owe you one.”
Readers aren’t universally happy, of course. When were they ever? Not long ago, responding to a story about an important project to create a “subcritical facility” to test small, transportable molten-salt-cooled nuclear reactors (see “MIT’s Nuclear Lab Has an Unusual Plan to Jump-Start Advanced-Reactor Research”), “breister,” one of @zdzisiekm’s online pals, wrote, “Ah finally an article which did not disable comments. Censorship at its finest, complements [sic] of TR and their policy of squelching dissenting views.”
You can’t please everyone.