Telling Time by the Second Hand
It was with a perverse sense of familiarity that I read in the daily papers in January that catastrophe was once again on the way. “If a huge asteroid crashes into the middle of the Atlantic Ocean,” wrote Pulitzer Prize-winning reporter John Noble Wilford in The New York Times, “say goodbye to Broadway, the beach house on Long Island and just about everything else on the East Coast as far inland as the foothills of the Appalachians.” Washington Post reporter Kathy Sawyer was in a similar funk: “While the economy is booming and there are no major wars,” she wrote, “scientists have come up with something to fill the worry gap: If a space rock three miles in diameter slams into the Atlantic Ocean, it would produce a towering, high-velocity wave that would swamp most of the upper East Coast.”
This “revelation” came from researchers at the Los Alamos National Laboratory, who apparently had too much supercomputer time on their hands (what with the downsizing of the nuclear weapons programs and the disappearance of Star Wars) and so had run simulations of a one-in-ten-million-years asteroid plunking into the Atlantic. The Los Alamos public relations people knew a good story when it fell from the heavens and had thrown a press conference to announce the results.
The ensuing articles were further work in a genre that can be called “Death from Above” stories, after a phrase made famous by the best-selling science writer Timothy Ferris in a New Yorker article of January 1997, that the editors, alas, saw fit to run as a tie-in to an upcoming television disaster movie of the same ilk. The gist of “Death from Above” stories is that asteroids and comets could do to us any day what they apparently did to the dinosaurs. While there are reputable scientists who confess to losing sleep over this, they will admit, if pressed, that in the recorded history of humanity no human being has ever been shown to have been killed by an incoming astronomical object.
“Death from Above” stories make good copy, but they are hardly science. On the contrary, they are pedagogical examples of the relationship between science and the press, which can be described as one of mutually exclusive philosophical challenges. Put simply, the role of the daily press is to report the news, which is by definition what’s new. The job of the newspaper science reporter is to write up the implications of the latest scientific paper (i.e., the one that officially comes out today), although only if the paper is sufficiently at odds with conventional wisdom or has some commentary on the human condition, which means sex, health, sports, aging, or money.
This, however, is fundamentally at odds with the nature of science, which is to establish what constitutes reliable knowledge, and does so in fits and starts (false starts, generally, since most new results are either wrong or meaningless). In August 1996, for instance, when NASA scientists announced that they had discovered signs of life in meteorites that had apparently come from Mars (the genre here might be dubbed “We Are Not Alone” stories), a good bookmaker would have put the odds of them being right at 1000-to-1 against, because there were so many ways the researchers could have misinterpreted their data, and so few ways (one) that they could have had what they said they had. Remarkable results, after all, demand remarkable evidence, and the NASA data were decidedly unremarkable. The science reporters covered the story with the complete lack of skepticism it demanded if the goal was to keep it on the front page for a few weeks. Within six months it was obvious even to employees of NASA that the apparent signs of life were most likely artifacts of the experimental technique.
John Ziman, a physicist and philosopher of science, has suggested that the front line of scientific research, where reporters make their livelihood, is simply not the place to find reliable knowledge. He describes it aptly as “the place where controversy, conjecture, contradiction, and confusion are rife.” He quantifies his point by suggesting that the physics in undergraduate textbooks is 90 percent true, while that in the primary research journals is 90 percent false.
I once did my own calculation to this effect: In 1987, back when high-energy physics was still a viable field of research, I wrote an article for Discover magazine in which I tabulated all the discoveries in the field that had made The New York Times over the preceding decade. In nine of 12 cases the researchers had “discovered” something that turned out not to exist. The three discoveries that panned out were predictions of a theory that had been repeatedly validated and was already sufficiently conventional to be called the Standard Model. Not surprisingly, the nine errors were the more interesting claims, which means the better stories, because they were all at odds with the Standard Model or extended it deep into the unknown.
In good science, error is simply part of the game. No progress is made without it. “Science thrives on errors, cutting them away one by one,” is how Carl Sagan put it. “False conclusions are drawn all the time, but they are drawn tentatively. Hypotheses are framed so they are capable of being disproved. A succession of alternative hypotheses is confronted by experiment and observation. Science gropes and staggers toward improved understanding.” Michael Ghiselin, a biologist and MacArthur Fellow, describes error as part of the overhead of doing research. “The best scientists,” he suggests in his 1989 book Intellectual Compromise: The Bottom Line, “can even be expected to make more mistakes than do the mediocre ones, for the best scientists do the most research. It is they who will work on the most difficult problems, and venture into the areas of greatest risk.”
The challenge for the science reporter is how to deal with the onslaught of fascinating (and quite likely erroneous) results. At times this chronic problem shows up in an acute episode like the infection known as cold fusion. In 1989, during the three months of hysteria surrounding the outbreak of cold fusion, a then-Washington Post science reporter described daily science reporting, especially during such periods of extreme activity, as akin to playing goalie in a hockey match. Pucks come whizzing at you fast and furious, he said, and most you block, but a few get by.
What is the solution? The science reporter can hedge his bets through the liberal use of caveats, but the editorial philosophy of daily newspapers works against caveats. When reporters add them to a story, editors are likely to move them to the end. Once at the end, the caveats can be easily cut when editors find themselves short on space.
Another way around the problem of sorting the seed from the husks is for the reporter simply to throw up his hands and say, “It’s not my job, man.” My favorite recent example of the lack of concern that some reporters attach to publishing bad science is that of the New York Times reporter who allegedly told a government expert on nuclear waste technology that his job as a reporter was not to decide what’s good science and what’s bad, but what’s a good story. (I say allegedly, because the Times reporter refused to speak on the record when asked to confirm or deny the remark.) He then went on to write a front-page Times article on a Los Alamos researcher who had concocted a theory that the proposed nuclear waste dump at Yucca Mountain might someday undergo a nuclear explosion. The buried radioactive waste would simply have to leach from its containers and form itself into a bomb with the help of natural forces. This required, in effect, nearly divine (or perhaps satanic) intervention. The Times reporter, however, did make sure that no pucks would slip into the net from behind by adding the requisite caveats and suggesting that even if the work was simply wrong (which it was) and could be debunked (which it would be), “the existence of so serious a dispute so late in the planning process [for the repository] might cripple the plan or even kill it.” It was the one irrefutable statement in the article.

The best way, however, for science reporters to deal with the problem of giving publicity to the erroneous is to rely on experts. As Ghiselin puts it: “In the popular press, we are always reading that ‘most scientists believe’ such and such. Who cares what most scientists believe? We want to know what the best ones believe, especially those in the best position to evaluate the topic at issue.” This last clause is a kicker.
Most science reporters have their share of reliable researchers whom they consider experts, but it’s unlikely that any one of these will be an expert in the precise discipline of the latest research. What’s more, the more spectacular the announcement, the more likely that a scientist’s expertise will become problematic. If the discovery is truly revolutionary (which is to say, paradigm-busting), then by definition any scientist on the “wrong” (conventional) side of the paradigm is likely to lack sufficient expertise to understand all the ways the reported work is likely to be wrong.
Consider the cold fusion episode. Within three weeks of the purported discovery of room-temperature nuclear fusion by researchers at the University of Utah, the pursuit had devolved into a nuclear version of the emperor’s new clothes. On one side were those scientists who believed Nobel laureate Luis Alvarez’s adage: “Only trust what you can prove.” They pointed out repeatedly that no reliable data existed to support the claim of cold fusion (let alone prove it) and that certain fundamental experimental procedures had been consistently ignored. The press treated these scientists as being firmly entrenched on the wrong side of the “new” paradigm. After all, most of them were nuclear physicists who had spent long years not discovering cold fusion; therefore they must be jealous. The rest of the skeptics were chemists, also tarred by their failure to discover cold fusion. That they did not embrace the new finding could only be because of hopeless self-interest.
Judgments like these render science reporting on most controversial subjects perilously close to anti-intellectualism. Science reporters tend to be fans of science who sincerely want to believe that there was once life on Mars, or that fusion power can be achieved in a glass of water. The experts have been trained to be critical, and they are easily seen as the arrogant eggheads we all disliked in junior high school. Non-experts quickly emerge to fill the vacuum, and they become invaluable resources to the reporter. Not only can you find a huge number of non-experts on any given subject, even a new one, but they are considerably more willing to give a bogus idea the benefit of the doubt, particularly if they stand to get funding to pursue research on the subject should funding agencies decide to go that route.
Although it would help if science reporters and their editors were more skeptical and relied more heavily on real experts, I’m not hopeful that the press/science paradox can be resolved. Indeed, because the press is primarily interested in the unconventional and the spectacular (“Man Bites Dog!”), it will always be easier to get press with bad science than with good. Bad science is inevitably more sensational than good science. Bad science has no boundaries: researchers can be sensationally wrong in an infinite variety of ways, whereas they can be right only in ways that are severely bounded by reality.
This is why even high-end journalism favors bad science: Bad science is the better story. So it is that a Princeton engineer who does ESP research gets five pages in The New York Times Magazine. A pair of Florida researchers who suggest that AIDS can be carried by insects, even though the disease doesn’t fulfill any of the requirements for a vector-borne disease, can get eight pages in The Atlantic Monthly. A theory that electromagnetic fields from power lines can cause cancer, even though the theory defies the known laws of physics and much of what we know about biology, can get 100 pages in The New Yorker. And these are the most literate publications in the country.
But if science writers can’t afford too much skepticism for fear of losing their jobs, readers, at least, can afford to be skeptical, and should be. As for me, I try to get through the morning papers by reminding myself of an old saying about the press. It goes something like this: “Trying to tell what’s going on in the world by reading the daily newspapers is like trying to tell what time it is by looking at the second hand of a clock.”