
The Future of Books

Jason Epstein was a publisher for more than 40 years. Now in retirement, he wants to replace Gutenberg with a digital press.
January 1, 2005

Jason Epstein worked in book publishing for more than 40 years. He was editorial director of Random House and founded Anchor Books, the New York Review of Books, the Library of America, and the Readers Catalog. Now in retirement, he wants to digitally reconstruct publishing, much as digitization is re-creating the music industry.

I became a publisher by accident. When I entered Columbia College in 1945, I was only 17, but I found myself surrounded by veterans in their 20s, some still in their flight jackets and peacoats, many of them married, some with infants. Most of them were in a hurry to find careers and get on with their lives. Some, however, were incipient scholars, and I was fascinated by their worldly talk of Marvell and Donne, Pascal and Voltaire, James and Proust, and Joyce and Eliot. Some of my elders became my friends. For four years we formed an intense coterie of which I was the chief beneficiary because I joined it knowing nothing and acquired from it the rudiments of an education. I had no thought of a job, much less a career in business, certainly not one in book publishing. Thus it did not occur to me that my friends, thanks to the GI Bill, belonged to a large, unprecedented, and undiscovered market for serious books – a new phenomenon in the cultural and commercial life of the United States.

In September 1950, after wasting a year in graduate school, I was on my own financially. For want of a better plan, and with only the vaguest idea of what a book publisher actually did (I had recently seen a film called The Scoundrel, starring Noël Coward, about the ruin of a glamorous but dissolute book publisher), I applied to Doubleday’s training program, which promised to indoctrinate prospective publishers by rotating them through various departments. Although Doubleday’s personnel manager insisted that I was unsuited for the program, Ken McCormick, the firm’s editor in chief, hired me nonetheless.

In those days, paperback publishing was an offshoot of magazine distribution. Every month a bundle of cheaply printed popular novels, each selling for 25 cents, was delivered along with that month’s magazines to drugstores and newsstands around the country. Last month’s unsold paperbacks were collected, pulped, and reborn as next month’s (hence “pulp fiction”). I had been at Doubleday for six months, long enough to grasp the essentials of the business, when I proposed to Ken a plan for another kind of paperback. One February afternoon, as we walked across Central Park, I asked, Why not sell paperbacks in bookstores instead of newsstands? We would publish the kinds of serious works that my friends and I had read at Columbia, but which were available only in hard covers and at prohibitive prices. These paperbacks could, I suggested, be slightly more expensive than mass-market pulps: we would break even when we had sold 20,000 copies instead of 100,000. Wouldn’t it make more sense to sell 20 copies of The Sound and the Fury at a dollar than one hardcover copy at ten dollars?

What I was proposing was a paperback program that would expand the market for publishers’ backlists – that is, books that sell year after year and in the aggregate contain nearly all that we think we know about ourselves and the world. In the 1950s, backlists were the life’s blood of publishing: backlisted titles had recouped their costs, and their sales provided book publishers with a steady stream of profit.

Ken agreed and suggested that I talk to people in production and sales and come up with a business plan. We decided to call the new series Anchor Books, after Doubleday’s Aldine colophon, with its frisky dolphin wrapped around a weighty anchor. We began by testing the market with 20,000 copies of 12 titles in sturdy paper bindings, priced between 65 cents and $1.25. The first list included Joseph Conrad, Edmund Wilson, D. H. Lawrence, André Gide, and Stendhal. Within a year or two, nearly every publisher in New York and Boston had a line of “quality paperbacks,” which bookstores were selling by the millions. The “paperback revolution,” as it would be called, had begun.

The essential factors in the success of this format (new to the United States, although European publishers had been publishing quality paperbacks for some time) were an audience for serious reading created by the GI Bill and the 3,000 to 4,000 independent booksellers who constituted the retail market for books. Many of these stores were hardly more than gift shops carrying greeting cards, regional titles, and a few bestsellers, but perhaps a thousand booksellers in cities and major suburbs maintained deep backlist inventories and catered to the eclectic interests of sophisticated readers who found their way to the low-rent neighborhoods where many of the shops were located. Our marketing strategy was simple. We put Anchor Books displays wherever we could, hoping that readers would find them and tell their friends. The books sold themselves.

Hankering for a revolution
It was all too good to last. At first I did not notice that the publishing business, along with much else in American life, was being reshaped by the great postwar demographic shift from city to suburb. As their customer base disappeared, so did hundreds of city bookstores with their thousands of backlist titles. Today, there aren’t 50 independent retailers in the United States that stock 100,000 titles or more. By the mid-1960s, the new retail market was based largely on suburban malls, where the dominant bookstore chains, paying the same rent as the shoe stores next door, could not afford to stock their costly shelves with slow-moving backlist inventories. Turnover was what mattered to these chain outlets. Heavily promoted books by television celebrities and by well-known writers of formulaic thrillers and romances were what the chains wanted. Very soon, thousands of backlist titles were going out of print every year.

The effect of these new marketing conditions was to turn the industry upside down. Where previously publishers had depended on their backlists, now most of them survived precariously (if they survived at all) by scrambling after bestsellers. Celebrity ephemera were auctioned by their agents for dizzying guarantees, while the powerful retail chains demanded ever more discounts from publishers, forcing the smaller houses to merge with and be subsumed by the conglomerates that dominate the industry today. Publishers continued to produce as many books of real merit as ever, but as Calvin Trillin put it, their shelf life had deteriorated to somewhere between that of milk and yogurt. Book publishing began more and more to resemble the mass-market-magazine business.

In 1958, I left Doubleday for Random House. My arrangement was unusual. For many years I was the firm’s editorial director, but I was also free to pursue my own ventures. By the mid-1980s, I had started a few successful businesses for the same readers for whom I had created Anchor Books. I began to look for ways to bypass the marketing forces that were eroding publishers’ backlists. In 1986, with this problem in mind, I conceived the Readers Catalog, a directory of some 40,000 backlist titles that could be ordered through an 800 number (the Internet had not yet been commercialized). The idea was to re-create a medium-sized independent bookstore in the form of a printed catalogue the size of a big-city telephone directory. Sales were brisk – but my business plan was flawed. The average revenue per order was about $35, plus shipping and handling, but the cost of handling small orders was more than could be recouped. By the time the Internet was flourishing, I had decided not to put the Readers Catalog online but instead auctioned it off to Amazon.com and Barnes and Noble – warning them that their margins would not cover the cost of handling small orders for individual customers. (They have since lost millions of dollars while performing an invaluable service to publishers, writers, and readers.)

It was in the aftermath of the failure of the Readers Catalog that I saw the solution to the prohibitive expense of physically handling thousands of low-cost items. Books, like music, are among the few commercial products that can be reduced to digital files, stored, located, and transmitted electronically at virtually no cost. Publishers had been trying to sell electronic versions of their titles online since the early 1990s. They had failed because the programs were poorly designed and because most readers resisted the idea of reading books on their computer screens or on handheld gadgets. Printed paper, folded, gathered, and bound within covers, is still the most durable, readable, portable, and economical medium for books that are meant to be kept. It must be possible, I reasoned, to reconstitute a digital file in the form of a library-quality paperback. What I imagined was the functional equivalent of an ATM – a device that would quickly print a book from a digital file, bind it, trim it, and deliver it to the reader at low cost.

A rudimentary print-on-demand technology already existed, consisting of a separate duplex printer, binder, and trimmer, but the equipment was expensive and cumbersome and required skilled operators. It was designed to function within the existing supply chain of the publishing industry, but for printings too small for a conventional high-speed press. I wanted something else – a free-standing, fully automatic machine that would bypass the entire Gutenberg system. A reader would select a file; the file would be transmitted over a secure network; and within minutes, the machine would print a single copy, in any language. The machine would deliver a book at less cost to the reader than books produced by more conventional means. By eliminating the physical supply chain, the new technology would offer readers a vastly greater selection of titles than existing technologies.

The 1950s “paperback revolution” in which I had been so bound up wasn’t a revolution at all – merely the introduction of a new format within the existing supply chain. I hankered for a true revolution, one that would maximize the world market for books and create unprecedented new efficiencies for publishers.

In 1999, I delivered three lectures at the New York Public Library, where I presented my vision of an electronic future and predicted that, sooner or later, such a machine would exist. (I reworked these remarks in my 2002 book Book Business: Publishing Past, Present, and Future.)

At the time, the mall chains had reached the limits of their expansion. Accordingly, by the early 1990s, they were being replaced by the so-called superstores, Barnes and Noble and Borders – much larger, free-standing establishments whose promise to carry large backlist inventories was often thwarted by costs that mandated instead the usual books of the moment, along with music, magazines, trinkets, and coffee bars. An alternative was more urgently needed than ever.

Getting beyond Gutenberg
What I did not know in 1999 was that the book machine I envisioned already existed. The next year, one of my lectures appeared in the New York Review of Books, where it was read by my friend Michael Smolens, an entrepreneur also interested in print-on-demand technology. He told me that such a machine was even at that moment making books in a small workshop in Missouri. Its inventor, Jeff Marsh, would welcome a visit from us. (Disclosure: Smolens and I, along with a few others, are now in business together: our company hopes to build a print-on-demand machine for less than $100,000.)

At Marsh’s workshop we watched a machine, about two-and-a-half meters long and half as high, receive a digital file, adjust itself to the dimensions of the desired book, and transmit the file to a duplex printer. The printed pages were then gathered and bound within a cover produced by a separate, four-color printer. The entire automatic process took about two minutes. The bound, 256-page book was next conveyed to a trimmer and finished, all without an operator.

It was a transcendent moment.

In the electronic future, everything ever published will be recoverable by searching on Google or sites like it (see “What’s Next for Google?”). Enthusiasts for any activity under the sun, booksellers, publishers, and eventually authors themselves will post digital files of texts on their sites. At their computers, readers will select books from an infinite library of many languages and transmit them to the nearest book machines, where they will collect the printed books at their convenience.

A post-Gutenberg system could be assembled now from existing technologies. But while the technologies exist, the commercial infrastructure to support them does not. Music publishers sell directly over the Internet to consumers who play tunes on devices like the iPod. But before book publishers can sell titles directly to readers, they will need to build thousands of book machines.

Unfortunately, the new system cannot be implemented without a viable market: none exists at the moment. One possible solution lies in the unprecedented ability of these new technologies to reach previously inaccessible markets: for example, the 47 million Americans for whom English is a second language but who have no convenient way to buy books.

Gutenberg was a Catholic entrepreneur who sold religious trinkets and printed indulgences before creating his famous Bible. He thought he could cure the schisms of the 15th century by distributing a uniform missal to all the churches of Europe. Instead, he helped create the Protestant Reformation.

The impact of today’s more powerful technologies can scarcely be imagined. What seems to me certain is that these technologies will soon overwhelm the obsolescent Gutenberg system and confront us once again with unprecedented risks and opportunities.
