“The computer of the future: if Stan Williams of HP has his way, it will be a computer that assembles itself – in a beaker.” “Mining the Genome for New Drugs.” “Micromachines – The Next Big Thing.”
Intrigued? Actually, these are all five-year-old stories, featured on the cover of the September/October 1999 issue of Technology Review.
The days of the tech bubble are gone forever, but the cover stories linger on. Those precise topics could reappear this year or next with little fear of editorial embarrassment. It’s not even snarky to say so; it would be truly astonishing if the biggest technology challenges of 1999 weren’t challenges today. But beyond the old theme of plus ça change, these hardy perennials contain critical insights for innovators – if they know how to read (and reread) them.
Those insights have less to do with the technologies themselves than with the expectations they excited. Look at old issues of Technology Review or Wired, and you’ll swiftly realize that the past isn’t prologue: it’s a turbulent world of heady speculations and unfulfilled promises. Those “promises” are indispensable elements of the innovation ecosystem. When artfully calibrated against actual progress, they keep markets salivating and investment – of both financial and human capital – flowing. Reviewing the headlines of times past can help innovators constructively fine-tune that balance around their own inventions.
In technology journalism, the pattern is almost always the same: the promise of a technical invention provokes a swirl of speculation around its potential impact if and when it reaches the market. In other words, novel inventions breed bold intentions. But exactly what happens to Intel if computers start self-assembling in beakers? Which new drugs will be profitably brought to the surface by genome mining? How micro or nano will those machines actually become on their way to being the Next Big Thing? We don’t know the answers; we can’t know. All we can say with any confidence is that inventors have fervent expectations for progress in their fields. They’re investing their efforts and ingenuity accordingly.
Why is this so important? Because successful innovation – seeing invention through to adoption – isn’t just about managing technical breakthroughs; it’s about managing people’s expectations. Always. Credibly aligning technical progress with past promises is the central challenge confronting most innovators.
By far the most successful example is Moore’s Law. For almost 40 years, circuit densities have doubled every 18 to 24 months, just as Intel cofounder Gordon Moore predicted. In this case, the prediction itself, and the pace the semiconductor industry has historically set for itself in order to keep up with the prediction, have seamlessly blended into one. Moore’s Law is as much a sustaining ideology as an engineering insight.
But this magnificent exception proves the rule. What does recent history teach about the marriage of technical prediction and market expectation? Reread the September/October 1999 Technology Review, and the answer is obvious: innovators stink at managing expectations. They’re either overly optimistic or unduly pessimistic. They haven’t a clue what new costs their innovations will impose on potential users. They have no credible way to assess what needs will be most important to those users three years hence.
But the reason innovators should read these tales of technologies past is emphatically not to “learn from the lessons of history.” Rather, it’s to see how expectations have changed over time. What expectations were innovators trying to create? How fast are those expectations changing, compared to market conditions? Determining this “expectations calculus” is essential to managing innovation.
Take the story of gallium arsenide. For years, this exotic material was promoted in the press as a replacement for silicon in integrated circuits, with proponents touting its superior speed. But gallium arsenide is expensive, and Moore’s Law had long since trained the computing market to expect continual cost reductions, not increases. In this case, a careful analysis of the hype versus the economics would have nudged innovators back toward silicon or toward niches where gallium arsenide might be worth its significantly higher cost. In fact, that’s exactly what happened: gallium arsenide has become a key material in high-speed chips for cell phones and other telecommunications devices.
The history of technology predictions is a resource to be mined, not a pile of failed futurology to lampoon. Don’t save past issues of Technology Review to see what the magazine got right or wrong; treat TR and its conceptual cohorts as instruments that measure the expectations tomorrow’s innovators need to understand and exploit.