
Where computing might go next

The future of computing depends in part on how we reckon with its past.

1961: IBM engineers give visitors from Ames Research Center a look at the future. NASA AMES/INTERNET ARCHIVE

If the future of computing is anything like its past, then its trajectory will depend on things that have little to do with computing itself. 

Technology does not appear from nowhere. It is rooted in time, place, and opportunity. No lab is an island; machines’ capabilities and constraints are determined not only by the laws of physics and chemistry but by who supports those technologies, who builds them, and where they grow. 

Popular characterizations of computing have long emphasized the quirkiness and brilliance of those in the field, portraying a rule-breaking realm operating off on its own. Silicon Valley’s champions and boosters have perpetuated the mythos of an innovative land of garage startups and capitalist cowboys. The reality is different. Computing’s history is modern history—and especially American history—in miniature.

The United States’ extraordinary push to develop nuclear and other weapons during World War II unleashed a torrent of public spending on science and technology. The efforts thus funded trained a generation of technologists and fostered multiple computing projects, including ENIAC, the first general-purpose electronic digital computer, completed in 1946. Many of those funding streams eventually became permanent, financing basic and applied research at a scale unimaginable before the war. 

The strategic priorities of the Cold War drove rapid development of transistorized technologies on both sides of the Iron Curtain. In a grim race for nuclear supremacy amid an optimistic age of scientific aspiration, government became computing’s biggest research sponsor and largest single customer. Colleges and universities churned out engineers and scientists. Electronic data processing defined the American age of the Organization Man, a nation built and sorted on punch cards. 

The space race, especially after the Soviets beat the US into space with the launch of the Sputnik satellite in late 1957, jump-started a silicon semiconductor industry in a sleepy agricultural region of Northern California, eventually shifting tech’s center of entrepreneurial gravity from East to West. Lanky engineers in white shirts and narrow ties turned giant machines into miniature electronic ones, sending Americans to the moon. (Women also played key roles, though these often went unrecognized.) 

In 1965, semiconductor pioneer Gordon Moore, who with colleagues had broken ranks with his boss William Shockley of Shockley Semiconductor to launch a new company, predicted that the number of transistors on an integrated circuit would double every year while costs stayed about the same. The prediction, later revised to a doubling roughly every two years, became known as Moore’s Law, and it held for decades. As computing power became greater and cheaper, digital innards replaced mechanical ones in nearly everything from cars to coffeemakers.
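To make the arithmetic behind that prediction concrete, here is a minimal Python sketch. The starting figures are illustrative assumptions rather than actual industry data; it simply compares Moore’s original annual-doubling estimate with the two-year cadence the industry later settled on.

```python
# Illustrative sketch of exponential doubling, as in Moore's prediction.
# The starting count (64 components in 1965) is an assumption for
# demonstration, not a measured industry figure.

def projected_transistors(start_count: int, start_year: int,
                          end_year: int, years_per_doubling: float) -> int:
    """Project a transistor count forward under a fixed doubling cadence."""
    doublings = (end_year - start_year) / years_per_doubling
    return int(start_count * 2 ** doublings)

if __name__ == "__main__":
    for cadence, label in [(1, "doubling every year"),
                           (2, "doubling every two years")]:
        count = projected_transistors(64, 1965, 1975, cadence)
        print(f"{label}: ~{count:,} components per chip by 1975")
```

Run over a single decade, the two cadences already diverge by a factor of 32, which is why the doubling period matters as much as the doubling itself.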

A new generation of computing innovators arrived in the Valley, beneficiaries of America’s great postwar prosperity but now protesting its wars and chafing against its culture. Their hair grew long; their shirts stayed untucked. Mainframes were seen as tools of the Establishment, and achievement on earth overshadowed shooting for the stars. Small was beautiful. Smiling young men crouched before home-brewed desktop terminals and built motherboards in garages. A beatific newly minted millionaire named Steve Jobs explained how a personal computer was like a bicycle for the mind. Despite their counterculture vibe, they were also ruthlessly competitive businesspeople. Government investment ebbed and private wealth grew. 

The ARPANET became the commercial internet. What had been a walled garden accessible only to government-funded researchers became an extraordinary new platform for communication and business, as the screech of dial-up modems connected millions of home computers to the World Wide Web. Making this strange and exciting world accessible were very young companies with odd names: Netscape, eBay, Amazon.com, Yahoo.

By the turn of the millennium, a president had declared that the era of big government was over and the future lay in the internet’s vast expanse. Wall Street clamored for tech stocks, then didn’t; fortunes were made and lost in months. After the bust, new giants emerged. Computers became smaller: a smartphone in your pocket, a voice assistant in your kitchen. They grew larger, into the vast data banks and sprawling server farms of the cloud. 

Fed with oceans of data, largely unfettered by regulation, computing got smarter. Autonomous vehicles trawled city streets, humanoid robots leaped across laboratories, algorithms tailored social media feeds and matched gig workers to customers. Fueled by the explosion of data and computation power, artificial intelligence became the new new thing. Silicon Valley was no longer a place in California but shorthand for a global industry, although tech wealth and power were consolidated ever more tightly in five US-based companies with a combined market capitalization greater than the GDP of Japan. 

It was a trajectory of progress and wealth creation that some believed inevitable and enviable. Then, starting two years ago, resurgent nationalism and an economy-upending pandemic scrambled supply chains, curtailed the movement of people and capital, and reshuffled the global order. Smartphones recorded death on the streets and insurrection at the US Capitol. AI-enabled drones surveyed the enemy from above and waged war on those below. Tech moguls sat grimly before congressional committees, their talking points ringing hollow to freshly skeptical lawmakers. 

Our relationship with computing had suddenly changed.

The past seven decades have produced stunning breakthroughs in science and engineering. The pace and scale of change would have amazed our mid-20th-century forebears. Yet techno-optimistic assurances about the positive social power of a networked computer on every desk have proved tragically naïve. Of late, the information age has been more effective at fomenting discord than at advancing enlightenment, exacerbating social inequities and economic inequalities rather than transcending them. 

The technology industry—produced and made wealthy by these immense advances in computing—has failed to imagine alternative futures both bold and practicable enough to address humanity’s gravest health and climatic challenges. Silicon Valley leaders promise space colonies while building grand corporate headquarters below sea level. They proclaim that the future lies in the metaverse, in the blockchain, in cryptocurrencies whose energy demands exceed those of entire nation-states. 

The future of computing feels more tenuous, harder to map in a sea of information and disruption. That is not to say that predictions are futile, or that those who build and use technology have no control over where computing goes next. To the contrary: history abounds with examples of individual and collective action that altered social and political outcomes. But there are limits to the power of technology to overcome earthbound realities of politics, markets, and culture. 

To understand computing’s future, look beyond the machine.

1. The hoodie problem

First, look to who will get to build the future of computing.

The tech industry long celebrated itself as a meritocracy, where anyone could get ahead on the strength of technical know-how and innovative spark. This assertion has been belied in recent years by the persistence of sharp racial and gender imbalances, particularly in the field’s topmost ranks. Men still vastly outnumber women in the C-suites and in key engineering roles at tech companies. Venture capital investors and venture-backed entrepreneurs remain mostly white and male. The number of Black and Latino technologists of any gender remains shamefully tiny. 

Much of today’s computing innovation was born in Silicon Valley. And looking backward, it becomes easier to understand where tech’s meritocratic notions come from, as well as why its diversity problem has been difficult to solve. 

Silicon Valley was once indeed a place where people without family money or connections could make a career and possibly a fortune. Those lanky engineers of the Valley’s space-age 1950s and 1960s were often heartland boys from middle-class backgrounds, riding the extraordinary escalator of upward mobility that America delivered to white men like them in the prosperous quarter-century after the end of World War II.  

Many went to college on the GI Bill and won merit scholarships to places like Stanford and MIT, or paid minimal tuition at state universities like the University of California, Berkeley. They had their pick of engineering jobs as defense contracts fueled the growth of the electronics industry. Most had stay-at-home wives whose unpaid labor freed husbands to focus their energy on building new products, companies, markets. Public investments in suburban infrastructure made their cost of living reasonable, the commutes easy, the local schools excellent. Discrimination, both legal and market-driven, kept these suburbs almost entirely white. 

In the last half-century, political change and market restructuring slowed this escalator of upward mobility to a crawl, right at the time that women and minorities finally had opportunities to climb on. By the early 2000s, the homogeneity among those who built and financed tech products entrenched certain assumptions: that women were not suited for science, that tech talent always came dressed in a hoodie and had attended an elite school, whether or not they graduated. It limited thinking about what problems to solve, what technologies to build, and what products to ship. 

Having so much technology built by a narrow demographic—highly educated, West Coast based, and disproportionately white, male, and young—becomes especially problematic as the industry and its products grow and globalize. It has fueled considerable investment in driverless cars without enough attention to the roads and cities these cars will navigate. It has propelled an embrace of big data without enough attention to the human biases contained in that data. It has produced social media platforms that have fueled political disruption and violence at home and abroad. It has left rich areas of research and potentially vast market opportunities neglected.

Computing’s lack of diversity has always been a problem, but only in the past few years has it become a topic of public conversation and a target for corporate reform. That’s a positive sign. The immense wealth generated within Silicon Valley has also created a new generation of investors, including women and minorities who are deliberately putting their money in companies run by people who look like them. 

But change is painfully slow. The market will not take care of imbalances on its own.

For the future of computing to include more diverse people and ideas, there needs to be a new escalator of upward mobility: inclusive investments in research, human capital, and communities that give a new generation the same assist the first generation of space-age engineers enjoyed. The builders cannot do it alone.

2. Brainpower monopolies

Then, look at who the industry’s customers are and how it is regulated.

The military investment that undergirded computing’s first all-digital decades still casts a long shadow. Major tech hubs of today—the Bay Area, Boston, Seattle, Los Angeles—all began as centers of Cold War research and military spending. As the industry further commercialized in the 1970s and 1980s, defense activity faded from public view, but it hardly disappeared. For academic computer science, the Pentagon became an even more significant benefactor starting with Reagan-era programs like the Strategic Defense Initiative, the computer-enabled system of missile defense memorably nicknamed “Star Wars.” 

In the past decade, after a brief lull in the early 2000s, the ties between the technology industry and the Pentagon have tightened once more. Some in Silicon Valley protest its engagement in the business of war, but their objections have done little to slow the growing stream of multibillion-dollar contracts for cloud computing and cyberweaponry. It is almost as if Silicon Valley is returning to its roots. 

Defense work is one dimension of the increasingly visible and freshly contentious entanglement between the tech industry and the US government. Another is the growing call for new technology regulation and antitrust enforcement, with potentially significant consequences for how technological research will be funded and whose interests it will serve. 

The extraordinary consolidation of wealth and power in the technology sector and the role the industry has played in spreading disinformation and sparking political ruptures have led to a dramatic change in the way lawmakers approach the industry. The US has had little appetite for reining in the tech business since the Department of Justice took on Microsoft 20 years ago. Yet after decades of bipartisan chumminess and laissez-faire tolerance, antitrust and privacy legislation is now moving through Congress. The Biden administration has appointed some of the industry’s most influential tech critics to key regulatory roles and has pushed for significant increases in regulatory enforcement. 

The five giants—Amazon, Apple, Facebook, Google, and Microsoft—now spend as much as or more than banks, pharmaceutical companies, and oil conglomerates on lobbying in Washington, DC, aiming to influence the shape of anticipated regulation. Tech leaders warn that breaking up large companies will open a path for Chinese firms to dominate global markets, and that regulatory intervention will squelch the innovation that made Silicon Valley great in the first place.

Viewed through a longer lens, the political pushback against Big Tech’s power is not surprising. Although sparked by the 2016 American presidential election, the Brexit referendum, and the role social media disinformation campaigns may have played in both, the political mood echoes one seen over a century ago. 

We might be looking at a tech future where companies remain large but regulated, comparable to the technology and communications giants of the middle part of the 20th century. This model did not squelch technological innovation. Today, it could actually aid its growth and promote the sharing of new technologies. 

Take the case of AT&T, a regulated monopoly for seven decades before its ultimate breakup in the early 1980s. In exchange for allowing it to provide universal telephone service, the US government required AT&T to stay out of other communication businesses, first by selling its telegraph subsidiary and later by steering clear of computing. 

Like any for-profit enterprise, AT&T had a hard time sticking to the rules, especially after the computing field took off in the 1940s. One of these violations resulted in a 1956 consent decree under which the US required the telephone giant to license the inventions produced in its industrial research arm, Bell Laboratories, to other companies. One of those products was the transistor. Had AT&T not been forced to share this and related technological breakthroughs with other laboratories and firms, the trajectory of computing would have been dramatically different.

Right now, industrial research and development activities are extraordinarily concentrated once again. Regulators mostly looked the other way over the past two decades as tech firms pursued growth at all costs, and as large companies acquired smaller competitors. Top researchers left academia for high-paying jobs at the tech giants as well, consolidating a huge amount of the field’s brainpower in a few companies. 

More so than at any other time in Silicon Valley’s ferociously entrepreneurial history, it is remarkably difficult for new entrants and their technologies to sustain meaningful market share without being subsumed or squelched by a larger, well-capitalized, market-dominant firm. More of computing’s big ideas are coming from a handful of industrial research labs and, not surprisingly, reflecting the business priorities of a select few large tech companies.

Tech firms may decry government intervention as antithetical to their ability to innovate. But follow the money, and the regulation, and it is clear that the public sector has played a critical role in fueling new computing discoveries—and building new markets around them—from the start. 

3. Location, location, location

Last, think about where the business of computing happens.

The question of where “the next Silicon Valley” might grow has consumed politicians and business strategists around the world for far longer than you might imagine. French president Charles de Gaulle toured the Valley in 1960 to try to unlock its secrets. Many world leaders have followed in the decades since. 

Silicon Somethings have sprung up across many continents, their gleaming research parks and California-style subdivisions designed to lure a globe-trotting workforce and cultivate a new set of tech entrepreneurs. Many have fallen short of their startup dreams, and all have fallen short of the standard set by the original, which has retained an extraordinary ability to generate one blockbuster company after another, through boom and bust. 

While tech startups have begun to appear in a wider variety of places, about three in 10 venture capital firms and close to 60% of available investment dollars remain concentrated in the Bay Area. After more than half a century, it remains the center of computing innovation. 

It does, however, have significant competition. China has been making the kinds of investments in higher education and advanced research that the US government made in the early Cold War, and its technology and internet sectors have produced enormous companies with global reach. 

The specter of Chinese competition has driven bipartisan support for renewed American tech investment, including a potentially massive infusion of public subsidies into the US semiconductor industry. American companies have been losing ground to Asian competitors in the chip market for years. The economy-choking consequences of this became painfully clear when covid-related shutdowns slowed chip imports to a trickle, throttling production of the many consumer goods that rely on semiconductors to function.

As when Japan posed a competitive threat 40 years ago, the American agitation over China runs the risk of slipping into corrosive stereotypes and lightly veiled xenophobia. But it is also true that computing technology reflects the state and society that make it, whether it be the American military-industrial complex of the late 20th century, the hippie-influenced West Coast culture of the 1970s, or the communist-capitalist China of today. 

What’s next

Historians like me dislike making predictions. We know how difficult it is to map the future, especially when it comes to technology, and how often past forecasters have gotten things wrong. 

Intensely forward-thinking and impatient with incrementalism, many modern technologists—especially those at the helm of large for-profit enterprises—are the opposite. They disdain politics, and resist getting dragged down by the realities of past and present as they imagine what lies over the horizon. They dream of a new age of quantum computers and artificial general intelligence, where machines do most of the work and much of the thinking. 

They could use a healthy dose of historical thinking. 

Whatever computing innovations will appear in the future, what matters most is how our culture, businesses, and society choose to use them. And those of us who analyze the past also should take some inspiration and direction from the technologists who have imagined what is not yet possible. Together, looking forward and backward, we may yet be able to get where we need to go. 

Margaret O’Mara is a historian at the University of Washington and author of The Code: Silicon Valley and the Remaking of America.
