Smart cities

The smart city is a perpetually unrealized utopia

Urban technologies were meant to connect, protect, and enhance the lives of citizens. What happened?

June 24, 2022
New Babylon artwork
Like nearly every imagined future utopia, New Babylon exists only in architectural drawings, sketches, maps, collages, and experimental films. FONDATION CONSTANT / ARTISTS RIGHTS SOCIETY, NEW YORK, via PICTORIGHT AMSTERDAM

In 1959, in a short essay called “The Great Game to Come,” a little-known Dutch visual artist named Constant Nieuwenhuys described a new utopian city—one that he was soon to dub “New Babylon.” “The technical inventions that humanity has at its disposal today,” he presciently stated, “will play a major role in the construction of the ambiance-cities of the future.”

Like nearly every imagined future utopia, New Babylon was never built. It was manifested only in architectural drawings, sketches, maps, collages, and experimental films. Its creator, generally known as Constant, envisioned his city as a complex network where artificial and natural spaces would be linked together by communication infrastructures; “recourse to a computer” would be necessary to resolve such a complex organizational problem. But New Babylon was to be something even more radical: a place where new technologies would replace the drudgery of labor by automatic processes, enabling the city’s inhabitants to experience a “nomadic life of creative play.”

Today, Constant’s pronouncement seems prophetic. No doubt computers would also have been needed to achieve his visionary concept of an environment in which “each person can at any moment, in any place, alter the ambiance by adjusting the sound volume, the brightness of the light, the olfactive ambiance or the temperature.” Above all, electronic technologies would enable complete transformations of sound, light, and the organization of space in New Babylon. These transformations would be accomplished by what Constant called “the most sophisticated behind-the-scenes automation,” while electronics themselves “would be part of the visible scenery.” Spaces in New Babylon would somehow need to be “aware” of the activities taking place in them so that the environment could know when to change its appearance and behavior. 

Constant was soon to achieve international renown as one of the founding members of the Situationist International (1957–1972)—a group of artists, writers, and philosophers who aimed to apply Marxism to contemporary urban society. Like many of his SI compatriots, Constant viewed the post-WWII city as a site for both critique and intervention. He and a Situationist collaborator, the cultural critic Guy Debord, declared as much in setting forth a concept they dubbed “Unitary Urbanism,” which considered the city not as an agglomeration of faceless architecture and bureaucratic processes but as a set of creative social practices.

New Babylon took shape during the two-year period that Constant was a member of the SI. It was not so much an architectural planning project as it was “a way of thinking, of imagining, of looking on things and on life.” Although echoing other technology-charged 1960s utopian city visions such as Archigram’s “Walking City” or the performative “Villa Rosa–Pneumatic Living Unit” from the Austrian avant-garde collective Coop Himmelb(l)au, New Babylon began to gel in, of all places, the countryside. In 1959, the artist participated in an experimental-urbanism workshop in the Italian town of Alba at the base of the Piedmont Mountains. Sympathetic to the presence of nomadic Roma camped out by the Tanaro River, he began working on a concept to create a “permanent encampment” for the migrants “where under one roof, with the aid of moveable elements, a shared temporary, constantly remodeled living area is built.”

New Babylon would gestate in Constant’s mind for two decades. In his vision, land would be collectively owned, social systems would be hyper-connected, and automation would create a life of leisure for its citizens. To achieve a new “social organization of the city,” Constant imagined a vast hierarchy of local sites (what he called “sectors”) connected globally (“networks”). Groupings of interlinked platforms were envisioned as being completely transformable so as to create dynamic relations between inhabitants (“New Babylonians”) and their surroundings. With interwoven levels of transport networks and spaces all linked by communications infrastructure, New Babylon defied traditional cartography. Clearly the artist knew, however, that running such a complex, interconnected system would require help from the emerging technologies of computational management and control. Though he had neither the ability to construct New Babylon nor an interest in actually doing so, his concept seemed like an idea whose time would come.

Rise of the smart city

In 1974, the same year that Constant ceased working on New Babylon, a little-known report was published by the Los Angeles Community Analysis Bureau (CAB), titled “The State of the City: A Cluster Analysis of Los Angeles.” The report offered the typical stuff of urban research—statistical analysis, demographic data, and housing assessments. But what was not apparent was how the CAB had gathered the data. 

While urban theorists somewhat myopically trace the concept of the “smart city” back to the 1990s, when IBM arguably first coined the term, the CAB’s research represents one of the earliest large-scale efforts to model the urban environment through “big data.” Utilizing a combination of computerized data gathering and storage, statistical cluster analysis techniques, aerial-based color infrared photography (what we today call remote sensing), and direct “on the ground” (i.e., driving around the city) validation of the aerial images, the CAB’s analysis was decidedly different from previous attempts. The CAB partitioned the city into clusters representing social-geographic features that sound straight out of today’s social media playbook: “LA singles,” “the urban poor,” “1950s-styled suburbs.” What the cluster analysis truly revealed were correlations between socioeconomic forces that could be used as predictors for which neighborhoods were falling into poverty and “urban blight.”
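The CAB’s approach—grouping census tracts by socioeconomic features so that clusters like “the urban poor” or “1950s-styled suburbs” fall out of the data—can be sketched with a toy k-means clustering. The tract names and feature values below are invented for illustration; the CAB’s actual methodology and variables were far richer.

```python
# Toy sketch of the kind of statistical cluster analysis the CAB ran:
# group city tracts by socioeconomic features. All data here is invented.
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means on lists of floats; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: each centroid moves to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return centroids, labels

# Invented features per tract: (median income in $1000s, vacancy rate %).
tracts = {
    "Tract A": [12.0, 18.0],
    "Tract B": [13.5, 16.0],
    "Tract C": [41.0, 3.0],
    "Tract D": [39.0, 4.5],
}
_, labels = kmeans(list(tracts.values()), k=2)
```

On this toy data, the low-income/high-vacancy tracts (A, B) and the wealthier tracts (C, D) land in separate clusters—the same kind of correlation the CAB used to flag neighborhoods at risk of “urban blight.”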

Though innovative for the time, the CAB’s harnessing of punch cards and computer-based databases was not an isolated endeavor. It was part of a much larger set of postwar experiments focused on reimagining the urban through computational processes. The urban theorist Kevin Lynch’s 1960 book The Image of the City spurred years of cognitive-science research into how we mentally map typological elements in urban space (paths, edges, nodes, districts, and landmarks). Cyberneticians such as Jay Forrester at MIT sought to apply complex systems dynamics by way of computer simulations to understand the feedback loops within urban development, involving everything from population and housing to the influence of industry on growth. With Forrester, Lynch, and others, the foundations for smart cities were being laid, just as sensing and computing were entering into the public consciousness.


The contemporary vision of the smart city is by now well known. It is, in the words of IBM, “one of instrumentation, interconnectedness, and intelligence.” “Instrumentation” refers to sensor technologies, while “interconnectedness” describes the integration of sensor data into computational platforms “that allow the communication of such information among various city services.” A smart city is only as good as the imagined intelligence that it either produces or extracts. The larger question, however, is what role human intelligence has in the network of “complex analytics, modeling, optimization, visualization services, and last but certainly not least, AI” that IBM announced. The company actually trademarked the term “smarter cities” in November 2011, underlining the reality that such cities would no longer fully belong to those who inhabited them.

What is interesting about both early and current visions of urban sensing networks and the use that could be made of the data they produced is how close to and yet how far away they are from Constant’s concept of what such technologies would bring about. New Babylon’s technological imagery was a vision of a smart city not marked, like IBM’s, by large-scale data extraction to increase revenue streams through everything from parking and shopping to health care and utility monitoring. New Babylon was unequivocally anticapitalist; it was formed by the belief that pervasive and aware technologies would somehow, someday, release us from the drudgery of labor.

War and sensors

The apocalyptic news broadcast from Mariupol, Kharkiv, Izium, Kherson, and Kyiv since February 2022 seems remote from the smart urbanism of IBM. After all, smart sensors and sophisticated machine-learning algorithms are no match for the brute force of the unguided “dumb bombs” raining down on Ukrainian urban centers. But the horrific images from these smoldering cities should also remind us that historically, these very sensor networks and systems themselves derive from the context of war.

Unbeknownst to Constant, the very “ambient” technologies he imagined to enable the new playful city were actually emerging in the same period his vision was taking shape—from Cold War–fueled research at the US Department of Defense. This work reached its height during the Vietnam War, when in an effort to stop supply chains flowing from north to south along the Ho Chi Minh Trail, the US Army dropped some 20,000 battery-powered wireless acoustic sensors, advancing General William Westmoreland’s vision of “near 24-hour real- or near-real-time surveillance of all types.” In fact, what the US Defense Advanced Research Projects Agency (DARPA) would later call “network-centric warfare” was the result of multibillion-dollar funding at MIT and Carnegie Mellon, among other elite US universities, to support research into developing distributed wireless sensor networks—the very technologies now powering “greater lethality” for the military’s smartest technology.

satellite image of Ukrainian city
Networks of smart sensors are no match for the brute force of unguided “dumb bombs” like the ones raining down on Ukrainian urban centers.
MAXAR TECHNOLOGIES

It is well known that technologies originally developed by DARPA, the storied agency responsible for “catalyzing the development of technologies that maintain and advance the capabilities and technical superiority of the US military” (as a congressional report put it), have been successfully repurposed for civilian use. ARPANET eventually became the Internet, while technologies such as Siri, dynamic random-access memory (DRAM), and the micro hard drive are by now features of everyday life. What is less known is that DARPA-funded technologies have also ended up in the smart city: GPS, mesh networks for smart lighting systems and energy grids, and chemical, biological, and radiological sensors, including genetically reengineered plants that can detect threats. This link between smart cities and military research is highly active today. For example, a recent DARPA research program called CASCADE (Complex Adaptive System Composition and Design Environment) explicitly compares “manned and unmanned aircraft,” which “share data and resources in real time” thanks to connections over wireless networks, to the “critical infrastructure systems” of smart cities—“water, power, transportation, communications, and cyber.” Both, it notes, apply the mathematical techniques of complex dynamic systems. A DARPA tweet puts this link more provocatively: “What do smart cities and air warfare have in common? The need for complex, adaptive networks.”

Both these visions—the sensor-studded battlefield and the instrumented, interconnected, intelligent city enabled by the technologies of distributed sensing and massive data mining—seem to lack a central ingredient: human bodies, which are always the first things to be sacrificed, whether on the battlefield or in the data extraction machinery of smart technologies.

Spaces and environments outfitted with sensor networks can now perceive environmental changes—light, temperature, humidity, sound, or motion—that move over and through a space. In this sense the networks are something akin to bodies, because they are aware of the changing environmental conditions around them—measuring, making distinctions, and reacting to these changes. But what of actual people? Is there another role for us in the smart city apart from serving as convenient repositories of data? In his 1980 book The Practice of Everyday Life, the Jesuit social historian Michel de Certeau suggested that resistance to the “celestial eye” of power from above must be met by the force of “ordinary practitioners of the city” who live “down below.”

When we assume that data is more important than the people who created it, we reduce the scope and potential of what diverse human bodies can bring to the “smart city” of the present and future. But the real “smart” city consists not only of commodity flows and information networks generating revenue streams for the likes of Cisco or Amazon. The smartness comes from the diverse human bodies of different genders, cultures, and classes whose rich, complex, and even fragile identities ultimately make the city what it is.

Chris Salter is an artist and professor of immersive arts at the Zurich University of the Arts. His newest book, Sensing Machines: How Sensors Shape Our Everyday Life, has just been published by MIT Press.
