Dances with Machines

Artist David Rokeby builds machines that watch us, make music with us, speak to us and free-associate on our behalf.

The movements of the lanky man on the videotape mesh perfectly with the undulating rhythms and cascading tones that accompany his dance. As the music swells, his gestures grow pronounced and emphatic; as the sound dwindles to the pulse of a synthesized bass or the flutter of an electronic clarinet, his motions diminish to the twitch of a hand or the slow sweep of an arm. The choreographer, it seems, must have worked closely with the dancer and the composer to make such a seamless piece. The reality is more complex: This dancer is, in fact, also choreographer and composer, choosing his moves on the fly while simultaneously making the music to match in an intimate collaboration with a video camera and a homemade computer system.

Sprawled shoeless on the living room floor in his Toronto home, 38-year-old David Rokeby watches the 28-year-old version of himself on a small TV set. Though his worn jeans, wire-rimmed glasses and only slightly scruffy hair make him look like the math professor his parents wanted him to be, Rokeby has instead become an internationally known interactive artist: his multimedia installations invite gallery-goers and exhibition attendees to become active participants in the artistic process.

In language that shifts easily between the professorial and the poetic, Rokeby explains both the technology and the artistic intentions behind his work. In many ways, his career sounds like that of a researcher. Rokeby thinks of each of his installations as an experiment; observing the hundreds of thousands of people who have participated in his pieces has given him an invaluable opportunity to learn about humans, machines and the very complicated relationships between them.

Through these artistic explorations, Rokeby has begun to understand how people’s interactions with computers change as technogadgetry becomes more and more common. And he has uncovered some ways that machines can subtly distort human perceptions. After years of investigating such ideas, Rokeby worries that our increasing interaction with the Internet and “intelligent” technologies might cause us to devalue some of the attributes that make us human. So while others work toward a transparent interface between person and machine, Rokeby aims to expose the quirks, foibles and rough edges of that relationship. “Because I’ve programmed a lot, because I’ve built computers, I know what it’s like to write a program and then watch people deal with it, and watch how my decisions change people’s experiences,” says Rokeby. “For me, it’s important that I somehow articulate the importance of that act.”

Rokeby played the videotape of his dance on a sunny January afternoon to demonstrate his best-known project: Very Nervous System. The name is an umbrella term for an ongoing series of installations; the project’s technological roots date back to some fiddling around with light sensors and a synthesizer that Rokeby did in the early 1980s. Over the years, Rokeby has used the technology behind Very Nervous System not only in his artistic endeavors, but also to support them; reduced to its initials, VNS is an image-processing device he builds and sells to performers, composers, researchers and other artists.

What VNS does, essentially, is translate the motion captured in a live video image into a digital signal. That signal can, via a Macintosh computer, drive electronic equipment such as synthesizers, video players and lights, all in real time. In a typical Very Nervous System installation, a body moving in the camera’s field of vision becomes an integral part of the work, triggering and modulating sounds or other effects.
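
How might that translation work? Rokeby hasn’t published the internals of VNS, but the basic move, turning the amount of change in regions of a video image into control values an instrument can respond to, can be sketched in a few lines of Python. Everything concrete below, the four-by-four grid, the 0-127 controller range and the synthetic stand-in frames, is an illustrative assumption, not a feature of the actual device.

```python
# A minimal sketch of the general idea behind a system like VNS (not Rokeby's
# actual code): estimate how much motion is in each region of a video frame
# and turn that into control values a synthesizer could respond to.
import numpy as np

def motion_energy(prev_frame: np.ndarray, frame: np.ndarray, grid=(4, 4)) -> np.ndarray:
    """Return a grid of motion-energy values in [0, 1], one per image region."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))  # per-pixel change
    h, w = diff.shape
    gh, gw = h // grid[0], w // grid[1]
    energies = np.zeros(grid)
    for r in range(grid[0]):
        for c in range(grid[1]):
            cell = diff[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            energies[r, c] = cell.mean() / 255.0
    return energies

def to_control_values(energies: np.ndarray) -> np.ndarray:
    """Map each region's motion energy to a 0-127 controller value (MIDI-like)."""
    return (energies * 127).astype(int)

# Stand-in for two consecutive grayscale camera frames (120 x 160 pixels).
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (120, 160), dtype=np.uint8)
curr = prev.copy()
curr[30:60, 40:80] = rng.integers(0, 256, (30, 40), dtype=np.uint8)  # "a hand moves" in one region

print(to_control_values(motion_energy(prev, curr)))  # only that region lights up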

Rokeby develops software and hardware for projects such as Very Nervous System with little outside help, and no formal technical training. As a teenager growing up in southern Ontario in the 1970s, he taught himself programming in order to indulge a fascination with electronic music and computer graphics. At 19, with an offer on the table for a lucrative but uninspiring job in data processing, Rokeby instead embarked on a “five-year plan”: he would focus on the things that interested him and avoid those that “smacked of career.” If it didn’t work out, he figured, he could always go back to school and get a computer science degree.

After a stint at the Ontario College of Art, almost five years to the day after he hatched his plan, Rokeby received an invitation to show his work at the Venice Biennale, arguably the world’s premier art show. The list of his artistic honors has grown steadily since.

Rokeby isn’t the only artist exploring the gray area between the body, the mind and the computer (see sidebar “Virtual Plants”), but he began doing this kind of interactive work long before most of the other artists currently on the scene, says Finnish media scholar Erkki Huhtamo, a visiting professor in the department of design at the University of California, Los Angeles. What’s more, Huhtamo says, Rokeby is one of few to have constructed his own technological tools. “He’s wonderfully capable of doing that,” says Huhtamo, “but on the other side he has applied those tools for various artworks; a career that combines these two sides meaningfully and interestingly is rather rare.”

In Watch, Rokeby created an overtly voyeuristic experience. Video projectors shine two images side by side, each a processed version of a surveillance camera’s view of a nearby public space. In Very Nervous System, the computer extracts motion from a video signal by comparing each frame with the previous one and determining which pixels have changed, but that whole procedure is invisible to the viewer. The image-processing techniques used in Watch are a dissection of VNS’s internal workings: on one side, only the things that are moving show up, white ghosts gliding through a black void; the other side shows only what is still, a seemingly normal but frozen black-and-white video image.
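
Going by that description, both projections can be derived from a single frame-difference mask. The sketch below is one plausible reconstruction, not Rokeby’s actual filtering; the 20-gray-level change threshold is an arbitrary choice.

```python
# A rough reconstruction of the two filters described above: pixels that
# changed between frames go to the "motion" image, everything else to the
# "stasis" image.
import numpy as np

def split_motion_and_stasis(prev_frame: np.ndarray, frame: np.ndarray, threshold: int = 20):
    """Return (motion, stasis) images from two consecutive grayscale frames."""
    changed = np.abs(frame.astype(int) - prev_frame.astype(int)) > threshold
    motion = np.where(changed, frame, 0)   # moving pixels kept, the rest goes black
    stasis = np.where(changed, 0, frame)   # still pixels kept, the movers are erased
    return motion.astype(np.uint8), stasis.astype(np.uint8)
```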

To these images, Rokeby adds a soundtrack: The occasional noise of a camera shutter or electronic beeping interrupts soft hypnotic sounds of breathing, a heartbeat and a ticking clock. It’s a reminder, Rokeby says, that there might be something wrong with spying on people in this way.

Watch also serves as a reminder of how different the world can look when seen through varying technological lenses. In the early days of developing the piece, Rokeby aimed the camera out his studio window at a busy intersection. The two video filters, one catching motion and the other stasis, became socioeconomic filters: in one image, members of a vibrant crowd moved swiftly about their business; in the other, panhandlers appeared to be sitting quietly alone on a deserted sidewalk.

Rokeby again draws from art a lesson about the impact of technology on our perceptions. The image-filtering techniques he employs in Watch are very similar to those used to compress video for storage or transmission. (Programmers save digital space by recording or sending only the changing pixels in successive frames of a moving image.) The more we use such techniques in daily life, he says, the more we wear inherently biased lenses. Rokeby says he is particularly concerned by the large number of design decisions being made “by programmers in startup companies working on intense deadlines, with very little experience of philosophy and politics.”

Though the insights Rokeby has gained through his art may put him in a better position to make such programming decisions, he has no desire to tie himself to his own startup company. He builds and sells only a few VNS units a year, though many more people would like to get their hands on one, according to Todd Winkler, a music professor at Brown University. “In the computer music world, his system is very well known and people talk about it, want to learn about it all the time,” says Winkler, who has used his own VNS setup for more than three years in installations, performances and demonstrations. Still, Winkler understands Rokeby’s decision to focus primarily on art rather than commerce. “Getting into the business of making little metal boxes that everybody in the world wants could really consume you completely,” Winkler says.

What is consuming Rokeby these days, instead, is his latest project, The Giver of Names. It’s a concept that came to the artist almost instantaneously on the day after his birthday in 1990. “The idea was there would be a computer and objects and you could present the objects to the computer and it would talk about them,” he recounts. Realizing this seemingly straightforward notion, however, has taken the better part of a decade.

Part of the motivation behind The Giver of Names was what Rokeby, perhaps presciently, saw as a shift in the interplay between people and technology. As he wrote in an e-mail quoted in the catalogue for the 1998 premiere of The Giver of Names, in the 1980s it was the body that was “most challenged by the computer…. In the ’90s it seems to be the notions of intelligence, and consciousness.”

Rokeby worries that as we grow accustomed to such phenomena as intelligent agents on the Internet and computerized phone systems, we may devalue certain human attributes. To talk to that computerized receptionist, for example, we often have to exaggerate and mechanize our speech; the change in enunciation is a “subtle dumbing-down process.” So rather than trying to make The Giver of Names a flawless facsimile of human thought, Rokeby wanted to leave it rough, exposing the “quirky textures” of a strictly mechanical intelligence rather than using clever programming to paper them over.

In action, The Giver of Names is quirky indeed. The installation space is spare: A video camera aims at a black pedestal around which a variety of objects are strewn. Off to one side is a Macintosh G3. Visitors can select objects from the pile, or items they’ve brought with them, and arrange them on the pedestal; the computer captures an image and processes it, identifying colors, outlines and shapes. The system then begins a mechanical version of free association, pulling up words that are somehow connected to the details culled from the image. The Giver of Names’ “state of mind” in this process is a relational database of 100,000 objects, words and ideas.

An object on the pedestal, Rokeby explains, “is like a pebble dropped in a pond of memory, and the associations are like ripples moving away from the initial object and exciting or stimulating different parts of the memory.” The words most “stimulated” in this process become the palette from which the computer chooses in forming sentences that appear on the computer screen. At the same time, male and female voices fill the installation space as they utter the words.
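
Rokeby doesn’t spell out the mechanics, but the pebble-in-a-pond description maps naturally onto spreading activation over a word-association graph. The toy graph, weights and decay factor below are invented for illustration; the real system draws on a database of some 100,000 entries.

```python
# A toy version of the "ripples in a pond" free association described above
# (an illustration of spreading activation, not The Giver of Names itself).
from collections import defaultdict

ASSOCIATIONS = {                      # hypothetical association strengths
    "red":    {"apple": 0.9, "wine": 0.8, "alien": 0.2},
    "bottle": {"wine": 0.9, "glass": 0.6},
    "apple":  {"fruit": 0.8, "wine": 0.3},
    "wine":   {"spill": 0.5, "sofa": 0.2},
}

def spread_activation(seed_words, steps=2, decay=0.5):
    """Ripple activation outward from the seed words through the graph."""
    activation = defaultdict(float)
    for word in seed_words:
        activation[word] = 1.0            # the "pebbles" dropped into memory
    frontier = dict(activation)
    for _ in range(steps):
        next_frontier = defaultdict(float)
        for word, energy in frontier.items():
            for neighbor, weight in ASSOCIATIONS.get(word, {}).items():
                next_frontier[neighbor] += energy * weight * decay
        for word, energy in next_frontier.items():
            activation[word] += energy
        frontier = next_frontier
    return sorted(activation.items(), key=lambda kv: -kv[1])

# Features a camera might extract from an apple and a soda bottle:
print(spread_activation(["red", "bottle", "apple"]))
```

In this toy run, “wine” ends up near the top of the list, echoing Rokeby’s own example below; the most stimulated words would then serve as the palette for sentence-building.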

Presented with a soda bottle and an apple, for example, the system might pick up on the red of the apple and the shape of the bottle; these would probably stimulate the word “wine,” among others, says Rokeby. “As for the sentence, it could be anything from ‘The wine spilled’ to something completely off the wall like ‘Red aliens from inner cities flopped sumptuously on the wine-stained sofa.’”

Early on, The Giver of Names tended to talk about war. The system’s fixation on generals and grenades prompted Rokeby to consider the fact that many of the databases he used were developed for military-sponsored artificial intelligence and natural-language processing research. “It’s kind of interesting,” he says, that the tools “used to train artificial intelligences about language will inevitably have a strong defense bias, because the best resources right now were funded by the Defense Department.”

Rokeby is the first to admit that such specific lessons aren’t likely to be obvious in his artworks, that most people won’t listen to The Giver of Names talking about a piece of fruit and say, “Gee, I should really think about the effects of military funding on the future of artificial intelligence.” But by seeing ourselves in collusion with and in contrast to the mechanical perceiving, thinking and speaking systems that Rokeby builds, we can all begin to think about, as he puts it, “how much of what we do is basically mechanical and how much of what we do does imply something richer and more complicated.” And Rokeby takes great satisfaction in the unique intensity with which interactive art allows him to communicate such ideas. Not everyone gets the point of each installation, he says, “but when they get it, boy do they get it.”
