MIT News magazine

Opera, Remixed

My UROP in the Opera of the Future lab
December 20, 2011

I’m sitting in the cavernous Harris Theater in Chicago when an older man with a beard steps up to a microphone. “This project is something that will be remembered,” he says. “Ten or 100 years from now the history books will list this as the turning point, a great shift for the world of opera.” 

Although I’m a pop-music-loving college student, not an opera buff, I couldn’t agree more. And I am here—950 miles from MIT on a Monday afternoon, listening to discussions about arias and orchestration—because we’re gathered to talk about Death and the Powers, which is anything but a normal opera.

Let’s rewind four years. I arrived at MIT as a starry-eyed freshman looking for things to do and found Professor Tod Machover’s research group, Opera of the Future. Hey, it’s not my favorite kind of music, but a brief glimpse at his projects was enough to intrigue me. Tod, who has two Juilliard degrees in music composition and loves studio masterpieces like the Beatles’ Sgt. Pepper’s Lonely Hearts Club Band, has been at the Media Lab since before I was born, developing instruments that allow performers to re-create complex studio performances live. 

I have a passion for entertainment technology—not game consoles or televisions but large-scale cranes, state-of-the-art lighting, video, audio, and any other tools of artistic expression that can change the environment for a lot of people sitting in the same place. So it was with excitement and trepidation that I began to work for Tod as a UROP.

It started out simply. I built a small recording studio out of gear I found around the lab. One day, I stumbled across a paper describing an interesting method for reproducing 3-D audio: a single recording could be played back on any number of speakers, and the more speakers were used, the more precisely all the parts of the recording seemed to be positioned in space around the listener. Even basic systems I built using a few speakers sounded much more compelling than high-end home theater and even cinema systems. I mentioned to Tod that with some modifications, perhaps the same ideas could be applied to a live performance in a bigger venue. Little did I know that he had a gig already lined up. That fall I found myself at the Sage Gateshead, a 1,700-seat concert hall near Newcastle, England, doing sound mixing for the world premiere of Tod’s opera Skellig.
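The article doesn't name the paper, but the property it describes—one recording that can be decoded to any number of speakers, with localization improving as speakers are added—matches the ambisonics family of techniques. A minimal sketch of first-order horizontal ambisonic encoding and decoding, with hypothetical function names of my own, might look like this:

```python
import math

def encode_b_format(sample, azimuth):
    """Encode one mono sample into first-order B-format (W, X, Y)
    for a source at the given azimuth in radians (horizontal plane).
    These three channels fully describe the horizontal sound field."""
    w = sample / math.sqrt(2)       # omnidirectional pressure component
    x = sample * math.cos(azimuth)  # front-back velocity component
    y = sample * math.sin(azimuth)  # left-right velocity component
    return w, x, y

def decode_to_speakers(w, x, y, num_speakers):
    """Decode one B-format sample to speaker feeds for num_speakers
    placed evenly in a circle around the listener. The same three
    channels drive any speaker count; more speakers sample the
    reconstructed field more densely."""
    feeds = []
    for i in range(num_speakers):
        angle = 2 * math.pi * i / num_speakers
        # Project the encoded sound field onto each speaker's direction.
        g = (w / math.sqrt(2) + x * math.cos(angle) + y * math.sin(angle)) / num_speakers
        feeds.append(g)
    return feeds
```

Real decoders weigh the components differently for different rooms and speaker layouts, but the core idea is the same: a source encoded at the front (azimuth 0) produces the strongest feed at the front speaker, and the decode step—not the recording—determines how many speakers are used.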

Since then it’s been a blur. I’ve been to New York and London (several times), San Remo, Italy (for the pizza!), Monte Carlo, and Detroit. And I’ve worked with Hollywood designers, Broadway directors, and incredible crews and technicians to design and build high-resolution surround-sound systems in each place. It’s sometimes stressful. My job is to guarantee that everything will go as planned, which is easier said than done, given that our systems are made primarily with custom hardware and software. But when 8,500 people have purchased tickets, failure is not an option. 

I’m also the ears of Opera of the Future productions, mixing live performances and weaving together layers of sonic texture. I memorize each piece so I know exactly how to move the faders controlling audio inputs. Yes, I’m that guy at the back mouthing the words along with the singers on stage. It helps me concentrate.

Our latest production, Death and the Powers, is a robot opera with 400 lighting instruments, 143 speakers, 43 computers, 12 robots, four miles of cable, and three video walls that move autonomously like self-aware SUVs, each weighing more than two tons. I take pride in working with the Opera of the Future team to design reliable systems that encompass a whole theater from the stage to the seats. Every part of the show is connected to the performers on stage: lighting, video, audio, and robotics. The result is, we hope, technology that brings the audience closer to the performers.

Judging from reactions and reviews, we seem to be doing a good job. A small crowd greets me after each show to express awe at the performance—and often to ask what I actually do. When I explain that I balance the volume of 350 sound inputs, I garner almost as much respect as the robot operators. That’s okay with me; I’ll take audio over robots any day. 

Ben Bloomberg ’11 is finishing his undergraduate work this spring and plans to get a master’s degree at the Media Lab, continuing his work on low-cost surround-sound hardware.
