When Brandon Araki arrived at MIT in 2015 as a master’s candidate in mechanical engineering, he brought along the picobug, a tiny robot that can fly, crawl, and grasp small objects. Before Araki joined Daniela Rus’s Distributed Robotics Lab (DRL), he’d been working with collaborators at several universities on the diminutive autonomous machine, which weighs 30 grams and fits in the palm of his hand. He wasn’t quite sure what he might do next with the picobug, but when his new boss watched it in action, she was smitten. “I want a hundred of them!” Rus said.
This request wasn’t just greedy excitement. Rus, who doubles as the director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), imagines a future packed with autonomous machines capable of flying, driving, performing simple surgeries, and more. “My big dream is to have a world with pervasive machines, pervasive robotics integrated into the fabric of everyday life, helping everyone with physical work and cognitive tasks,” she says.
In the case of Araki’s project, she envisions a fleet of larger autonomous drones zipping around a city delivering packages. But that’s just one of the dozens of systems Rus and her researchers are developing that could affect many areas of everyday life. Recently, they have demonstrated pill-size robots that can move about within the body to repair internal wounds. They have programmed drones to pair with self-driving cars by flying ahead and scanning blind spots. They’ve developed new communication techniques and security models for multi-robot systems. Her lab has produced a novel robotic hand, a 3-D-printed fish that swims like the real thing, a wearable navigation system for blind people, shock-absorbing robotic skins, and more.
Rus, a MacArthur “genius” grant winner in 2002, has gained world recognition for her pioneering work in modular and reconfigurable robots, multi-robot systems, and control algorithms. She is a National Science Foundation Career Award winner and a fellow of the Association for the Advancement of Artificial Intelligence, the IEEE, the AAAS, and the Alfred P. Sloan Foundation, to name just a few of her honors. “A lot of the things she has done look like magic in the beginning because they’re these ingenious ideas no one has thought of,” says roboticist Hod Lipson of Columbia University. “She is ahead of her time.”
And her students learn to share her imaginative but rigorous approach to robotics. Along with other collaborators, she and her students presented 15 papers at the 2017 IEEE International Conference on Robotics and Automation in Singapore, covering all areas of the field, from novel algorithms to new types of hardware. “It is an incredibly productive laboratory, and they really produce diverse projects, all of which are creative and tackle some problem in such a solid way,” says Harvard University computer scientist Radhika Nagpal. “The technical beauty of the mathematics in her work is one thing, but there’s also a great deal of technical beauty in the robots she builds. To achieve both is unusual. To achieve both in as many different fields as she does is completely unusual.”
Body and brain
Rus, 54, was born and raised in communist Romania, where her interest in fantastic technology was stoked at an early age. Her father was a computer scientist, and she was drawn to the gadgetry of Star Trek, the technology of Jules Verne, and an unusual Dutch cartoon, Barbapapa, that featured a family of shape-shifting blobs, including an inventor who built amazing machines. After her family emigrated to the U.S., she studied computer science and mathematics at the University of Iowa and then pursued her PhD under the famous computer science theorist John Hopcroft at Cornell University. (“John said that a lot of the classic algorithms were solved, and that now it was time for the grand applications,” she recalls. “And to him, the grand applications meant robotics.”) Yet as she developed algorithms to help robots grasp and manipulate objects for her doctoral research, she found that the capabilities of robotic systems in the late 1980s and early 1990s did not match the science fiction visions of her childhood. “I had these beautiful algorithms that worked very well in simulation,” she recalls, “but there were no robot fingers that could exert the kind of forces and torques that my algorithms needed.”
After Cornell, Rus became a professor of computer science at Dartmouth College, where she founded the Dartmouth Robotics Lab. She’d already been working on teams of robots acting in concert and expanded her focus to include modular, self-reconfiguring robots that could assume different forms and shapes—much like the characters in the cartoon from her childhood. Still, she ran into the same challenge. These systems could be fully demonstrated only in computer simulations. The actual machines to execute her algorithms had not yet been built.
Rus realized that in order for robots to be truly capable, their brains and bodies needed to be equally advanced. “You need a brain that can control the body, but the body has to be capable of the task you give it,” she says. “So you need to think about the capabilities of the body, then the science and mathematics that give the body the control system it needs.”
In 2003, Rus joined the faculty at MIT, bringing her lab with her and renaming it the Distributed Robotics Lab to align with her vision: a future of pervasive robots. She became co-director of CSAIL’s Center for Robotics in 2005 and CSAIL’s associate director three years later; by 2012, she was considering taking a sabbatical and launching a startup. Her passion for research had not faded—she was simply looking for something different. Then she learned that the director position was opening at CSAIL. “It’s such an extraordinary organization and it has played such a big role in shaping the future,” she says. “To be able to help shape how CSAIL moves forward, to help make it more impactful—this was kind of like the American dream.”
The sabbatical could wait.
CSAIL was already well established as the world’s top research center for computer science when Rus—the first woman to lead the lab—took over in 2012. Today, CSAIL counts seven MacArthur winners, eight Turing Award recipients, and a knight (Tim Berners-Lee). MIT’s largest research lab, it houses 115 principal investigators (PIs), hundreds of scientists and students, and more than 800 research projects. “There are a lot of ideas here, and each one of our PIs is a big dreamer,” Rus says. “We are kind of a union of dreams, and my role is to make sure we have the environment to cultivate these big dreams and ideas.”
To do so, Rus works to maintain a culture that supports growth in everyone at CSAIL, from administrative staff to faculty luminaries. She fosters a sense of community through the usual methods, including regular meetings, social gatherings, and symposia. But her efforts on this front also range from the intensely personal—a roboticist spoke of how she helped him through a divorce—to the whimsical. Once, a piano that Rus had bought for her daughter was mistakenly shipped to the lab instead of her home, and people began sitting down during the day to play. “During those two weeks, I learned that many of my students were actually very talented artists,” she recalls. “One day, I heard one of the students playing Chopin, and it was so beautiful.” When the piano was removed, she bought the lab a keyboard, and the impromptu concerts continued.
The other two pillars of her strategy for leading CSAIL—resources and ideas—are directly related. Rus identifies the problems in computing that can have the greatest impact, and then pursues partnerships to secure the funding necessary to tackle them. A big-data initiative rolled out just as she became director in 2012, and she has since launched four more major industry- or government-sponsored initiatives focused on cybersecurity, autonomous vehicles, machine learning, and health care.
For example, Rus forged a $25 million autonomous-vehicle program with Toyota, forming the Toyota-CSAIL Joint Research Center. Instead of completely driverless cars like those Google or Uber are attempting to build, she envisions vehicles equipped with laser range-finders and other advanced sensors that could be used to help people drive safely in crowded cities or bad weather. The car would assume complete control only if you were approaching a turn dangerously fast, for example, or switching lanes when another car was already there.
This venture alone includes 17 different projects engaging various CSAIL labs, including her own DRL and groups that focus on computer vision, machine learning, and sensor development. “I’m very interested in projects that cut across multiple fields of computing and are bigger than what each individual researcher can do,” she explains. As director, Rus also sees herself as a spokesperson for the laboratory, a kind of scientist storyteller who literally travels the world regaling researchers at conferences and meetings with tales of CSAIL’s latest feats. Nagpal says Rus’s talks inspire her own group at Harvard, and she suspects that she has the same effect on others. “She’s the person who makes you feel like this should be fun,” Nagpal says. “She creates an atmosphere in which people are more likely to be creative and try something that might not a priori make sense, but would be really wild and cool if it were possible.”
Rus says the work in the DRL is intended to advance the science of autonomy. Projects generally follow the same broad methodology. Most start by identifying and understanding a problem and dreaming up the robots that could solve it (whether they already exist or need to be built from scratch). Then the researchers develop algorithms to control the machines, run them in simulation, and finally test them in the real world.
The results often challenge the standard definition of robots. In 2016, Rus, mechanical engineering grad student Steven Guitron, CSAIL postdoc Shuguang Li, and colleagues from other institutions introduced an origami-inspired ingestible robot. To get it to work, they had to design a robotic body that could be compressed into a capsule the size of a pill, then unfold and carry out tasks when the capsule dissolved. The miniature machine has not been tested in vivo yet, but Rus says that future versions could be microsurgeons, performing surgeries without incisions or physical pain.
“She says we need to think about the technology we need to create 10 years from now, how to think about problems that are important not just for the next iteration, but the next hundred iterations of a technology,” says postdoc Cristian-Ioan Vasile.
The origami robots link to her previous work on self-reconfiguring machines, but they also reflect a larger effort to reimagine the robot body. Rus launched the Soft Robotics Group within her lab because of her conviction that rigid, stiff machines have too many limitations. One of the group’s first members, Robert Katzschmann, made a 3-D-printed fish with flexible actuators, or artificial muscles. Later, while working with other members to improve a robot’s ability to grasp fragile objects—similar to the task Rus tried to tackle in her PhD work—Katzschmann remodeled the actuators he’d developed for the fish to help engineer a new kind of hand for humanoid robots like Baxter. (Baxter is the brainchild of former CSAIL director Rodney Brooks and his company, Rethink Robotics.) Unlike the rigid robotic forms of the past, Katzschmann’s soft, three-fingered hand can bend and “feel,” allowing it to pick up a wider range of objects—and identify them without using vision algorithms.
Rus also works to enhance robots’ ability to reason and make good decisions. Vasile is developing algorithms that make sure autonomous cars always operate safely, avoiding collisions with other vehicles and pedestrians, and also strike the right balance between satisfying the rules of the road and reaching their destination efficiently. (For example, the default mode of staying in the right lane could be overridden by the need to get around a construction site.) Still, a smart car can only make decisions based on the data it can access. The vehicle won’t be able to see around corners in a tight parking garage, for example. So another DRL group paired a robotic car with an autonomous drone. As the car identifies blind spots, the drone flies ahead, scans these areas for potential hazards, and then sends the video back to the vehicle for processing.
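One common way to frame this kind of trade-off—balancing hard safety requirements against softer rules of the road—is to treat each rule as a weighted penalty and pick the maneuver with the lowest total cost. The sketch below illustrates that general idea only; the rule names, weights, and maneuvers are hypothetical and are not taken from Vasile's actual algorithms.

```python
# A minimal sketch of prioritized driving rules as weighted penalties.
# Safety rules carry very large weights, so they effectively can never
# be traded away for convenience; road conventions can be relaxed.
# All names and numbers here are illustrative assumptions.

RULE_WEIGHTS = {
    "avoid_collision": 1000.0,   # safety: effectively inviolable
    "stay_in_right_lane": 10.0,  # convention: may be relaxed if needed
    "minimize_travel_time": 1.0,
}

def maneuver_cost(violations, travel_time):
    """Total cost = travel time plus weighted penalties for each violated rule."""
    cost = travel_time * RULE_WEIGHTS["minimize_travel_time"]
    for rule in violations:
        cost += RULE_WEIGHTS[rule]
    return cost

def choose_maneuver(options):
    """Pick the candidate maneuver with the lowest total cost."""
    return min(options, key=lambda m: maneuver_cost(m["violations"], m["time"]))

# A construction site blocks the right lane: staying put would mean a
# collision, so briefly leaving the lane is by far the cheaper violation.
options = [
    {"name": "stay_in_lane", "violations": ["avoid_collision"], "time": 10},
    {"name": "change_lane", "violations": ["stay_in_right_lane"], "time": 12},
]
best = choose_maneuver(options)
```

Because the safety weight dwarfs the others, the planner departs from the default lane rather than risk the collision—mirroring the construction-site example above.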
Similarly, for Araki to build the robot swarm Rus asked for, he had to rethink both the physical robots and their control systems. He swapped the legs of the picobug for more reliable, easier-to-control wheels. Next, he had to figure out how his robots would find their way around a simulated urban environment. He’d never designed this kind of control system—his background is in mechanical engineering—but Rus expects her students to be versatile. So he adapted an algorithm designed to help swarms of robots collectively plan fast and collision-free routes. Since his robots could fly, Araki modified the algorithm so it would extend to vehicles moving above the pavement, too, and then optimized it for efficiency so the drones wouldn’t run out of battery power quickly. So far, he has built eight of them, and with the help of Rus and other lab members, he demonstrated that the flying cars can zip around a nine-square-meter model city without colliding.
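The flavor of collision-free multi-robot planning described above can be sketched with a simple prioritized scheme: plan each robot's shortest path in turn, reserving every grid cell it occupies at each time step so later robots route around those reservations. This is a toy illustration under assumed simplifications (a small grid, vertex-at-a-time conflicts only, no swap-conflict or battery handling), not Araki's actual algorithm.

```python
# Hedged sketch of prioritized multi-robot planning on a grid.
# Each robot runs a breadth-first search in (position, time) space,
# skipping cells already reserved by higher-priority robots.
# Simplification: only cell-at-a-time conflicts are checked; a real
# planner would also forbid two robots swapping adjacent cells.
from collections import deque

MOVES = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # wait, or step in 4 directions

def plan_path(start, goal, size, reserved):
    """BFS over (position, time) states, avoiding reserved space-time cells."""
    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        pos, t, path = queue.popleft()
        if pos == goal:
            return path
        for dx, dy in MOVES:
            nxt = (pos[0] + dx, pos[1] + dy)
            state = (nxt, t + 1)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and state not in reserved and state not in seen):
                seen.add(state)
                queue.append((nxt, t + 1, path + [nxt]))
    return None  # no conflict-free path found

def plan_swarm(tasks, size):
    """Plan robots one by one, reserving each finished path in space-time."""
    reserved, paths = set(), []
    for start, goal in tasks:
        path = plan_path(start, goal, size, reserved)
        paths.append(path)
        for t, pos in enumerate(path):
            reserved.add((pos, t))
    return paths
```

For two robots heading in opposite directions down the same corridor, the second robot simply waits a step and then follows, so no two robots ever occupy the same cell at the same time.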
Communication is the other key to realizing Rus’s robot-filled future. If swarms of capable machines are going to be operating in the real world, whether they’re flying, driving, or rolling around a home, they are going to need to communicate more effectively with one another and with humans. With Rus’s help, DRL computer scientist Stephanie Gil developed algorithms that enable robots to sense variations in wireless signal strength, estimate where the signals might be better, and shift to those spots to improve their ability to communicate. This could be critical if you were to send a fleet of robots to the site of a natural disaster, Gil explains, since it would allow multiple machines to efficiently scour the space and quickly share critical information with each other and any human officials.
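The core loop Gil describes—sample the signal nearby, estimate where it is stronger, move there—amounts to hill-climbing on signal strength. The sketch below uses a toy free-space path-loss model with a known source position as a stand-in for real measurements; the step size and signal model are illustrative assumptions, not the DRL algorithms.

```python
# Hedged sketch: a robot hill-climbs toward stronger wireless signal.
# The signal model (log-distance falloff from a fixed source) is a toy
# stand-in for real RSSI measurements; all parameters are assumptions.
import math

def signal_strength(pos, source=(0.0, 0.0)):
    """Toy received-signal model: strength falls off with log-distance (dB-like)."""
    d = math.dist(pos, source)
    return -20.0 * math.log10(max(d, 0.1))

def step_toward_better_signal(pos, step=0.5):
    """Sample candidate positions nearby and move to the strongest one."""
    candidates = [pos] + [
        (pos[0] + dx * step, pos[1] + dy * step)
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
    ]
    return max(candidates, key=signal_strength)

# Starting far from the source, repeated greedy steps converge near it.
pos = (4.0, 3.0)
for _ in range(20):
    pos = step_toward_better_signal(pos)
```

In a disaster-response scenario, each robot in a fleet could run this loop independently, repositioning itself until the team regains reliable links to one another and to human responders.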
In learning more about how wireless signals propagate, Gil also found a way to guess the likely source of a given signal and whether it originated from a known entity, such as another robot in the room, or an unidentified party. “We can identify if there’s a spoofer or malicious agent adding information into the picture that’s not actually valid,” she says. If Rus’s vision comes to fruition, this could prove incredibly important. A hacker taking over your computer is threatening enough. Now imagine that individual grabbing the digital wheel of your self-driving car.
While they might seem like disparate projects to the outsider, the links between areas like robotic fish, security, autonomous cars, and drones are clear to Rus and her students. “You might think these things are disjointed, but there’s a bigger picture,” says Katzschmann. “If you want to make robots that can do things that everyone can make use of, you need to have innovations in a lot of different fields. Daniela has the vision that will eventually bring all these things together.”
That vision is what motivates her to finish a TEDx talk late one night and still meet with her lab members the next morning, or fly to China for 24 hours to discuss a potential research partnership, then resume her work the next day. There’s no room for rest when you’re building the future. “Twenty years ago, computation was reserved for the expert few, and now look at where we are today,” she says. “Computation has truly revolutionized how we work, how we live, and how we play, and I would like to see the same kind of impact from robots.”