Shivani Torres, 30, has always considered herself a maker. That hands-on approach led her to invent a process that can bore through rock using heat instead of mechanical methods like blasting or drilling. As cofounder and chief product officer at Petra, Torres led the research and development of a robot that harnesses the heat produced by a jet engine to pulverize rock.
The idea of noncontact drilling arose in the 1970s, when scientists began experimenting with nuclear reactions and plasma as alternatives to conventional drills. Neither idea proved viable. As a student at Stanford, Torres worked in the university’s metal shop, where she saw the effectiveness of small-scale torches powered by jet engines firsthand.
At Petra, she hypothesized that the same process could work on rock, which expands and eventually shatters when subjected to high heat (Torres likens this process to putting a cold glass dish into a very hot oven). With a jet engine and a custom afterburner she designed, Torres and her team created a highly efficient cutting torch that can bore through even the toughest rock with minimal impact on its surroundings.
Compared with conventional drilling, says Torres, “it’s safer, it’s more affordable, and because it can be fueled by biodiesel, it’s also more sustainable.”
Petra intends to market this technology as an integral part of the movement to move our nation’s outdated grid underground, where it will be less vulnerable to natural disasters and worsening storms.
When it comes to the rare earth elements that we’ll need to build more electric vehicles and electrify our transportation infrastructure, Earth’s resources are extremely limited. And extracting them takes a lot of energy and exacts a high environmental cost.
Some have begun looking beyond our home planet in the hopes of finding more of these elements. There’s even talk of setting up heavy industry in space. But doing any of that will require sending more astronauts to distant planets and asteroids. And rocket propellant for deep-space travel is currently prohibitively expensive.
Forrest Meyen, 34, has been passionate all his life about making the dream of more frequent crewed space missions a reality. A cofounder of the space tech company Lunar Outpost, he earned his doctoral degree from MIT in aeronautics and astronautics. Now his work could help make space exploration more affordable and give the space mining industry a boost.
For the past eight years, he’s been part of the team working on MOXIE, a device about the size of a toaster. MOXIE traveled to Mars aboard NASA’s Perseverance rover in 2021 and successfully converted samples of the Red Planet’s CO2-dominated atmosphere into oxygen.
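MOXIE produces oxygen by solid oxide electrolysis, splitting carbon dioxide into carbon monoxide and oxygen (net reaction: 2CO2 → 2CO + O2). As a rough illustration of the underlying chemistry, a quick stoichiometry check shows how much Martian atmosphere such a process must consume per kilogram of oxygen; the molar masses are standard values, and the per-kilogram framing is ours, not a figure from the mission.

```python
# Toy stoichiometry for MOXIE-style solid oxide electrolysis of Martian CO2.
# Net reaction: 2 CO2 -> 2 CO + O2. Illustrative only.

M_CO2 = 44.01  # molar mass of CO2, g/mol
M_O2 = 32.00   # molar mass of O2, g/mol

def co2_needed_per_kg_o2() -> float:
    """Mass of CO2 (kg) consumed per kilogram of O2 produced."""
    mol_o2 = 1000.0 / M_O2           # moles of O2 in 1 kg
    mol_co2 = 2 * mol_o2             # 2 mol CO2 consumed per mol O2
    return mol_co2 * M_CO2 / 1000.0  # convert grams back to kilograms

print(f"{co2_needed_per_kg_o2():.2f} kg of CO2 per kg of O2")
```

The ratio works out to about 2.75 kilograms of carbon dioxide for every kilogram of breathable (or burnable) oxygen, which is why the device must continuously pump in the thin Martian air.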
It was the first time a robotic system had ever harnessed the natural resources of another planet for potential human use, an important step toward human-led Mars missions.
The same system could someday be used to create rocket propellant for missions returning from Mars, which could save NASA billions of dollars. “It makes travel to Mars and back feasible,” says Meyen, though there are still many other challenges to be worked out—such as how to protect astronauts from the sun’s powerful radiation, and whether it’s possible to grow crops in the planet’s soil.
Meyen is now concentrating on leading the first moon rover mission with NASA to the lunar south pole, which has never been explored before. That is scheduled to take place later this year. He and his team hope the rover will detect water, which could be used to create rocket propellant, and collect lunar soil samples.
Neural networks often make decisions that even the designers of the systems don’t fully understand. This “explainability” problem makes it harder to fix flaws such as biased or inaccurate results. Daniel Omeiza, 31, is working to solve the explainability problem in self-driving cars; he has developed techniques that can provide visual and language-based explanations for engineers and ordinary human drivers alike about why a car reacts in a specific way.
His most recent work automatically generates commentary about a car’s actions—including auditory explanations, driving instructions, and a visual graph—by using a decision-tree technique that can parse data from the car’s perception and decision-planning systems. Omeiza’s model, which is flexible enough to work with different autonomous cars, can either use the car’s previously recorded data or process information about the actions of a vehicle during operation to generate likely explanations. He is currently working on integrating traffic laws into his system.
Omeiza is motivated by a desire to improve the safety of self-driving cars and help AI engineers code systems more efficiently. He hopes that his model increases consumer trust in AI technology. “Deep-learning models sometimes alienate people who need explanations to trust the system,” he says.
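Omeiza’s actual models are trained on real driving data, but the core intuition is easy to see in miniature: when an action is chosen by branching on perception features, each branch taken doubles as a human-readable reason. The sketch below is a hand-written toy, not his system, and the feature names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    # Hypothetical perception features, for illustration only.
    light: str             # "red", "yellow", or "green"
    pedestrian_ahead: bool
    lead_car_braking: bool

def decide_and_explain(p: Perception) -> tuple[str, str]:
    """Walk a tiny hand-written decision tree, returning (action, explanation).

    The path taken through the tree *is* the explanation: each test that
    fires maps directly to a natural-language reason for the action.
    """
    if p.pedestrian_ahead:
        return "stop", "Stopping because a pedestrian is in the vehicle's path."
    if p.light == "red":
        return "stop", "Stopping because the traffic light is red."
    if p.lead_car_braking:
        return "slow", "Slowing because the vehicle ahead is braking."
    return "proceed", "Proceeding: the way ahead is clear."

action, why = decide_and_explain(Perception("green", False, True))
print(action, "-", why)
```

A learned decision tree fitted to logged driving data would play the same role, with the advantage that its branches can be extracted and verbalized automatically rather than written by hand.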
A computer science researcher at New York University, Lerrel Pinto, 31, wants to see robots in the home that do a lot more than vacuum: “How do we actually create robots that can be a more integral part of our lives, doing chores, doing elder care or rehabilitation—you know, just being there when we need them?”
The problem is that training multiskilled robots requires lots of data. Pinto’s solution is to find novel ways to collect that data—in particular, getting robots to collect it as they learn, an approach called self-supervised learning (a technique also championed by Meta’s chief AI scientist and Pinto’s NYU colleague Yann LeCun, among others).
The idea of a household robot that can make coffee or wash dishes is decades old. But such machines remain the stuff of science fiction. Recent leaps forward in other areas of AI, especially large language models, made use of enormous data sets scraped from the internet. You can’t do that with robots, says Pinto.
Pinto hit one of his first milestones back in 2016, when he created the world’s largest robotics data set at the time by getting robots to create and label their own training data and running them 24/7 without human supervision.
He and his colleagues have since developed learning algorithms that allow a robot to improve as it fails. A robot arm might fail many times to grasp an object, but the data from those attempts can be used to train a model that succeeds. The team has demonstrated this approach with both a robot arm and a drone, turning each dropped object or collision into a hard-won lesson.
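The self-supervised idea is that the robot labels its own experience: a gripper sensor (or a collision detector) tells it whether each attempt worked, so even failures become training examples with no human annotation. The toy below is our own minimal sketch of that data-collection loop; the grasp simulator and success criterion are invented for illustration.

```python
import random

def attempt_grasp(x: float, y: float) -> bool:
    # Stand-in for a real grasp attempt: succeeds only near a
    # (hidden) object at (0.6, 0.4). A real robot would get this
    # signal from gripper force feedback, not a hard-coded check.
    return abs(x - 0.6) < 0.1 and abs(y - 0.4) < 0.1

def collect_self_supervised_data(n_attempts: int, seed: int = 0):
    """Try random grasp points; the robot labels each attempt itself."""
    rng = random.Random(seed)
    dataset = []
    for _ in range(n_attempts):
        x, y = rng.random(), rng.random()
        success = attempt_grasp(x, y)   # sensor feedback supplies the label
        dataset.append(((x, y), success))
    return dataset

data = collect_self_supervised_data(1000)
failures = sum(1 for _, ok in data if not ok)
print(f"{failures} failed attempts out of {len(data)} -- all usable as training data")
```

Run unattended, a loop like this is how a robot arm can generate thousands of labeled examples overnight; the failures, which dominate early on, are exactly what a grasp-prediction model needs to learn what not to do.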
It may seem like the stuff of science fiction: robots made from living tissue. But Victoria Webster-Wood, 33, has built bots from a range of biological materials, with the aim of making robotics more environmentally friendly.
Although robots are now deployed in a range of natural environments—to monitor oceans, for example, or help harvest crops—they’re often made from hazardous metals. Past attempts to use softer, biodegradable materials often fell short: a major challenge is getting soft robotic legs and arms to attach to harder bodies. “At that interface, you’re likely to get tears or defects,” says Webster-Wood, a professor of mechanical engineering at Carnegie Mellon University. “The robot can fall apart.”
To combat this, Webster-Wood took inspiration from tendons, the tissues that attach muscle to bone. Using a novel 3D-print head, her team built biologically derived actuators—the components that make a robot move—by embedding stronger fibers such as collagen into soft threads made from materials like seaweed. These tendon-like actuators can then be attached to soft robot limbs and rigid bodies, with less chance of mechanical breakdown.
Webster-Wood’s influence in the emerging field of biohybrid robotics extends well beyond this innovation: her many other feats include building bots with legs made from the muscle of sea slugs and modeling that animal’s nervous system to study how robots derived from living materials might operate without external controls. Her goal is to make robots a bit more like the animals they’re frequently designed to emulate.
For the past 30 years, robots have played an increasingly important role in medicine. Used largely in operating theaters today, these programmable devices allow for greater precision, smaller incisions, and faster healing times. But because they consist primarily of mechanical arms connected to cameras and surgical implements, they can only perform certain procedures.
Renee Zhao, 33, an assistant professor of mechanical engineering at Stanford University, wants to change that. Her lab has developed miniature robots that mimic more flexible movements. Inspired by the ancient art of origami, Zhao’s millimeter-scale robots have the strength and flexibility of an octopus arm or an inchworm.
“Although we have bones, most of the human body is actually based on soft systems. Biomedical devices need to be compatible with those systems,” says Zhao. “It made sense to find inspiration and replicate what is in nature, because nature is already optimized.”
Using a pattern first developed by Biruta Kresling, an architect who has investigated folding structures, Zhao created tiny cylindrical robots that can twist and buckle while maintaining their stability. Tiny grains of magnetic material embedded in the robot allow Zhao to pilot the device using magnetic fields.
The size and dexterity of these bots make them appealing tools for breaking up clots, delivering drugs to specific areas, or providing images of the body’s inner workings. Zhao’s lab is now experimenting with biodegradable materials, which would also allow the robots to break down safely in the body after completing their tasks.
“Going forward, we’re going to be working closely with doctors to identify real clinical needs,” says Zhao. “We don’t want to solve an artificial or imaginary problem. We want to use our expertise to help doctors tackle specific challenges.”