Astronauts are among the most physiologically and psychologically fit individuals in the world. They are trained to keep calm even in life-threatening moments and can work with extreme focus over long periods of time.
Nevertheless, living, working, and sleeping in confined spaces next to the same people for months or years at a time would be stressful for even the toughest recruit. Astronauts also have to deal with the unique physical strains of space travel—including the effects of microgravity, which whittles away at bone and muscle mass, creates fluid shifts that put painful pressure on the head and other parts of the body, and weakens the immune system.
An AI assistant that’s able to intuit human emotion and respond with empathy could be exactly what’s needed, particularly on future missions to Mars and beyond. The idea is that it could anticipate the needs of the crew and intervene if their mental health seems at risk.
Thanks to Stanley Kubrick and HAL 9000, the idea of AI in space has unfortunate connotations. But NASA already works with many different kinds of digital assistants. For example, astronauts on the International Space Station recently greeted a new version of IBM’s medicine-ball-size emotional robot, called CIMON (for “crew interactive mobile companion”), to assist them in their various tasks and experiments for three years. (The results so far are mixed.)
These current robots are stunted by a lack of emotional intelligence, says Tom Soderstrom, CTO at NASA’s Jet Propulsion Laboratory. That’s why JPL is now working with Australian tech firm Akin to develop an AI that could one day provide emotional support for astronauts on deep-space missions. “That’s the piece that excites me the most about Akin,” he says. “We want to have an intelligent assistant that can control the spacecraft’s temperature and direction, figure out any technical problems—that is also watching human behavior.”
Akin CEO Liesl Yearsley says the goal is not for the AI to simply run tasks and set reminders like Alexa or Siri, but to act as a companion that provides empathetic support. “Imagine a robot that’s able to think, ‘Mary’s having a bit of an off day today—I’ve noticed she seems a bit curt with her colleagues,’” she says. “The AI might then decide it’s prudent to make sure Mary’s on top of her agenda for the day, and find a way to be a little more nurturing and encouraging to mitigate some of the stress. Those are the sort of deeper layers we want to be able to process.”
Keeping track of a crew’s mental and emotional health isn’t really a problem for NASA today. Astronauts on the ISS regularly talk to psychiatrists on the ground. NASA ensures that doctors are readily available to address any serious signs of distress. But much of this system is possible only because the astronauts are in low Earth orbit, easily accessible to mission control. In deep space, you would have to deal with lags in communication that could stretch for hours. Smaller agencies or private companies might not have mental health experts on call to deal with emergencies. An onboard emotional AI might be better equipped to spot problems and triage them as soon as they come up.
The Akin partnership utilizes JPL’s new Open Source Rover project, which makes publicly available the basic designs of actual Mars rovers like Curiosity. Interested students and young engineers can learn to build their own six-wheel rovers for about $2,500. Over the past year, Yearsley and Soderstrom have been using Open Source Rover to test and develop Akin’s emotionally intelligent AI. The result is a rover dubbed Henry the Helper. Currently puttering around the JPL grounds and conversing with employees and site visitors, it demonstrates the AI’s ability to interact with humans and recognize human emotion.
Henry, like many other AI systems, uses deep learning to recognize patterns in human speech and facial expressions as they relate to emotional intent. It is then programmed to respond to those cues in appropriate, empathetic ways—such as offering directions or information to any tourists who seem lost or confused.
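Akin's models are not public, so as a rough illustration of the pipeline described above, here is a minimal sketch in Python. The perception step (which in a real system would be a deep-learning classifier over face and voice data) is stubbed out; the names `Observation`, `infer_emotion`, and `respond` are invented for this example, and the cue-to-response rules are hypothetical.

```python
# Illustrative sketch only; Akin's actual system is not public.
# A real robot would infer emotional cues with deep learning over
# camera and microphone input. Here that step is a simple stub, so
# the example can focus on mapping inferred emotion to a response.

from dataclasses import dataclass

@dataclass
class Observation:
    expression: str   # e.g. "neutral", "frowning", "smiling"
    speech_tone: str  # e.g. "flat", "curt", "upbeat"

def infer_emotion(obs: Observation) -> str:
    """Stand-in for a trained classifier over face/voice cues."""
    if obs.expression == "frowning" or obs.speech_tone == "curt":
        return "stressed"
    if obs.expression == "smiling" and obs.speech_tone == "upbeat":
        return "content"
    return "neutral"

def respond(emotion: str) -> str:
    """Map an inferred emotional state to an empathetic action."""
    return {
        "stressed": "Offer help and lighten today's task list.",
        "content": "Continue normal assistance.",
        "neutral": "Check in casually.",
    }[emotion]

print(respond(infer_emotion(Observation("frowning", "flat"))))
```

The point of the sketch is the separation of concerns: recognizing physical cues and deciding how to respond empathetically are distinct steps, which is exactly where Barrett's critique, discussed below, applies to the first step.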
Later this year, the company will roll out two more prototypes: Eva the Explorer and Anna the Assistant. Eva is essentially a more autonomous Henry, outfitted with additional sensors that will allow the AI to pick up on subtle speech and facial-expression cues as it participates in more complex conversations. Anna will be more like an autonomous lab assistant that anticipates the needs of JPL employees—taking notes, answering questions, handling objects and tools, and troubleshooting problems.
And in just a few years, Akin hopes to see Fiona the Future come to life. Fiona wouldn’t even necessarily be a physical robot, but rather a cross-platform system running on a spacecraft like Gateway (NASA’s upcoming lunar space station), or a habitat on the moon or Mars. There’s no commitment yet for this to be part of Artemis or Gateway, but the company is actively working with other players in the space industry to ink some kind of agreement. Yearsley says any hope of making Fiona a part of Gateway or Artemis means Akin must have reliable prototypes out by September. Should that fail, Akin will see if its AI can be tested in more isolated settings, like Antarctica, or in different contexts, such as assisting elderly or disabled people.
To make the AI work in the isolation of space, the system will rely on edge computing—moving computation and data storage away from large centers and relying more heavily on local storage and caching, with vastly reduced energy footprints. “There is no more literal edge than space,” says Soderstrom.
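The edge-computing idea above—serve every request locally, and sync with the ground only when a link is available—can be sketched in a few lines. This is not Akin's or JPL's design; the class and method names (`EdgeStore`, `flush`) are invented for illustration.

```python
# Illustrative sketch of edge-style, local-first storage: reads and
# writes never wait on a round trip to Earth. Ground sync is deferred
# until a communication window opens. Names are hypothetical.

class EdgeStore:
    def __init__(self):
        self.local = {}       # local cache: answers immediately
        self.sync_queue = []  # updates to send home when a link exists

    def put(self, key, value):
        self.local[key] = value               # write locally first
        self.sync_queue.append((key, value))  # defer the ground sync

    def get(self, key, default=None):
        # Reads are served entirely from local storage.
        return self.local.get(key, default)

    def flush(self, uplink):
        # Called only when a communication window to Earth opens.
        while self.sync_queue:
            uplink(self.sync_queue.pop(0))

store = EdgeStore()
store.put("crew_mood:mary", "stressed")
print(store.get("crew_mood:mary"))  # answered locally, no Earth round trip
```

For a Mars mission, where one-way light delay alone can exceed twenty minutes, this local-first pattern is less an optimization than a requirement.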
Akin’s biggest obstacles are those that plague the entire field of emotional AI. Lisa Feldman Barrett, a psychologist at Northeastern University who specializes in human emotion, has previously pointed out that the way most tech firms train AI to recognize human emotions is deeply flawed. “Systems don’t recognize psychological meaning,” she says. “They recognize physical movements and changes, and they infer psychological meaning.” Those are certainly not the same thing.
But a spacecraft, it turns out, might actually be an ideal environment for training and deploying an emotionally intelligent AI. Since the technology would be interacting with just the small group of people onboard, says Barrett, it would be able to learn each individual’s “vocabulary of facial expressions” and how they manifest in the face, body, and voice. It could come to understand how these expressions change in the context and environment of a space mission, in social settings involving other astronauts. “Trying to do this in a closed environment, for one or a few individuals, might actually be a more approachable problem than trying to do so in an open environment,” she says.