
MIT Mini-Satellites Ready To Fly

Small satellites that can orient themselves and link together could lead to better space exploration.

On May 18, after a delay of more than three years, astronauts aboard the International Space Station (ISS) will take a soccer-ball-sized experimental satellite designed by MIT students and release it in midair inside the station. Then they’ll watch as the satellite uses its onboard software to hold its position and orientation, and later tries to find its way to small beacons attached to the station walls.

MIT students testing the SPHERES satellite prototypes aboard NASA’s zero-gravity aircraft. In those tests, air turbulence posed some difficulties, as the plane was jostled around, but that won’t be a problem in the upcoming space station tests. (Photo courtesy of NASA.)

The colorful satellites, called SPHERES, for Synchronized Position Hold Engage and Reorient Experimental Satellites, were first prototyped in a 1999 undergraduate MIT course taught by aero-astro professor David Miller, who continues to run the project, and then refined as part of a graduate student project. The final flight hardware – the actual SPHERES used on the ISS – was built by Cambridge-based Payload Systems under the direction of MIT’s Space Systems Lab. Each satellite has a plastic shell of a different color, making it easier to keep track of them when they fly together.


The first satellite arrived on the space station last week, delivered by a Russian Progress resupply vessel. Two more are scheduled to be brought on the next two U.S. space shuttle missions (assuming these get off the ground). The satellites were ready to be delivered to the ISS back in 2003, just before the Columbia accident shut down the shuttle program.

Along with the satellites, the system involves a series of beacons, each about the size of a TV remote control, that will be attached to the space station walls at various points. These will emit ultrasound signals to provide a set of reference points, so the satellites can determine their exact positions and which way they are pointing. In actual free-flying satellites, these would be replaced by GPS signals.

This month’s experiment is all about software: developing and testing systems for operating and coordinating future autonomous satellites and spacecraft. “The relevance is in the algorithms,” explains Jonathan How, a professor in MIT’s aero-astro department who is assisting in the program. By operating inside the space station, with astronauts present to monitor activities yet with real zero gravity (technically, microgravity), a lot of testing can be done at low cost compared with using free-flying satellites in space. “It’s a way of buying down the risk,” says How.

The algorithms in each mini-satellite constantly compare the arrival times of the signals from the different beacons to compute the distance to each beacon, and from those distances derive the satellite’s exact position. (The principle is the same as counting the seconds between a lightning flash and its thunderclap to figure out how far away a storm is.) Ten times per second, the software must process the timing of each pulse, work out where the satellite sits relative to the beacons, and fire the satellite’s reaction-control jets to drive it toward its intended position. Making sure the software doesn’t produce unexpected oscillations or movements is one of the test’s goals.
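The distance-to-position step can be sketched in a few lines. This is a minimal illustration of time-of-flight multilateration, not the actual SPHERES flight code: the beacon coordinates and the least-squares approach below are assumptions chosen for the example.

```python
import numpy as np

# Hypothetical beacon positions on the station walls (meters) --
# illustrative values, not the real SPHERES beacon layout.
beacons = np.array([
    [0.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
    [0.0, 2.0, 0.0],
    [0.0, 0.0, 2.0],
])

SPEED_OF_SOUND = 343.0  # m/s in air, approximately

def position_from_arrival_times(times):
    """Estimate position from ultrasound time of flight to each beacon.

    Each arrival time gives a range (a sphere around that beacon).
    Subtracting the first sphere equation from the others cancels the
    quadratic term, leaving a linear system A x = b solvable by
    least squares.
    """
    ranges = SPEED_OF_SOUND * np.asarray(times)
    p0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(p0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With four or more beacons in general position, the linear system is well determined; extra beacons simply over-determine it, and least squares averages out timing noise.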

This is only the first phase, however, of a series of tests planned over several years, in which the researchers will attempt to fly three of the satellites in tight formation inside the ISS. Each mini-satellite controls its position and orientation through a combination of internal gyroscopes and puffs of carbon dioxide from its reaction jets.

Other groups, including teams at Stanford and NASA’s Marshall Space Flight Center, have tested formation flying and automated docking in other ways: on flat air tables on the ground, which allow only two-dimensional movement; underwater, with the complication of water’s viscosity; or with blimps, which allow 3-D movement but suffer from air turbulence. All of these environments differ markedly from those faced by actual satellites.

The MIT group’s initial tests on NASA’s zero-gravity airplane (colorfully nicknamed the “vomit comet”) in 2000 and 2001, and now on the space station, are the only U.S. tests of such multiple-satellite systems performed so far in a true weightless environment. Japan conducted a successful multiple-satellite test several years ago, and NASA attempted an automated docking with a military satellite in 2005, but that single-shot test ended in a collision.

“On the International Space Station, we can afford to be more aggressive in what we test [than is possible with a real satellite],” says Simon Nolet, an MIT graduate student who’s been working on the project. “If we fly out of control, the astronauts can just grab it and start again. That makes it possible to test algorithms we would be scared to fly in a real satellite.”

The research has potential applications for both civilian and military space missions, which is why both NASA and DARPA have provided funding. In the near term, the software developed in these tests could lead to fully automated satellite docking, which could be crucial for missions such as a Mars sample return, where a return spacecraft will have to rendezvous in Mars orbit with a sample-carrying craft sent up from the surface. Such docking capabilities could also enable the in-orbit assembly of rockets too big to be launched from Earth in one piece – for a manned Mars mission, for example.

The most ambitious application would be for projects such as NASA’s Terrestrial Planet Finder. This is to be a constellation of separate spacecraft, each carrying a telescope, whose light beams will be combined in a central optical detector – providing resolution equivalent to that of a single huge telescope. This would make it possible for the first time to get direct images of Earth-sized planets around other stars, and even to perform spectrographic measurements that could detect signs of life. But doing so requires extraordinary precision in the alignment and pointing of the separate craft – a feat that has to be thoroughly tested before investing in such a multibillion-dollar system.

“We don’t want it to just work,” says How, “we want it to work robustly.”
