How to Prevent Columbia-like Space Shuttle Disasters

The techniques developed for analyzing the data from particle accelerators could help spot debris impacts during space shuttle launches.

When and if the Large Hadron Collider finally rumbles into action, it will produce a firehose of data like nothing physicists have ever seen. This data will consist of the tracks of debris from roughly a billion collisions per second, as measured by particle detectors clustered around the collision sites.

That’s far too much data to analyse in detail, so most of it is simply discarded by a filtering system that looks for trajectories of interest and stores only those. That process should leave roughly a hundred events per second for later detailed analysis. And all this must be done in real time, since any delay would rapidly overwhelm whatever buffering capacity the accelerator has.
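The idea behind this kind of trigger is simple: apply a cheap test to every event as it streams past, and keep only the rare ones that pass. A minimal sketch in Python, with an entirely hypothetical "interesting" condition (a toy energy threshold tuned to pass about one event in ten thousand):

```python
import random

def interesting(event):
    # Hypothetical trigger condition: keep only very-high-energy events.
    # With energies uniform on [0, 100], this passes ~0.01% of them.
    return event["energy"] > 99.99

def trigger_filter(event_stream):
    """Yield only the events worth storing for detailed analysis."""
    for event in event_stream:
        if interesting(event):
            yield event

# Toy stream: a million random "collision events".
stream = ({"energy": random.uniform(0, 100)} for _ in range(1_000_000))
kept = sum(1 for _ in trigger_filter(stream))
print(f"kept {kept} of 1,000,000 events")
```

Because the filter is a generator over a stream, nothing is buffered: each event is examined once and either passed on or dropped, which is what makes real-time operation possible.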

So what’s all this got to do with the space shuttle? It turns out that a group of engineers at NASA wants to use a similar mechanism to analyse the trajectories of debris around the space shuttle as it takes off. Their goal is to use these trajectories to work out the mass and density of the debris particles and to trace their origin. With the right kind of analysis, it ought to be possible to flag potentially damaging trajectories as they occur.

There’s no need to spell out why that’s important, but here goes. In 2003, a debris impact during launch damaged the space shuttle Columbia so badly that it was unable to survive re-entry. A better analysis of that incident might have identified the extent of the damage and so prevented the loss of that shuttle.

Philip Metzger at the Kennedy Space Center and buddies have built the first stage of a filtering system that could do that job in real time, using a pair of cameras that take high-resolution footage of the launch from different angles. Together, this footage gives a 3D view of the launch, allowing a computer to reconstruct the trajectory of any debris. That’s not rocket science but, strangely, it has never been used to analyse launches.
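The geometric core of that reconstruction is triangulation: each camera defines a line of sight to the debris, and the 3D position is the point where the two lines (nearly) cross. A minimal sketch, using the standard closest-approach midpoint between two rays; the camera positions and sight lines here are invented for illustration, not taken from Metzger’s setup:

```python
# Small vector helpers (pure Python, 3-tuples).
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(v, s): return tuple(x * s for x in v)
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def triangulate(p1, d1, p2, d2):
    """Midpoint of closest approach between rays p1 + t*d1 and p2 + s*d2."""
    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # ~0 when the rays are parallel
    t = (b * e - c * d) / denom    # parameter along ray 1
    s = (a * e - b * d) / denom    # parameter along ray 2
    q1 = add(p1, scale(d1, t))     # closest point on ray 1
    q2 = add(p2, scale(d2, s))     # closest point on ray 2
    return scale(add(q1, q2), 0.5)

# Two cameras 100 m apart, both sighting a debris particle at (0, 50, 30).
cam1, cam2 = (-50.0, 0.0, 0.0), (50.0, 0.0, 0.0)
target = (0.0, 50.0, 30.0)
point = triangulate(cam1, sub(target, cam1), cam2, sub(target, cam2))
print(point)  # ≈ (0.0, 50.0, 30.0)
```

Run frame by frame over synchronised footage, a series of such points traces the debris trajectory in three dimensions.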

Metzger and co have put their idea through its paces by analysing a piece of debris thrown up during the launch of STS-124, in May 2008. At the time, NASA engineers worried that this debris was a brick from a flame trench beneath the shuttle. A brick hitting the shuttle during launch could have caused significant damage.

The new technique, however, shows that the debris particle is low-density foam, almost certainly from the solid rocket booster throat plug. This would have posed little threat to the shuttle.

Of course, coming to that conclusion a year later is of little use to the shuttle crew, who need to assess the condition of their vehicle almost immediately, and certainly before they embark on re-entry.

That’s where the LHC-like filtering mechanism comes in. Metzger et al say the data is easy to collect using their two cameras; the trouble is combing through it for interesting and useful insights. An LHC-like filtering system would simply comb through it during the launch and filter out only those debris tracks that are dense and massive enough to pose a threat.
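How might a filter tell dense masonry from harmless foam using only a track? One physical handle is drag: at the same speed, a dense fragment (high ballistic coefficient) decelerates far less than foam. A hedged sketch of that idea, where the drag formula is standard physics but the numbers and the threshold are invented for illustration:

```python
RHO_AIR = 1.2  # kg/m^3, approximate sea-level air density

def ballistic_coefficient(speed, deceleration):
    """beta = m / (Cd * A), inferred from drag deceleration a = rho * v^2 / (2 * beta)."""
    return RHO_AIR * speed ** 2 / (2.0 * deceleration)

def is_threat(speed, deceleration, beta_threshold=10.0):
    # Hypothetical threshold in kg/m^2: foam fragments sit well below it,
    # brick-like debris well above it.
    return ballistic_coefficient(speed, deceleration) > beta_threshold

# Foam chunk: 60 m/s but decelerating hard (400 m/s^2) -> beta ≈ 5.4, no threat.
print(is_threat(60.0, 400.0))  # False
# Brick-like fragment: same speed, barely slowing (20 m/s^2) -> beta ≈ 108, flag it.
print(is_threat(60.0, 20.0))   # True
```

Speed and deceleration come straight from the reconstructed 3D track, so a test like this could in principle run on each track as the launch unfolds, passing only the dangerous ones to human analysts.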

That could save lives. And although the shuttle is due to be retired by this time next year, the process could easily be applied to future rocket launches anywhere in the world.

Ref: arxiv.org/abs/0910.4357: Photogrammetry and Ballistic Analysis of a High-Flying Projectile in the STS-124 Space Shuttle Launch
