
Avoiding the Rough Spots

Although today’s commercial aircraft are more than tough enough to withstand being bounced about by air pockets, sometimes passengers aren’t: Turbulence in otherwise calm stretches of air is the leading cause of in-flight injuries. Seeing turbulence ahead of time could save airlines millions of dollars a year, by averting in-flight injuries and also by saving fuel wasted in churning through bumpy air. The National Aeronautics and Space Administration (NASA) is testing a sensor device that could do just that.

The device, designed and built for NASA by Coherent Technologies of Lafayette, Colo., uses LiDAR technology. LiDAR is the optical analog of radar: Instead of radio waves, pulses of infrared light are transmitted, some of which bounce off particles and back to a sensor. NASA’s sensor detects the changing velocities of tiny particles in turbulent air, creating a picture of the rough air ahead.
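The velocity measurement works on the Doppler principle: light reflected off a particle moving along the beam comes back frequency-shifted in proportion to the particle's speed. The sketch below illustrates the relationship; the 1.55-micron wavelength is a common eye-safe infrared lidar choice, assumed here for illustration rather than taken from NASA's specifications.

```python
# Illustrative Doppler lidar relationship (assumed parameters, not
# NASA's actual system): a pulse at wavelength lam reflecting off a
# particle moving at radial velocity v returns shifted in frequency
# by delta_f = 2 * v / lam.
LAM = 1.55e-6  # meters; a typical eye-safe infrared lidar wavelength (assumption)

def doppler_shift(v_mps: float, lam_m: float = LAM) -> float:
    """Frequency shift of the return from a particle moving at v along the beam."""
    return 2.0 * v_mps / lam_m

def radial_velocity(delta_f_hz: float, lam_m: float = LAM) -> float:
    """Recover line-of-sight particle velocity from a measured Doppler shift."""
    return delta_f_hz * lam_m / 2.0

# A 10 m/s gust along the beam shifts the return by about 12.9 MHz.
shift_hz = doppler_shift(10.0)
```

Scanning the beam and mapping these velocities across many range bins is what builds up the "picture" of rough air the article describes.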

The sensor currently "sees" only straight ahead, but the goal is to scan horizontally and vertically to build a three-dimensional picture of the turbulence. At this point, the laser-based sensor can see approximately four miles ahead, which for a commercial jet translates to a warning time of 10 to 30 seconds.
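The warning time is just the sensor's range divided by the aircraft's speed. A minimal sketch of that arithmetic, assuming a typical jet cruise speed of about 230 m/s (roughly 515 mph; not a figure from the article):

```python
# Warning time from sensor range and aircraft speed.
# The ~230 m/s cruise speed is an assumption for illustration.
MILE_M = 1609.344  # meters per statute mile

def warning_time_s(range_miles: float, speed_mps: float) -> float:
    """Seconds of advance warning for turbulence detected at the given range."""
    return range_miles * MILE_M / speed_mps

def range_needed_miles(warning_s: float, speed_mps: float) -> float:
    """Sensor range required to deliver a desired warning time."""
    return warning_s * speed_mps / MILE_M

# A 4-mile range at ~230 m/s gives roughly 28 seconds of warning,
# consistent with the 10-to-30-second figure quoted above.
t = warning_time_s(4.0, 230.0)

# The five minutes of warning the airlines would like implies a
# range on the order of 40-plus miles at that speed.
r = range_needed_miles(300.0, 230.0)
```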

“They’d like five minutes,” says Rod Bogue, project manager at NASA’s Dryden Flight Research Center in Edwards, Calif. “But 10 to 30 seconds is better than nothing.” Just ask anybody who’s been through turbulence lately.
