
Mapping the Planets

A new LIDAR system could provide more data on distant planets.

Researchers at the Rochester Institute of Technology (RIT) and MIT are developing a new generation of LIDAR (light detection and ranging) technology to map planetary bodies in more detail than ever before. These maps could further space exploration by providing more data about a planet's geography and topography, helping mission planners select landing sites for future missions. The advanced LIDAR system could also be used to analyze the atmospheres of other planets, yielding critical information about biohazards, wind speed, and temperature.

Testing in a vacuum: This helium-cooled vacuum device, known as a Dewar test system, will be used to determine the effectiveness of a new LIDAR system for mapping planets. The LIDAR sensors are placed in the Dewar to create a carefully controlled environment in which their efficacy and accuracy can be evaluated.

LIDAR works on a principle similar to radar's, but it uses lasers rather than radio waves. A laser pulse is fired at an object, and the time delay between the pulse and its reflection is measured to accurately gauge the distance. The advantages of LIDAR over radar are twofold: LIDAR can be used to measure smaller objects, and it works on a greater variety of materials.
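At its core, LIDAR ranging is a time-of-flight calculation. The sketch below is a minimal illustration of that principle, not code from the RIT or MIT teams; it simply converts a measured round-trip delay into a one-way distance.

```python
# Minimal illustration of LIDAR time-of-flight ranging
# (not the researchers' code; just the principle described above).

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_delay(round_trip_seconds: float) -> float:
    """Return the one-way distance to a target, given the delay between
    emitting a laser pulse and detecting its reflection."""
    # The pulse travels out to the target and back, so divide by two.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a reflection arriving 2 microseconds after the pulse
# corresponds to a target roughly 300 meters away.
print(distance_from_delay(2e-6))  # ~299.79 meters
```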

Professor Donald Figer and his team at the Rochester Imaging Detector Laboratory (RIDL), along with researchers at MIT’s Lincoln Laboratory, have been awarded $547,000 in funding from NASA toward developing new light sensors. If their work is successful, the researchers could be awarded an additional $589,000 for fabrication and testing.

The current LIDAR technology used by NASA has trouble distinguishing between objects with a height difference of less than one meter. With the new sensors, objects with differences down to one centimeter should be distinguishable.

The project focuses on the development of a low-power, continuous two-dimensional sensor array. Once the array is completed, the researchers hope that it will be able to capture data from a wide laser scan, in contrast to the current array, which gathers measurements point by point. The pixel resolution of the scans also increases greatly: each pixel shrinks from kilometers on a side to a few feet on a side. A prototype currently exists at Lincoln Laboratory; RIDL will soon begin evaluating the device while concurrently improving the design.

Right now, NASA is working on a different method of improving LIDAR for the Lunar Orbiter Laser Altimeter (LOLA). LOLA is designed for the Lunar Reconnaissance Orbiter, which is scheduled for launch no earlier than November 2008, and it will provide a detailed topographic map of the moon's surface to support surface mobility and exploration on lunar missions. Unlike Figer's system, LOLA improves resolution by using five lasers and five receivers working simultaneously. Figer's system uses one laser, but a beam expander will split the beam, sending it off at a number of angles. Once the constituent beams are reflected off the objects being measured, they are recombined and then analyzed with the new sensors.

The increase in resolution comes not from an improvement in the laser itself but from increased scanning speed. Previously, LIDAR could scan only point by point, so the time required to generate a higher-resolution map was often prohibitive. With the new LIDAR's ability to split the laser beam and scan large areas of landscape at once, that time is significantly reduced. "It would be impossible to take the single pixel maps to one foot and cover the planet," says Figer. "But if you have an imager, now things become more possible."

The improvement in measuring depth is attributable to a new generation of high-speed circuitry that is able to differentiate two signals arriving only 100 picoseconds apart, which equates to a centimeter in height.
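A quick back-of-the-envelope calculation (ours, not the researchers') shows why 100 picoseconds corresponds to roughly centimeter-scale height differences: light covers about three centimeters in that time, and the round trip halves the resolvable difference.

```python
# Rough check of the depth resolution implied by 100-picosecond timing.
# An illustrative estimate only, not NASA's or RIDL's analysis.

SPEED_OF_LIGHT = 299_792_458.0   # meters per second
TIMING_RESOLUTION = 100e-12      # 100 picoseconds

# The extra height is traversed twice (down and back), so halve the result.
depth_resolution = SPEED_OF_LIGHT * TIMING_RESOLUTION / 2.0
print(f"{depth_resolution * 100:.1f} cm")  # ~1.5 cm, on the order of a centimeter
```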

Figer’s faster system might also be better at mapping objects in motion. Due to the slower speed of the current technology, moving objects can appear multiple times in multiple scans, which makes it difficult to accurately reproduce a single point in time.

While the system is primarily designed for extraplanetary missions, Figer believes that it could be used in other ways. “Imagine,” he says, “that you have this 3-D, 180-degree fish-eye system … in every city scanning continuously for biohazards.”
