Giant Camera Tracks Asteroids

The camera will offer sharper, broader views of the sky.
November 24, 2008

The first of four new asteroid-tracking telescopes will come online next month in Hawaii, promising to quickly scan large swaths of the sky–thanks to the world’s largest digital camera.

Keeping watch: The prototype Pan-STARRS telescope, PS1, focused on Comet Holmes during trials in 2008. The detail is about one-half of what is expected when the telescope goes online in December.

The project, known as the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS), aims to scan the entire sky visible from the summit of Mount Haleakala in Maui Island, Hawaii, three times a month, searching for asteroids and near-Earth objects (NEOs) as small as 300 meters in diameter. At the heart of each telescope is a 1.4-billion-pixel digital camera that can photograph broad swaths of the night sky in sharp detail.

The first prototype telescope using the camera will go online in December. This telescope will scan the night sky, searching for asteroids and comets that could pose a threat to Earth. Pan-STARRS is designed to have at least three times the collecting power of current NEO telescopes.

Pan-STARRS’s cameras, each consisting of a 40-centimeter-square array of charge-coupled devices (CCDs), bring new technology to astronomical imaging. Perhaps the most innovative aspect is the ability of each CCD cell to electronically shift an image to counteract atmospheric blur and deliver clearer astrophotography, says Barry Burke, a senior staff member at MIT’s Lincoln Laboratory, which makes the cameras.

“The atmosphere is the limit to the quality of the image, but there is a special feature of these chips that allows them to remove some of the blur due to atmospheric effects,” Burke says. “It allows the image to be shifted in any direction in the chip in a way that matches the motion of the stars and that takes out a significant part of the blur.”

Known as orthogonal transfer CCD (OTCCD), the technology uses electronics to adjust the image rather than mechanically tilting a camera’s lens or mirror, the more common technique used in consumer cameras with optical image stabilization. Because the process is electronic, the correction can be applied independently in each cell of the CCD array, allowing for much more granular adjustments to localized atmospheric turbulence. The result is an image sharper than a ground-based observatory could otherwise produce without such correction.
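The tile-by-tile electronic shift described above can be sketched in a few lines. This is a hypothetical illustration, not the actual Pan-STARRS pipeline: it assumes whole-pixel shifts on a square grid of tiles, and the function names `shift_tile` and `correct_array` are invented for the example.

```python
import numpy as np

def shift_tile(tile, dy, dx):
    """Shift one tile's charge pattern by whole pixels, emulating the
    on-chip orthogonal transfer (rows and columns can both shift)."""
    return np.roll(np.roll(tile, dy, axis=0), dx, axis=1)

def correct_array(image, tile_size, offsets):
    """Apply an independent (dy, dx) correction to each tile of the
    focal plane, mimicking per-cell correction of local turbulence."""
    out = np.empty_like(image)
    n = image.shape[0] // tile_size
    for i in range(n):
        for j in range(n):
            ys = slice(i * tile_size, (i + 1) * tile_size)
            xs = slice(j * tile_size, (j + 1) * tile_size)
            out[ys, xs] = shift_tile(image[ys, xs], *offsets[i][j])
    return out
```

The key point the sketch captures is that each tile gets its own offset, so one region of the focal plane can compensate for local turbulence without disturbing its neighbors — something a single mechanically tilted mirror cannot do.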

The mosaic structure of the CCD camera also leads to a more reliable system and lower manufacturing costs, Burke says. “The chip could not possibly be made to that size, so we are forced to break the camera down into tiles,” he says.

Each Pan-STARRS camera consists of an eight-by-eight array of devices, each containing an eight-by-eight array of CCD cells. The size of each cell–about six millimeters on a side–is determined by a sweet spot: if the chips were much larger, the number of defects on them–and thus the overall cost of making them–would be too great; if they were much smaller, it would become much more difficult to organize them into the camera’s focal plane.

Many eyes: Each component of the orthogonal transfer CCD array consists of a five-centimeter device made up of 64 CCD chips. The eight-by-eight array contains only 60 devices because the corner elements would be too far from the center of the focal plane to collect useful data.
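The figures quoted above imply the camera’s rough pixel geometry. The back-of-envelope check below uses only numbers from the article; the per-cell pixel count and pixel pitch are derived, not stated in the original, and assume square cells.

```python
# Back-of-envelope check of the camera geometry quoted in the article.
total_pixels = 1.4e9          # "1.4-billion-pixel" camera
devices = 64 - 4              # eight-by-eight array minus the four corners
cells_per_device = 8 * 8      # each device is an 8x8 grid of CCD cells
cell_side_mm = 6.0            # "about six millimeters on a side"

pixels_per_cell = total_pixels / (devices * cells_per_device)
side_pixels = pixels_per_cell ** 0.5          # assuming square cells
pitch_um = cell_side_mm * 1000 / side_pixels  # implied pixel pitch

print(round(side_pixels))   # ≈ 604 pixels per cell side
print(round(pitch_um, 1))   # ≈ 9.9 micrometers per pixel
```

So the quoted numbers hang together: roughly 600-pixel-square cells at a pixel pitch of about 10 micrometers, which is in the typical range for scientific CCDs.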

Such a design will likely be the way of the future for very large focal-plane cameras, says Donald Figer, an astronomer and the director of the Rochester Imaging Detector Laboratory (RIDL), in New York.

Tiling the camera’s focal plane into numerous CCDs and using the orthogonal transfer technology allows it to avoid a problem that often affects larger CCD chips, Figer says. This issue, called blooming, arises from the sharp contrasts in intensity across a field of stars: a very bright star can deposit more electrical charge in its pixels than they can hold. Because CCDs read out their data along the rows and columns of the semiconductor circuits, the excess charge spills into other pixels in the same row and column, streaking the image. But by using many chips, the effect can be confined to a single tile, and by moving the image using orthogonal transfer, the peak intensity can be corrected.
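The benefit of tiling can be shown with a crude toy model. This is not real CCD physics: the full-well value and the rule that excess charge smears evenly along a column are invented for illustration, and the function names are hypothetical. The point is only that a mosaic confines the damage to the tile containing the bright star.

```python
import numpy as np

FULL_WELL = 1000.0  # hypothetical per-pixel charge capacity

def bloom_column(chip):
    """Toy model of blooming: charge above the full-well limit
    spills along the rest of that pixel's column within the chip."""
    out = chip.copy()
    for col in range(out.shape[1]):
        excess = np.clip(out[:, col] - FULL_WELL, 0, None).sum()
        if excess > 0:
            np.clip(out[:, col], None, FULL_WELL, out=out[:, col])
            out[:, col] += excess / out.shape[0]  # smear along column
    return out

def bloom_mosaic(image, tile_size):
    """Same toy model, but applied per tile: charge cannot spill
    across chip boundaries, so blooming stays local."""
    out = np.empty_like(image)
    n = image.shape[0] // tile_size
    for i in range(n):
        for j in range(n):
            ys = slice(i * tile_size, (i + 1) * tile_size)
            xs = slice(j * tile_size, (j + 1) * tile_size)
            out[ys, xs] = bloom_column(image[ys, xs])
    return out
```

Running a single saturated “star” through both versions shows the streak crossing the whole chip in the monolithic case but stopping at the tile boundary in the mosaic case.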

“The orthogonal transfer capability allows it to shuffle charge along the segments,” Figer says. “It allows you to effectively get a clearer image. Other cameras do something like that, but they do it by deforming the mirror.”

Pan-STARRS’s approach is different from that used in large telescopes in other observatories, such as the Keck Observatory’s two 10-meter telescopes on Mauna Kea, in Hawaii. Large telescopes typically use adaptive optics to correct for atmospheric turbulence by taking advantage of a bright object, known as a natural guide star, near the target. By adjusting the telescope’s image to correct for aberrations detected in the guide-star image, a much clearer picture–corrected for atmospheric turbulence–results. However, in 99 percent of viewing cases, a natural guide star is not available, so Keck 1 and Keck 2 use a laser guide star, which is created by sending a sodium-wavelength laser beam into the upper atmosphere to excite a thin layer of sodium atoms there. This creates a reference point near the target of observation, similar to a natural guide star.
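The guide-star principle — measure how a reference point has moved, then undo that motion — can be reduced to a toy sketch. Real adaptive optics deforms a mirror continuously rather than shifting pixels, so this hypothetical `guide_star_correction` is only a whole-frame, integer-pixel stand-in for the idea; all names and the centroid method are assumptions for the example.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid of a small guide-star cutout."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

def guide_star_correction(frame, cutout, reference):
    """Shift the whole frame by the offset between the guide star's
    measured centroid and its reference position (a crude stand-in
    for the per-wavefront correction real adaptive optics performs)."""
    cy, cx = centroid(frame[cutout])
    ry, rx = reference
    dy, dx = round(ry - cy), round(rx - cx)
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
```

The laser guide star described above exists precisely so that this kind of measurement is always possible: it manufactures a bright reference point in the sodium layer when no natural star is close enough to the target.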

A ground-based telescope equipped with adaptive optics can produce images with a resolution comparable to that of the Hubble telescope. The approach is too expensive for smaller telescopes, however, such as the 1.8-meter Pan-STARRS scopes. The image correction performed by the OTCCDs, by contrast, yields a picture of similar, if not quite as good, quality at far lower cost.
