
From the Lab: Information Technology

New publications, experiments, and breakthroughs in information technology – and what they mean
September 1, 2005


Flying Robot

Thirty-gram aircraft steers itself

Results: Swiss researchers have built a robotic aircraft with an 80-centimeter wingspan that flew indoors for about four minutes, detecting walls and automatically turning away from them, thanks to two one-gram cameras, a gyroscope, and a small microcontroller onboard.

Why It Matters: Small robots that can operate inside buildings or in tight spaces like caves or tunnels may be useful for search-and-rescue, reconnaissance, and inspection applications. Researchers have previously flown larger robots outdoors, where obstacles are fewer, and have tested indoor flight only for limited maneuvers such as landing. Here, Jean-Christophe Zufferey and Dario Floreano of the Swiss Federal Institute of Technology in Lausanne have shown that a smaller aircraft can fly indoors for minutes at a time while successfully avoiding collisions.

Methods: The researchers made their aircraft out of carbon-fiber rods, balsa wood, and thin plastic film for the wings and tail. They mounted one video camera on the leading edge of each wing and connected the two cameras to a low-power microcontroller near the front of the aircraft, behind the motorized propeller. The microcontroller grabbed images from the cameras about 20 times per second and calculated how fast obstacles like walls appeared to be moving toward the aircraft. As objects got closer, the cameras saw them as moving faster. The microcontroller recognized a certain threshold speed as an indication that an obstacle was getting too close and sent signals to the rudder to turn the plane about 90 degrees.
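The paper's on-board code is not reproduced in this article, but the logic just described can be sketched in a few lines of Python. Everything named here is a hypothetical stand-in (the one-dimensional image strips, the threshold value, the set_rudder callback); only the overall idea follows the description above: estimate how fast the scene slides across the camera between frames, and command a turn when that rate crosses a threshold.

    import numpy as np

    def apparent_speed(prev_strip, curr_strip, max_shift=8):
        """Estimate how fast the scene slides across the camera by finding the
        pixel shift that best aligns two successive 1-D image strips."""
        best_shift, best_err = 0, np.inf
        for shift in range(-max_shift, max_shift + 1):
            err = np.mean((np.roll(curr_strip, shift) - prev_strip) ** 2)
            if err < best_err:
                best_shift, best_err = shift, err
        return best_shift              # signed pixels per frame (about 20 frames per second)

    TURN_THRESHOLD = 4.0               # hypothetical value, in pixels per frame

    def control_step(prev_strip, curr_strip, set_rudder):
        if abs(apparent_speed(prev_strip, curr_strip)) > TURN_THRESHOLD:
            set_rudder(90)             # start the roughly 90-degree avoidance turn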

However, the side-to-side movements of the plane’s nose – its “yaw” – also affected the speed at which obstacles appeared to be approaching, confusing the obstacle avoidance system. To counter this effect, the researchers placed a gyroscope behind the propeller to measure the plane’s yaw rotation rate. The microcontroller took this measurement into account when analyzing the camera images.
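Again only as a sketch: the correction amounts to subtracting the rotation-induced part of the image motion before applying the threshold in the snippet above. The gain relating the gyroscope reading to image shift is a hypothetical calibration constant, not a number from the paper.

    GYRO_GAIN = 0.15   # hypothetical calibration: pixels per frame of image shift per deg/s of yaw

    def translational_flow(measured_flow, yaw_rate_deg_s):
        """Remove the apparent motion caused by the plane's own yaw rotation,
        leaving the component that grows as an obstacle approaches."""
        return abs(measured_flow - GYRO_GAIN * yaw_rate_deg_s)

In the earlier sketch, translational_flow(apparent_speed(prev_strip, curr_strip), yaw_rate) would then be compared against TURN_THRESHOLD in place of the raw shift.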

The researchers tested their obstacle avoidance algorithm on their aircraft in a 256-square-meter arena. The walls of the arena were made of wide vertical strips of black and white cloth to enhance the contrast of the obstacles and make them more visible to the cameras. The researchers controlled the plane’s altitude manually with a joystick and a wireless connection.

Next Step: The researchers are working on a 12-gram, 40-centimeter-wingspan aircraft with lighter and smaller electronics so that it can fly in smaller rooms. They are also integrating an automatic altitude-control system into their plane to make it fully autonomous. And they are putting more-sensitive cameras on board, so the plane can detect obstacles that don’t have high-contrast coloration.

Source: Zufferey, J.-C., and D. Floreano. 2005. Toward 30-gram autonomous indoor aircraft: vision-based obstacle avoidance and altitude control. Proceedings of the IEEE International Conference on Robotics and Automation 2005, pp. 2605-2610.


Mini Modulator

A key device for silicon optics gets tiny

Results: In an important step toward integrating optoelectronics into silicon chips, researchers at Cornell University have fabricated a silicon modulator – a device that converts electronic signals into optical ones – roughly 12 micrometers wide, about a thousand times smaller than previous silicon electro-optical modulators.

Why It Matters: As chip makers pack more transistors on silicon, problems such as heat generation from electrical resistance and electrical interference between closely spaced wires threaten to degrade performance. Many believe that optical connections – which transmit information in the form of light pulses instead of electric current – offer a way around these limitations. Researchers have long been striving to produce optical devices that can be easily integrated into silicon (see “Intel’s Breakthrough,” July 2005). Electro-optical modulators are vital to this plan, but current silicon versions of them are too large to fit easily onto a chip. The dramatic drop in size that Michal Lipson and her colleagues demonstrated makes a chip-based modulator seem more feasible.

Methods: To build their modulator, the Cornell researchers etched a small piece of silicon to form a 12-micrometer-diameter, 250-nanometer-tall raised ring. They positioned this ring next to a straight ridge, known as a waveguide, just 450 nanometers wide. A beam of laser light traveling down the waveguide will either pass the circular section – the “ring resonator” – without interacting with it or be diverted into it, depending on the wavelength of the light. The refractive index of the silicon and the circumference of the ring determine what wavelength of light the resonator diverts. Applying a voltage from the interior of the ring to the area just outside it creates free electrons and positively charged “holes” within the ring that change its refractive index. By using a varying voltage to either shutter light or let it pass through the waveguide, the researchers encoded information onto a laser beam at a rate of 1.5 billion bits per second.
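The dependence on refractive index and circumference mentioned above is the standard ring-resonator resonance condition. Written out (a textbook relation, not an equation quoted from the paper), light is diverted into the ring when a whole number of wavelengths fits around its circumference:

    m \, \lambda_m = \pi d \, n_{\mathrm{eff}}, \qquad m = 1, 2, 3, \ldots

Here d is the ring diameter (about 12 micrometers) and n_eff is the effective refractive index seen by the guided light. Injecting electrons and holes nudges n_eff, which shifts every resonant wavelength; a laser held at a fixed wavelength is therefore switched between being diverted into the ring and passing straight down the waveguide, turning a voltage signal into a light signal.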

Next Step: The researchers believe that their device will be able to modulate signals at more than five billion bits per second, once they make some refinements, such as improving the electrical contacts that supply the input signals from the rest of the circuit.

Source: Xu, Q., et al. 2005. Micrometre-scale silicon electro-optical modulator. Nature 435:325-327.


Greater Graphics

Chip renders high-quality images in real time

Results: Researchers from Saarland University in Saarbrücken, Germany, have developed a prototype chip that can render desktop computer graphics in real time using a sophisticated technique called ray tracing. Ray tracing produces more-realistic and higher-quality graphics than other techniques, but it previously required a cluster of PCs for real-time performance. Now, the researchers, led by Philipp Slusallek, have shown that a single chip can use ray tracing to render simple scenes at 20 frames per second. (The frame rate for movies, television, and video games ranges from 24 to 30 frames per second.) The chip rendered more-complex scenes at fewer than 10 frames per second.

Why It Matters: The conventional computer-graphics rendering method, called rasterization, doesn’t handle shadows or reflections well, resulting in lower-quality images. Ray-tracing algorithms simulate the physics of light more accurately and make complex scenes look more realistic. But on a single computer, they can take several minutes or even hours to render one image. By implementing the algorithm on a chip, the researchers have provided a way for one PC to do the job in real time, making high-quality rendering cheaper and feasible for home computers.
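To see why ray tracing handles shadows so naturally, it helps to look at the core loop of a ray tracer. The Python sketch below has nothing to do with the Saarland chip’s architecture; it is a deliberately minimal illustration (one hard-coded sphere, one point light, text output) of the per-pixel primary ray and the secondary “shadow ray” for which rasterization has no direct counterpart.

    import numpy as np

    def hit_sphere(origin, direction, center, radius):
        """Return the distance along a (normalized) ray to the sphere, or None on a miss."""
        oc = origin - center
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        t = (-b - np.sqrt(disc)) / 2.0
        return t if t > 0 else None

    def render(width=80, height=40):
        center, radius = np.array([0.0, 0.0, -3.0]), 1.0
        light = np.array([2.0, 2.0, 0.0])
        rows = []
        for y in range(height):
            row = ""
            for x in range(width):
                # One primary ray per pixel, from the eye through the image plane.
                d = np.array([(x / width - 0.5) * 2.0, (0.5 - y / height) * 2.0, -1.0])
                d /= np.linalg.norm(d)
                t = hit_sphere(np.zeros(3), d, center, radius)
                if t is None:
                    row += " "
                    continue
                # Secondary "shadow ray" from the hit point toward the light:
                # the step that gives ray tracing its physically plausible shadows.
                p = t * d
                to_light = light - p
                to_light /= np.linalg.norm(to_light)
                in_shadow = hit_sphere(p + 1e-4 * to_light, to_light, center, radius) is not None
                row += "." if in_shadow else "#"
            rows.append(row)
        return "\n".join(rows)

    print(render())

Running it prints a crude sphere whose side facing away from the light comes out shaded; a real ray tracer repeats the same two ray casts, plus reflection and refraction rays, millions of times per frame, which is why dedicated hardware matters.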

Methods: The researchers designed a new architecture for their chip that is optimized for the ray-tracing algorithm. They arrived at their design by experimenting with chips called field-programmable gate arrays, which can be reconfigured into different circuit patterns. They then used their chip, running at 66 megahertz, to render 11 different scenes, some taken from computer games and some that were standard scenes used by graphics researchers. They measured such performance characteristics as how many frames the chip generated each second.

Next Step: With the chip’s design finalized, the researchers will use more-standard integrated-circuit techniques to build a new version that can accommodate more processors and render complex scenes faster than 10 frames per second – and that can be cheaply mass-produced. To adopt real-time ray tracing, computer-game programmers would need to slightly change the way they build the graphics for their games.

Source: Woop, S., J. Schmittler, and P. Slusallek. 2005. RPU: A programmable ray processing unit for realtime ray tracing. ACM Transactions on Graphics 24:434-444.
