
Building a Better Backbone

Surging Internet growth has put pressure on telecom networks to keep up. The industry's most advanced R&D is going toward expanding the capacity of the long-haul cables that cross continents.

On a giant screen at the Corning Museum of Glass in upstate New York, video images flash by: news footage of a war, an inauguration, a space shot, a game show, along with real-time projections of museumgoers staring up in wonder. The source of all these images? A strand of glass, thinner than a human hair, yet wide enough to carry more information than three million copper wires, the technology it replaced. Corning is justified in showing off its invention: optical-fiber technology ranks as one of the technological miracles of the 20th century.

Too bad we’re in constant need of new miracles to keep up with the voracious network demands that this century is placing on these thin glass fibers. Fiber optics is, after all, a pre-Web technology, and much of the fiber that carries today’s e-mail messages, music downloads and video streams, in addition to telephone conversations, was installed before most people were even aware of those media. What used to seem like a shameless waste of capacity now seems woefully inadequate. Our appetite for bandwidth is growing at an exponential rate, with no sign of slowing. Tracey Vanik, technical director at telecommunications consulting firm RHK, compares the Internet to Star Trek’s voracious Borg: “Whatever bandwidth is made available, the Internet will swallow.”

Optical fiber made by Corning, Lucent Technologies and other giant telecom suppliers is found throughout the telecommunications system, connecting us when we browse our favorite Web sites or place calls to Tokyo. But much of the cutting-edge research being done today on fiber optics goes into improving the capacity of the system’s “backbone”: the fattest of the fat data pipes, which whip data across continents and connect urban centers.

“Backbone” is a convenient metaphor, but it gives too neat a picture. A vertebrate organism has a single backbone; the telecom system doesn’t. No single company owns these high-capacity interurban cables, and no one organization makes sure they are up to the challenge of meeting worldwide bandwidth demands. In some cases telecommunications companies (the WorldComs and Sprints and AT&Ts of the world) will seek to cover high-traffic routes with their own cables, laying spaghetti-like strands parallel to one another along highway and railroad rights-of-way, linking metropolitan loops across continents and oceans. In other cases, carriers lease optical-fiber cables from other carriers; indeed, some carriers are solely in the business of leasing backbone capacity.

All carriers, though, are faced with the same challenge: how to stay ahead of the bandwidth demand curve. Research at Corning and elsewhere shows that every improvement in performance comes at a price; building a better backbone seems to be a question of choosing just the right trade-offs.

Beefing Up Optics

The simplest way to stiffen the backbone is to lay more cable. But that’s also the most expensive alternative: as much as 40 percent of the cost of a fiber-optic system goes toward purchasing rights-of-way, getting permits and putting cable in the ground. (It’s an old joke among telcos that they’d gladly give up new technologies if someone would just show them how to dig a cheaper ditch.)

Two other ways to increase capacity avoid digging up the streets, relying instead on state-of-the-art equipment installed in the telephone offices where the fiber-optic strands terminate. Engineers can develop methods to increase the number of channels of information each fiber-optic strand can carry. Or they can develop ways to make the data travel faster along each channel.

Both approaches avoid the enormous cost of installing new lines. But each strategy is tricky, since making improvements in one area often causes problems in another. “There’s a strong trade-off between distance and capacity,” says Roe Hemenway, manager of network equipment research at Corning. “The further you go, the lower the capacity. We’re being asked to put more capacity on the fiber, go longer distances, and do it with even higher quality.”

Hemenway works in the laboratory at Corning’s Sullivan Park Research and Development Facility in upstate New York, where shelves hold rows of metal boxes, each one a laser that generates an infrared beam. The beams run through modulators and multiplexers, amplifiers and filters, traveling the same loop of fiber-optic cable over and over again to simulate distance, much like a digital race car on the information-superhighway version of a test track. At the end of the system a computer screen displays the number of errors produced during the run, and an oscilloscope shows graphically whether the signal came out sharp or blurry.

The setup allows Corning engineers to test how each component affects signal transmission, and what a change in one does to the system as a whole. This approach is critical to fiber-optic design, because whatever solution evolves to make fiber optics more efficient is likely to include a number of technologies, each of which might affect the others.

In the last six years, lab transmission speeds for the fastest fiber-optic systems have quadrupled, and another fourfold increase is expected this year. The most pressing question is whether, given all the trade-offs, the current rate of improvement can be maintained. “I could give you a macho answer that we’re going to continue to improve fiber, but quite frankly, I don’t know,” says Joseph Antos, technology director for fiber development at Corning. “Every new invention [to increase capacity] gets harder and harder.”

More Channels per Fiber

Data travels along optical fiber as a series of light pulses from a laser, the ons and offs corresponding to the ones and zeroes of digital coding. Fiber-optic systems use the part of the light spectrum that travels most efficiently through the glass: wavelengths between about 1,300 and 1,600 nanometers. Outside of these wavelengths, light tends to be either absorbed and lost or stretched too far to make a usable signal. And of the available spectrum, most transmission takes place in what’s called the “central band,” between 1,530 and 1,565 nanometers.

By breaking the signal into different wavelengths, as a prism separates the colors that make up white light, engineers can send more than one stream of light along a fiber at the same time. Early implementations divided the light into four or eight separate channels, with each fiber carrying about 10 gigabits (10 billion bits) per second. Today some systems can carry 80 channels in the central band and push more than a half-trillion bits per second down a single fiber.

But there’s a limit to how many channels can be squeezed into the central band. Like closely spaced stations on your car radio, channels that get too close cause interference. On the radio, you might be listening to All Things Considered and suddenly get the Backstreet Boys, or static. The same thing happens with optical signals. To reduce interference, current state-of-the-art systems require a buffer zone of about 50 gigahertz (a gigahertz is a billion cycles per second) between channels.
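The numbers hang together if you convert the band edges from wavelength to frequency and count the 50-gigahertz slots. A minimal sketch in Python (the band edges, channel spacing and per-channel rate come from this article; the slot count is an upper bound, since real systems leave extra margin):

```python
C = 299_792_458  # speed of light, m/s

def thz(wavelength_nm):
    """Frequency in terahertz for a wavelength given in nanometers."""
    return C / wavelength_nm / 1e3  # C / nm yields GHz; divide by 1e3 for THz

# Central band: 1,530 to 1,565 nanometers
band_ghz = (thz(1530) - thz(1565)) * 1e3   # ~4,400 GHz of usable spectrum
slots = int(band_ghz // 50)                # one 50 GHz buffer zone per channel
print(f"{band_ghz:.0f} GHz -> room for about {slots} channels")  # ~87

# 80 channels at 10 gigabits per second each:
print(f"aggregate: {80 * 10} Gb/s per fiber")  # 800 Gb/s, over half a trillion bits
```

About 87 slots fit, which is why the 80-channel systems described above are already pressing against the edges of the central band.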

As a result of these constraints, the central band is now essentially full, and engineers are looking to add channels by moving out of the central portion of the spectrum and into new territory.

Breaking New Ground

To make parts of the spectrum outside the central band usable, researchers must develop new versions of the devices that help push signals along optical fibers. Take the amplifiers that boost signals, which lose energy as they bounce back and forth between the walls of the fiber’s core. To pump them back up, engineers use devices known as erbium-doped-fiber amplifiers: essentially loops of fiber laced with the rare earth element erbium. A laser excites the erbium atoms, which transfer their energy to the optical signal passing through the amplifier, increasing the distance it can travel. Without amplification, high-speed signals wouldn’t travel far enough to be useful.

Recent developments make it possible for these amplifiers to work in the longer-wavelength region of 1,570 to 1,625 nanometers, adding a new chunk of spectrum from which to carve additional data channels. Lucent Technologies, for example, has released a system that squeezes 80 channels into the central band and exploits erbium amplifiers to add another 80 channels in the long-wavelength region, doubling the capacity of each fiber.
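In raw spectrum, the long-wavelength band is actually bigger than the central one. The same conversion as above gives a rough estimate (band edges from the article; the raw slot count far exceeds what deployed systems such as Lucent’s actually use):

```python
C = 299_792_458  # speed of light, m/s
thz = lambda nm: C / nm / 1e3  # wavelength in nm -> frequency in THz

# Long-wavelength band: 1,570 to 1,625 nanometers
span_ghz = (thz(1570) - thz(1625)) * 1e3     # ~6,460 GHz of new spectrum
print(f"{span_ghz:.0f} GHz -> {int(span_ghz // 50)} slots at 50 GHz spacing")
# ~129 potential slots; the system described here uses 80 of them
```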

Every time a signal runs through an erbium amplifier, however, it picks up noise: elements that were not part of the original signal. Over long-distance backbones, where a signal needs to be boosted many times, fiber-optic systems must be strung with regenerators, devices that reconstruct signals that have traveled through so many amplifiers that they have degraded. Regenerators take a light signal, convert it to an electrical signal, and then produce a new light beam.
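How that noise piles up can be sketched with the standard engineering estimate for optical signal-to-noise ratio, which falls by roughly 3 decibels every time the number of amplified spans doubles. This is a back-of-the-envelope model rather than anything from the article, and the launch power, span loss and noise figure below are assumed illustrative values:

```python
import math

def osnr_db(p_launch_dbm, span_loss_db, noise_figure_db, n_spans):
    """Approximate optical SNR (dB, 0.1 nm reference bandwidth) after a
    chain of identical erbium-amplified spans. The 58 dB constant folds in
    the photon energy and reference bandwidth at ~1,550 nm."""
    return (58 + p_launch_dbm - span_loss_db - noise_figure_db
            - 10 * math.log10(n_spans))

# Assumed link: 0 dBm per channel, 20 dB of loss per span, 5 dB noise figure
for spans in (1, 5, 10, 20, 40):
    print(f"{spans:>2} spans: OSNR ~ {osnr_db(0, 20, 5, spans):.1f} dB")
# Once the OSNR drops below what the receiver can tolerate, the signal
# must be regenerated: converted to electronics and launched afresh.
```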

A new technique called Raman amplification (see “Five Patents to Watch: Booster Shots,” TR May 2001) will allow a signal to be amplified without introducing noise, doing away with the need for regenerators and potentially creating a new way for engineers to increase capacity. Unlike erbium amplifiers, which only work at certain wavelengths, Raman amplification holds the promise of making even more new channels available. A new company, Xtera, of Allen, TX, is hoping to take advantage of Raman amplification to enable long-range transmission of shorter wavelengths of light than current optical networks can support. “It’s kind of a new twist on using Raman techniques,” says Joe Oravetz, Xtera’s product manager, who unveiled the company’s first product at the Optical Fiber Communication Conference and Exhibit in March in Anaheim, CA.

But using the shorter-wavelength band is a decidedly long-term strategy, since it will require installation of new equipment at every point in the network. “Going into a new band, you have to replace all the components,” says Vladimir Kozlov, an analyst at RHK. “You need new sources. You need new amplifiers. It could be very expensive.”

Speeding Up Bits

An alternative to adding channels is to make the data stream in each channel flow faster. Just as the modems in people’s homes have gotten faster, transmitters in the backbone have increased their ability to pump data, from 100 million bits per second a decade ago to a state-of-the-art 10 billion bits (10 gigabits) per second today.

AT&T issued a press release in January announcing the first 10-gigabit-per-second coast-to-coast Internet protocol backbone, but it’s already old news: 40-gigabit-per-second systems have been announced by Lucent Technologies, Fujitsu and NEC for sale later this year. The engineering feats involved in advances like these are tremendous: increasing the data rate required engineers to design lasers that can reliably flash on and off 40 billion times per second, and receivers that can pick out one flash from the next when they’re coming at that overwhelming rate.

But the name of the game in the backbone remains trade-offs, and speeding up transmission rates causes new complications: putting more bits per second into a fiber requires more power, and at higher powers, the interference between channels increases. Also, at these remarkable rates, tiny flaws in the glass itself start to interfere with the flow of data.

Engineers going for speed must compensate for such effects by increasing the buffer zone of unused spectrum between channels: a 40-gigabit-per-second line speed, for example, may require buffers of 100 gigahertz between channels instead of 50 gigahertz. The math is still favorable: the fibers will deliver half the channels at four times the speed, doubling capacity.
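The trade-off arithmetic, using the figures above (the band width as estimated earlier; the spacings and line rates from the article):

```python
BAND_GHZ = 4_400  # usable central-band spectrum, roughly 4.4 THz

# 10 Gb/s channels on a 50 GHz grid vs. 40 Gb/s channels on a 100 GHz grid
for rate_gbps, spacing_ghz in ((10, 50), (40, 100)):
    channels = BAND_GHZ // spacing_ghz
    total_tbps = rate_gbps * channels / 1000
    print(f"{rate_gbps} Gb/s x {channels} channels = {total_tbps:.2f} Tb/s")
# 10 Gb/s:  88 channels -> 0.88 Tb/s
# 40 Gb/s:  44 channels -> 1.76 Tb/s (half the channels, four times the speed)
```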

The stakes involved in improving transmission rates in the backbone, however, are so great that for every obstacle, there are teams of engineers working to overcome it. Scientists at NEC America’s Public Networks Group are working on a way to squeeze channels together, even at high speeds, by taking advantage of the fact that light is polarized. Imagine moving a jump rope rapidly up and down to make waves, which move up toward the ceiling and down toward the floor. Such waves would be “vertically polarized.” Now start moving the jump rope from side to side, so the waves move toward the walls. Your jump rope has become horizontally polarized. The NEC approach divides a light beam into 160 channels, each 50 gigahertz apart, and gives neighboring channels different polarizations; two channels with the same polarization are thus still 100 gigahertz apart. Adjacent channels with the same polarization are likely to interfere with one another, but channels with different polarizations are not. Such an approach would boost total capacity per fiber to 6.4 trillion bits (6.4 terabits) per second and is projected to be available in two to three years.
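A sketch of that channel plan, using only the numbers given here (160 channels, 50 gigahertz apart, alternating polarization, 40 gigabits per second each):

```python
N_CHANNELS = 160
SPACING_GHZ = 50
RATE_GBPS = 40

# Alternate polarization by channel index: even -> vertical, odd -> horizontal
plan = [("V" if i % 2 == 0 else "H", i * SPACING_GHZ) for i in range(N_CHANNELS)]

# Immediate neighbors differ in polarization, so same-polarization
# channels end up a full 100 GHz apart
print(plan[:4])  # [('V', 0), ('H', 50), ('V', 100), ('H', 150)]
print(f"total: {N_CHANNELS * RATE_GBPS / 1000} Tb/s")  # 6.4 Tb/s per fiber
```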

And improvements continue in labs worldwide. In March, researchers from the French company Alcatel, which develops fiber and components for both land-based and undersea optical systems, announced they’d developed a system reaching 10.2 terabits per second. Also in March, researchers at NEC announced an experiment in which they tweaked amplifiers to get access to a wider wavelength band, increasing transmission rates to 10.9 terabits per second.

Or Dig a Trench

All of these technological developments, of course, face this challenge: how to continue to improve performance over lines that were typically designed, manufactured and installed many years earlier. The first fiber-optic lines in a public network were installed under downtown Chicago in 1977. Today, most of the world’s long-distance traffic is carried by optical-fiber cables: more than 370 million kilometers of the stuff, all of it designed before today’s breakthroughs in the labs. Eventually there will be no avoiding the need to dig a new trench.

Once the decision is made to lay new fiber, though, new possibilities to increase its capacity emerge. The fiber strands themselves have evolved to handle ever larger capacities. Today, the state of the art is “nonzero-dispersion fiber,” invented by Lucent Technologies and sold by both Lucent and Corning. This version of fiber widens the area through which a signal travels, giving it more room to spread and reducing overlap. “If you have a water pipe and you want to put more water down it, one of the ways to do that is to widen the area of the pipe, and that’s essentially what [this technology] does,” says Corning’s Antos.

Next-generation optics technology may get rid of the glass altogether. Several research groups are working on building a fiber out of new materials known as “photonic-band-gap crystals” (see “The Next Generation of Optical Fibers,” TR May 2001). Such crystals have an atomic structure that makes it physically impossible for light to pass through or be absorbed, so light striking the inside of a fiber would bounce back into the core. Doug Allen, a research associate at Corning working on developing such a material, suggests that the core could be filled with air, or perhaps an inert gas. By eliminating glass and its distorting effects, he says, “you can send more wavelengths without worrying about them interfering with one another.”

All these new developments have thrust research in the lab far beyond what’s currently available in the ground. If the backbone were equipped with just the developments being demonstrated in labs right now (160 channels over each strand, at 40 gigabits per second), the bandwidth we currently use in a month could be carried over our networks in less than a second. That’s when far-flung ideas start to get real, from holographic, 3-D videoconferences that mimic real life, to long-distance surgery, to instantaneous access to books stored at any library in the world.

What remains to be solved, though, is the economics of such a network: when will it be cost-effective to put these developments in place? In something as vast as the public communications network, even small upgrades take decades to be universally deployed. Theodore Vail, first president of AT&T, succeeded in building the world’s first state-of-the-art public network only by getting Congress to declare his company a natural monopoly. That’s not going to happen this time.

Raj Reddy, professor of computer science at Carnegie Mellon University and director of the High Speed Connectivity Consortium, a program funded by the U.S. Department of Defense, nevertheless remains optimistic that a very high-bandwidth network is inevitable-that one day we’ll have always-on, all-you-can-eat bandwidth, as easily accessible as the phone system is today. “It’s definitely going to happen in 30 years,” he says. “[But] what do we have to do, and what do we have to spend, to do it in five?”

And that, in spite of the legions of fiber-optics engineers dedicated to discovering the technological miracles that will power our next-generation networks, is the question waiting to be answered. But given the remarkable spectrum of cutting-edge work being done on the backbone, it is undoubtedly there that capacity will continue to increase at the most rapid rate.
