
Gallager wasn’t even sure that he wanted to study information theory, the burgeoning new discipline born of Shannon’s 1948 paper. But before joining the Army Signal Corps, Gallager, too, had worked for several years at Bell Labs, where he spent three days a week in a classroom learning about the latest advances in electrical engineering. Though he’d never met Shannon, that experience helped him recognize the scope of Shannon’s accomplishment. “I just viewed him as sort of a god,” Gallager says.

Indeed, by the time Shannon joined the MIT faculty, he was a minor celebrity. As early as 1953, an article on information theory in Fortune magazine had declaimed, “It may be no exaggeration to say that man’s progress in peace, and security in war, depend more on fruitful applications of information theory than on physical demonstrations, either in bombs or in power plants, that Einstein’s famous equation works.”

What captured the public imagination was the idea that information in all its diversity, whether text, audio, or video, could be boiled down to mere sequences of 1s and 0s. Commercial digital devices did not yet exist, so it blew people’s minds that 001001010101000101011101 could represent part of a symphony, or part of a movie, or a color, or a line from a book. But as Shannon pointed out in his paper, his Bell Labs colleague Ralph Hartley had made a similar suggestion 20 years earlier. The aspect of the paper that captivated, and continues to captivate, Shannon’s fellow engineers was the ingenious way he proved that there must be some code capable of producing error-free data transmissions right up to a channel’s capacity.
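To see the point concretely, here is a small Python sketch that reads the very bit string quoted above in two different ways, as three ASCII characters and as one RGB pixel. The interpretations are our own illustration; nothing about the string dictates which one is “right,” which is exactly the idea:

```python
# The same 24 bits, read two ways. The bit string is the one quoted
# in the article; the two interpretations below are illustrative.
bits = "001001010101000101011101"
byte_values = [int(bits[i:i + 8], 2) for i in range(0, 24, 8)]

print("".join(chr(b) for b in byte_values))  # '%Q]' -- a scrap of text
print(tuple(byte_values))                    # (37, 81, 93) -- a dark teal pixel
```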

To understand how an error-correcting code works, consider someone who’s trying to send a four-bit message across a noisy channel. If the noise causes one of the bits to flip to its opposite, the receiver has no way of knowing that an error has occurred. Simply repeating the message, so that 0011 becomes 00110011, solves that problem: now, if one bit flips to its opposite, the receiver knows there’s an error, because the two versions of the message don’t match. But it’s impossible to tell which one is correct. A better way to encode the message might use the four extra bits to represent information about the message bits: the fifth bit, for instance, could tell you whether the first two bits of the message have the same or different values; the sixth bit could do the same for bits three and four, the seventh for bits one and three, and the eighth for bits two and four. If one of the first four bits gets flipped, the last four can identify it; if one of the last four bits gets flipped, the other three may convey enough information to make up for it.
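The scheme just described can be written out in a few lines. This is a minimal sketch that follows the description above; the bit ordering and function names are our own choices, not notation from Shannon’s paper. Each message bit trips a unique pair of parity checks, which is what lets the decoder pinpoint and repair a single flipped bit:

```python
def encode(msg):
    """Encode a 4-bit message (list of 0s and 1s) into an 8-bit codeword."""
    b1, b2, b3, b4 = msg
    return [b1, b2, b3, b4,
            b1 ^ b2,   # fifth bit: do bits 1 and 2 match?
            b3 ^ b4,   # sixth bit: do bits 3 and 4 match?
            b1 ^ b3,   # seventh bit: do bits 1 and 3 match?
            b2 ^ b4]   # eighth bit: do bits 2 and 4 match?

def decode(word):
    """Recover the 4-bit message, correcting any single flipped bit."""
    b1, b2, b3, b4, p12, p34, p13, p24 = word
    # Recompute each check; a 1 means that check now fails.
    s = (b1 ^ b2 ^ p12, b3 ^ b4 ^ p34, b1 ^ b3 ^ p13, b2 ^ b4 ^ p24)
    # Each message-bit error produces a unique pair of failing checks.
    flips = {(1, 0, 1, 0): 0, (1, 0, 0, 1): 1,
             (0, 1, 1, 0): 2, (0, 1, 0, 1): 3}
    msg = [b1, b2, b3, b4]
    if s in flips:        # a message bit flipped: repair it
        msg[flips[s]] ^= 1
    # Exactly one failing check means a parity bit flipped;
    # the message bits are intact, so nothing needs fixing.
    return msg

# Flip one bit in transit and watch the decoder repair it.
sent = encode([0, 0, 1, 1])
received = sent[:]
received[2] ^= 1          # noise flips the third bit
assert decode(received) == [0, 0, 1, 1]
```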

Shannon’s paper, however, eschews any such ruminations about how to actually construct codes. Instead, it approaches the concept of error correction by statistically analyzing the general properties of codes selected entirely at random. To get a sense of his approach, it might help to see how it could be applied to our hypothetical eight-bit sequences that encode four-bit messages.
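In that spirit, here is a toy rendering of the random-coding idea in Python: assign each of the 16 possible four-bit messages an eight-bit codeword drawn uniformly at random, and decode a noisy word to whichever codeword is nearest in Hamming distance. This is only a sketch of the idea, not Shannon’s proof, which analyzes the average error rate over all such random codes; and his guarantees emerge only as the block length grows, so an eight-bit code this short offers no guarantee at all:

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility of the sketch

# A random codebook: each 4-bit message gets a random 8-bit codeword.
codebook = {m: random.getrandbits(8) for m in range(16)}

def hamming(a, b):
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

def decode(received):
    """Return the message whose codeword is nearest to what arrived."""
    return min(codebook, key=lambda m: hamming(codebook[m], received))

msg = 5
noisy = codebook[msg] ^ (1 << 3)  # noise flips one bit of the codeword
print(decode(noisy))  # usually 5, but a short random code gives no guarantee
```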
