At Bell Telephone Laboratories on December 16, 1947, physicists John Bardeen and Walter Brattain attached three flimsy metal contacts to a thin sliver of the element germanium, applied an electric signal, and discovered that the signal emerging from their device was nearly a hundred times stronger than the one that went in. Unveiled a week later to Bell Labs executives, the new solid-state amplifier, soon dubbed a “transistor,” was “a magnificent Christmas present,” in the words of research group leader William Shockley, who only a month later conceived an improved version that eventually proved far easier to manufacture.
Fifty years later, transistors have shrunk so dramatically that they are now invisible to the unaided eye. Yet as the crucial ingredients in every microchip, acting as microscopic pumps and valves that regulate the flow of electric current, these minuscule devices continue to have a tremendous impact on almost every aspect of modern life.
It was obvious at the time that Bardeen and Brattain’s unwieldy contraption represented a breakthrough in electronics. But its inventors thought of it mainly as a replacement for vacuum tubes, which were used as amplifiers and switches in telephone equipment, radios, and most other electronic devices. Shockley had perhaps the best intuition of what was to come. “There has recently been a great deal of thought spent on electronic brains or computing machines,” he speculated in December 1949. “It seems to me that in these robot brains the transistor is the ideal nerve cell.”
The physical process Bardeen, Brattain, and Shockley discovered now lies at the throbbing heart of an electronics industry that generates worldwide sales of more than $1 trillion annually. The transistor’s greatest value is that it can be so drastically miniaturized: its fundamental operating principles have remained essentially unaltered as its linear dimensions have shrunk more than 10,000-fold. By contrast, vacuum tubes had no prospect of the kind of astonishing miniaturization that has occurred in solid-state devices. And the tubes’ other problems proved completely insurmountable: they were balky, burned out too frequently, generated too much heat, and consumed too much power.
The first transistors were typically a centimeter long; by the late 1950s, they were measured in millimeters. With the invention of the integrated circuit in 1958, the stage was set for a steady parade of further innovations that reduced the size of transistors to submicron levels, less than a millionth of a meter. Today the transistor is little more than an abstract physical principle imprinted innumerable times on narrow slivers of silicon: millions of microscopic ripples on a shimmering crystal sea. As Intel’s cofounder Gordon Moore recently noted, there are more transistors made every year than raindrops falling on California, and producing one costs less than printing a single character in a newspaper.
“The synergy between a new component and a new application generated an explosive growth of both,” observed Moore’s longtime partner Robert Noyce, reflecting on how the transistor and computer grew up together. He made this comment in 1977, a few years before the personal computer began to stimulate yet another commercial explosion based on semiconductors. More than any other factor, the fantastic shrinkage of the transistor in both size and cost is what has allowed the average person to own and operate a computer that is far more powerful than anything the armed services or major corporations could afford a few decades ago. If we had instead had to rely on vacuum tubes, for example, the computing power of a Pentium chip would require a machine as big as the Pentagon.
And just this past year-which also happens to be the centennial of the electron’s discovery-there have been successful attempts to build transistors so small that they involve the transmission of only one electron through a channel less than 10 nanometers long. If this technology can ever be transferred to the production line, another hundredfold reduction in the size of transistors may be in the offing.
The saga of the invention of the transistor at Bell Labs is a fairly well-known tale that is often retold when questions arise about the importance of basic research in the innovation process. Much less familiar is the story of technology development that ensued. It was this rare combination of basic research and fundamental technology development that made modern transistors and microchips possible. Few, if any, episodes in the history of innovation can compare.
The Labs combined a pragmatic, goal-oriented research philosophy with what Shockley called “respect for the scientific aspects of practical problems.” Research was guided by the long-range goal of improving the components and services of the Bell System-better switches, clearer signals, etc. But within that context, scientists had ample freedom to do basic research on the properties of materials. Leading theoretical physicists worked shoulder to shoulder with first-rate experimenters and some of the best device-development engineers in the country. The invention and development of the transistor illustrates this interplay between the practical and the scientific that characterized Bell Labs in its heyday.
When Shockley’s original ideas for making a solid-state amplifier failed, for example, Bardeen proposed an entirely different theory of semiconductor behavior that he eventually published in the Physical Review. Shockley’s “field effect” approach involved the use of external electric fields to induce an excess of electrons near the surface of crystalline materials such as silicon; with more electrons congregating there, more current should flow. Or so he thought. To account for the apparent lack of any such effect, Bardeen proposed his theory of “surface states,” in which electrons become trapped on the surface and block electric fields from penetrating. This was a brand new starting point that reoriented the group’s research efforts toward understanding these troublesome states. “We abandoned the attempt to make an amplifying device,” recalled Shockley, “and concentrated on new experiments related to Bardeen’s surface states.”
When Brattain stumbled upon a crude way to overcome this blockage in November 1947, however, the group’s attention returned almost immediately to the practical goal of making a solid-state amplifier. A month later they invented the first transistor, the point-contact transistor, in which two strips of gold foil glued to the sides of a plastic wedge pressed the foil edges into a germanium slab. Although this weird gizmo stretched nearly an inch, the novel physical process responsible for power gain occurred in a mere 2 mils (about 50 microns, the thickness of a sheet of paper) of germanium between the metal points touching its surface. Positively charged quantum-mechanical entities known as “holes” generated beneath one point trickled along a surface layer to the other point, reducing the resistance of the material beneath it and thereby enhancing the current flowing through it.
Under the enlightened management of Mervin Kelly and Jack Morton, Bell Labs soon began to pour resources into developing technologies to make transistors commercially viable. It perfected methods of purifying germanium and silicon and of growing large crystals of these elements. Within a few years, these technologies permitted Shockley and his colleagues to realize his idea of a junction transistor, which proved far more reliable than Bardeen and Brattain’s odd device and lent itself much more readily to mass production. In this kind of transistor, so-called p-n junctions replace the metal-to-semiconductor point contacts; these junctions are formed between two dissimilar layers of semiconductor material impregnated with different impurities to induce a slight excess of electrons or holes. This approach proved crucial in manufacturing the cheap, reliable transistors that began appearing in electronic devices such as radios and hearing aids during the 1950s.
What’s more, the Labs made these and other technologies readily available to firms eager to get into the semiconductor business. Combining them with a few additional innovations of their own, Noyce and Jack Kilby independently invented the integrated circuit, at Fairchild Semiconductor and Texas Instruments respectively, toward the end of the decade. Better known today as microchips, which now incorporate millions of transistors on a single sliver of silicon, these circuits form the basis of today’s $150 billion semiconductor industry. As Morton observed, “Sometimes when you spread your bread on water, it comes back as angel’s food cake.”
Fifty years of materials science and engineering have collapsed the dimensions needed for the transistor effect to the submicron level. Germanium has been replaced by silicon, which behaves far better at high temperatures. Mass production of integrated circuits containing many transistors and other solid-state components became possible through a cluster of techniques: diffusing micron-deep layers of impurity atoms into silicon, forming a glassy, protective oxide layer upon it, etching delicate features into the silicon surface by photolithography, and vapor-depositing metal contacts on top of this glassy layer.
Once Bell Labs finally brought Bardeen’s surface states under control in 1960, by forming the oxide layer in a carefully controlled environment, Shockley’s original field-effect approach returned to the fore in the form of the metal-oxide-semiconductor (MOS) transistors that dominate the industry today. Here an electric field is applied through the insulating oxide layer by charging a tiny strip of metal deposited on its surface; this field governs the current flowing in the silicon just beneath. Small changes in the electric charge on the strip can have a huge impact on this current, sometimes even blocking it entirely.
In 1965 Moore observed that the number of individual components on integrated circuits was doubling every year. He extrapolated this exponential growth for another decade and came up with an astounding projection: that the circuits of 1975 would contain some 65,000 devices. Now enshrined as Moore’s Law, his prediction has continued to hold true for over three decades, though the doubling period has grown to about 18 months. The most advanced chips today contain millions of transistors, each with typical dimensions of less than half a micron. And photolithography techniques based on ultraviolet light promise a further size reduction to nearly a tenth of a micron, or 100 nanometers. Chips with billions of solid-state components may soon become a reality.
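Moore’s projection is ordinary compounding. A minimal sketch in Python, assuming (as Moore’s 1965 graph roughly indicated) a starting point of about 64 components per chip:

```python
def components(start_count, years, doubling_months=12):
    """Components per chip after `years`, if the count doubles every `doubling_months`."""
    return start_count * 2 ** (years * 12 / doubling_months)

# Doubling every year from ~64 components in 1965 gives Moore's 1975 figure:
print(round(components(64, 10)))  # 65536 -- the "some 65,000 devices"

# At the slower pace cited above (doubling every ~18 months),
# the same decade yields far fewer components:
print(round(components(64, 10, doubling_months=18)))
```

The starting count of 64 is an illustrative assumption; the essential point is that ten annual doublings multiply any starting figure by 2^10 = 1,024.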
The crucial lesson to learn from the transistor episode is that basic research within the confines of a profit-motivated company led to a completely new and phenomenally valuable starting point for electronics. A close interplay between the practical and the scientific led to the discovery and rapid development of the physical process of transistor action, which could be so drastically miniaturized.
But postwar Bell Labs was a unique institution that would be very difficult, if not impossible, to replicate today. What Kelly described as an “institute of creative technology,” it concentrated the intellectual energies of half a dozen eventual Nobel laureates under the roof of a single industrial laboratory in New Jersey. However, its parent firm, AT&T, was in a very special situation: it held a monopoly on telephone service throughout the United States. Therefore, every time anyone placed a long-distance phone call, she was in effect paying a basic research and technology development tax to support the ongoing projects at the Labs. In return, many of the scientists and engineers working there considered themselves part of a “national resource” that had a responsibility to serve the national interest.
In today’s highly competitive business climate, most companies cannot afford research and development expenses that are unlikely to improve their profitability for years. Driven by profit pressures and 18-month product cycles, few corporations can afford to put together the multidisciplinary teams and allow them the broad research latitude that Bell Labs did with its solid-state group in the postwar years. And making their new technologies so freely available is absolutely unthinkable.
The federal government tries to help bridge the gap between science and industry by promoting technology transfer and advanced technology programs. But these are difficult propositions, fraught with severe problems and political disagreements. In today’s fragmented R&D environment, physicists at research universities and national laboratories continue to pursue imagined superstrings and leptoquarks that have no conceivable practical applications; meanwhile engineers at semiconductor firms focus on developing ways to etch ever finer features on silicon.
Partially because of this unfortunate dichotomy, innovations have difficulty reaching production. Recent breakthroughs such as fullerene nanostructures and high-temperature superconductors remain laboratory curiosities; compared with the transistor, which began to appear in hearing aids scarcely five years after it was invented, these innovations are limping toward commercialization. A possible solution may lie within industry consortia, such as Austin’s Sematech, that are aimed principally at developing the deep pools of new technology their participating firms need to improve product lines. Basic research groups might be incorporated within such well-funded consortia. That way they would operate in a pragmatic environment that could also promote the fundamental development usually needed to turn scientific discoveries into useful products.
Another hopeful trend is that major companies such as Microsoft that have a comfortable share of, or a virtual monopoly on, their specific market are once again beginning to see the wisdom of investing in research. This is what occurred at Xerox’s Palo Alto Research Center during the 1970s, leading to the development of such extremely useful information technologies as Ethernet, the mouse, and the graphical user interface. Under the leadership of Bill Gates and Nathan Myhrvold, Microsoft has recently taken a similar turn, devoting hundreds of millions of dollars to basic research and development projects in computer science. But I wonder just how much the firm will share its findings with other companies.
Whatever the case, it is important to recognize the true partnership that must exist between science and technology. “It’s not science becomes technology becomes products,” claims Moore in attacking the Bell Labs “linear model” of industrial development. “It’s technology that gets the science to come along behind it.” But the “science” he refers to is the narrowly applied science done in most of industry today, from which few, if any, radically new innovations and points of departure will ever emerge. Science and technology are like the two intertwined polynucleotide strands of a DNA molecule. Each influences the other in a complicated, symbiotic relationship that would be greatly diminished if either one became the other’s handmaiden.
My central point is that we need to overcome the fragmented nature of today’s R&D enterprise. What characterized postwar Bell Labs, and led to the invention and development of the transistor, was that the full array of talents necessary for revolutionary innovation could be found under a single roof, working closely together as a well-oiled unit. Its enlightened management understood how such multidisciplinary teams had developed radar and the atomic bomb during World War II. I hope that we will not need another such cataclysm to remind us once again of the value of cooperative research and development.