The 64-Bit Question
When computers improved from 16 bits to 32, they became vastly more powerful and useful. But the advance to 64 bits may prove more beneficial to computer marketers than to users.
Is a 64-bit computer in your future?
With all of the hype surrounding 64-bit processors, you probably assume that my answer would be an unequivocal “yes, and pretty darn soon, too!” But put aside the marketing bluster about chips like AMD’s Athlon 64; in fact, having 64 bits matters a whole lot less than the computer industry would have you think. Indeed, unless you happen to be a Macintosh user, you might not find yourself buying a 64-bit computer for another decade, if you ever buy one at all.
First, a little background. The processors in the vast majority of today’s desktop and laptop computers are 32-bit chips. Most of them are based on Intel’s incredibly successful IA32 architecture, also known as the x86 (as in 286, 386, 486). Intel’s Celeron and Pentium processors are all IA32 designs, as are AMD’s Athlon chips.
But all of a sudden, 64-bit machines have a kind of cachet. For two years, AMD has been selling processors that can run both 32-bit and 64-bit code at the same time; computers built with these chips can run either Linux or the special 64-bit version of Windows XP that Microsoft released earlier this year. Apple, meanwhile, ships all of its Power Mac computers with the G5 microprocessor, a 64-bit brain created by IBM. And in a way, all of these desktop systems are playing catch-up: Nintendo made the 64-bit transition in 1996 when it shipped its Nintendo 64 gaming console.
To understand why all this matters, you first need to understand that the phrase “32 bits” is a kind of shorthand that computer designers use. The number refers to two things inside the computer’s architecture. First, it signifies how many bits these computers use when they specify the location in memory where a piece of information is stored. Second, it indicates the size of the registers inside the microprocessor that are used to do math. Each bit can be a 1 or a 0, so 32 bits can represent 2^32, or 4,294,967,296, different values. Thus, the obvious difference between 32-bit machines and 64-bit ones is that the 64-bit systems are bigger in both senses: they can address more memory, and they can do math with bigger numbers.
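To make the shorthand concrete, here is a quick back-of-the-envelope sketch (in Python, purely for illustration) of how many distinct values, and therefore how many distinct byte addresses, each register width allows:

```python
# An n-bit register can hold 2**n distinct values, so an n-bit
# address register can name 2**n distinct locations in memory.
def distinct_values(bits: int) -> int:
    return 2 ** bits

print(distinct_values(16))  # 65,536: a 64-kilobyte address space
print(distinct_values(32))  # 4,294,967,296: roughly 4 gigabytes
print(distinct_values(64))  # 18,446,744,073,709,551,616 bytes
```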
But more doesn’t necessarily mean better-it depends on what you are getting more of.
Down Memory Lane
The move from 32 bits to 64 bits matters most when it comes to these computers’ ability to address memory. A program running on a 32-bit computer can easily address 4 gigabytes of memory; recall that 2^32 is roughly 4.3 billion. A program running on a 64-bit machine, on the other hand, can address 2^64 bytes; that’s 4 billion times 4 billion, an astonishingly large number. Do the numbers and it’s clear that there is a lot more “headroom” on a 64-bit system. But the dual role of those bits, addressing and arithmetic, is responsible for a lot of confusion, as we will see.
IBM’s original personal computer used the Intel 8088 microprocessor, a funny little chip that was filled with weird engineering compromises. At its heart, the 8088 was a 16-bit processor: it had 16-bit math registers, allowing it to easily represent numbers between 0 and 65,535 (or between -32,768 and 32,767), and 16-bit address registers, allowing it to easily address 64 kilobytes of main memory. Now, 64K wasn’t enough to do much of anything, even in 1981 when the PC first shipped, so the 8088 had a set of segment registers that were shifted to the left 4 bits and added to the address register before memory was actually read or written. As a result, the 8088 could access up to one megabyte of memory. A megabyte was a lot of RAM back in 1981. Indeed, computer designers back then couldn’t imagine that a typical home or business user would need that much memory, let alone be able to afford it, for many years to come. So IBM’s designers drew a line across the computer’s memory map and put the memory for the video display right in the middle of the upper half, effectively limiting the early PCs to no more than 640 kilobytes of RAM. This was the genesis of the 640K limit that IBM’s hardware imposed on the DOS operating system.
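The segment trick is easy to sketch. In this illustrative Python model (register names and wrap-around behavior follow the 8088 as described above), the 16-bit segment value is shifted left 4 bits and added to the 16-bit offset, yielding a 20-bit, one-megabyte physical address space:

```python
# Model of 8088 segmented addressing: physical = (segment << 4) + offset,
# truncated to 20 bits (the 8088 had no address lines above one megabyte).
def physical_address(segment: int, offset: int) -> int:
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return ((segment << 4) + offset) & 0xFFFFF  # keep the low 20 bits

# The color video buffer sat at segment 0xB800, above the 640K line.
print(hex(physical_address(0xB800, 0x0000)))  # 0xb8000, i.e. 736K into the map
```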
A few years later, Intel introduced its next microprocessor, the 80286. (The 80186 never really made it into personal computers.) The 286 was the basis of IBM’s PC/AT. It had an emulation mode (called “real mode”) that let the 286 run the same software as the 8088, but it also had an advanced “protected” mode that let it use up to 16 megabytes of RAM. The vast majority of these machines were operated in real mode so that they could run Microsoft’s DOS and all of the other programs written for the original IBM PC. Indeed, the 286 became a far more popular way to run 8088 software than the 8088 itself, because the 286 was so much faster. When you get right down to it, very few 286 chips were ever run in protected mode.
In 1985 Intel introduced the 80386 chip, the first 32-bit processor in the x86 family. Once again, the microprocessor had a so-called “real mode” so that it could run DOS and the rest of the 8088 software base. These machines could run circles around the original 8088, not because they were 32-bit machines but because they had faster clock rates and a more sophisticated internal design. There were also a number of companies selling “DOS extenders” that let programs loaded under DOS take advantage of the full 32-bit address space. These extenders flipped the computer into 32-bit mode for math, but returned the machine to 16-bit mode whenever the program needed to access the hard drive. Nevertheless, running 32-bit programs on these 32-bit processors was the exception, not the rule.
It wasn’t until the 32-bit machines in the field vastly outnumbered the 16-bit ones that Microsoft started shipping its first real 32-bit operating system, Windows 95. By that time Intel had brought out two more generations of x86 machines: the 80486 and the Pentium. Yes, Microsoft could have delivered a 32-bit operating system years before Windows 95 shipped. But doing so would probably have been a mistake: why sell an operating system that won’t run on the majority of PCs out in the marketplace?
All of this history is suddenly relevant once again as we consider the next big jump in PC architecture, the shift from 32-bit to 64-bit computing. But while the payoff of moving from a 16-bit address space (or 20 bits, if you count the 8088’s segmented architecture) to 32 bits was huge, the move from 32 bits to 64 will barely be noticed by most computer users. The reason is that 32 bits is actually large enough for the vast majority of computing tasks, not just today’s but also tomorrow’s.
The move from 32 bits to 64 is unlikely to bring the same sort of quantum jump in speed or capability that we got moving from 16 bits to 32. Yes, a 64-bit address space is truly humongous, but 32 bits is nothing to sneeze at.
Today there are few applications that really need more than 4 gigabytes of memory. If what you are doing is word processing, spreadsheets, e-mail, and Web browsing, 32 bits is going to provide enough address space for the foreseeable future. My Windows desktop computer is a memory hog; its copy of Internet Explorer routinely bloats up to 64 megabytes. But that’s still only one sixty-fourth of the machine’s 4-gigabyte memory map. I can’t imagine running a Web browser that would require a 4-gigabyte memory map: it would take nearly 10 hours just to download that much information over my DSL line!
You might think that multitasking with other, similarly oversized applications would cause ever-increasing memory pressure, to the point where one does become concerned about address space usage. But that’s not the case. Windows, Unix, and other modern operating systems use a technique called virtual memory to give each program its own isolated memory map. On a 32-bit computer this means that every running program gets its own 4 gigabytes of virtual memory to play around with. So while a single instance of a running program can’t access more than 4 gigabytes, a 32-bit machine running Windows XP with 10 or 20 gigabytes of memory would have no trouble sharing that memory between a bloated browser, a bloated copy of Word 2003, and a bloated copy of Access.
Where that 64-bit address space makes a big difference is when a single program needs to access more than 4 gigabytes of memory at once. For example, if you are running a data warehouse for a multinational corporation with 10 terabytes of online storage, your database server might seriously benefit from having 10 or 20 gigabytes of index files stored in memory. A large-scale simulation could similarly benefit from having lots of RAM at its disposal for jobs like modeling the weather for the day after tomorrow.
With companies like Dell shipping home computers with 512 megabytes of RAM, and Windows XP computers routinely using 1.5 gigabytes of memory to hold all of their programs, the marketers pushing 64-bit computing are going to be saying that you need a 64-bit machine to break through the quickly approaching 4-gigabyte limit. Don’t believe it. In fact, Dell already sells 32-bit computers with 8, 16, and 32 gigabytes of RAM. The marketers want you to buy 64-bit machines because these systems cost more.
The other way that 64-bit machines surpass today’s 32-bit systems is when it comes to doing math. Whereas today’s 32-bit machines have processors that can represent any integer between 0 and 4,294,967,295 (that’s 2^32 - 1), a 64-bit machine can represent integers between 0 and 18,446,744,073,709,551,615 (2^64 - 1).
Once again, being able to do math with these huge numbers in a single instruction can be an enormous advantage in a small number of scientific applications. But it turns out that for most day-to-day office tasks, 64-bit integer math isn’t all that useful. For starters, we already have machines that can do 64-bit math: today’s machines just do it with special-purpose floating-point units, or else with multiple 32-bit instructions. For most operations, dedicated 64-bit integer hardware is simply not needed.
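The multiple-instruction approach deserves a small sketch. The function below (illustrative Python, not anything a compiler emits verbatim) adds two 64-bit integers using only 32-bit-sized pieces, carrying from the low half into the high half, which is essentially how a 32-bit processor handles 64-bit integers today:

```python
MASK32 = 0xFFFFFFFF  # the low 32 bits

# Add two 64-bit unsigned integers using two 32-bit additions plus a carry.
def add64_with_32bit_ops(a: int, b: int) -> int:
    lo = (a & MASK32) + (b & MASK32)      # add the low halves
    carry = lo >> 32                      # 1 if the low half overflowed
    hi = (a >> 32) + (b >> 32) + carry    # add the high halves plus the carry
    return ((hi & MASK32) << 32) | (lo & MASK32)

assert add64_with_32bit_ops(2**32 - 1, 1) == 2**32  # carry propagates upward
assert add64_with_32bit_ops(2**64 - 1, 1) == 0      # wraps, like real hardware
```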
You don’t need to take my word on this. Just look at the history of other 64-bit architectures. While 64 bits is new to the world of x86, other microprocessor families made the transition back in the 1990s. The Alpha, MIPS64, and SPARC64 are all 64-bit designs. Yet most of the programs running on these computers effectively ignore the top 32 bits of every number, because those bits are invariably 0.
The Real 64-bit Payoff: Newer Designs
All of these arguments against 64-bit machines melt into the woodwork, however, when you sit down in front of Apple’s new G5 computer: whether you’re editing video or simply browsing the Web, the machine feels dramatically faster than its 32-bit G4 cousins. So what gives?
With the notable exception of Intel’s Itanium processor, today’s 64-bit machines generally run 32-bit code faster than their 32-bit cousins for the same reason that the 32-bit Intel 80386 ran 16-bit code faster than the 8088 and the 80286: the 64-bit CPUs are simply more modern devices. These chips are made with more advanced silicon processes, they have higher clock rates, and they pack more transistors. AMD’s Athlon 64 and IBM’s G5 don’t just have wider registers: they also have more functional units inside their silicon brains. These chips do a better job at things like executing multiple instructions at once, out-of-order execution, and branch prediction. That 64-bit Power Mac G5 running in the Apple Store is largely running 32-bit code. The machine’s impressive speed comes from the combination of two processors, faster clock rates, a bigger cache, and a better memory bus.
Yes, AMD and IBM could have put that same technology into a new 32-bit design. But these days, designing a new chip costs billions of dollars. A 64-bit processor can command a higher price than a 32-bit CPU, so it is in the best interest of these companies to put their latest-and-greatest technology into their 64-bit products.
Looking forward, 64-bit computing will really catch on because the 64-bit machines will just happen to do a better job running today’s 32-bit code than today’s 32-bit processors do. But the market could easily evolve in another direction. Those extra 32 bits consume a lot of power, so companies building CPUs for laptops and handhelds might simply fold the tricks developed for 64-bit machines into their 32-bit devices.
The same thing has already happened in game consoles. Although there was a lot of excitement a few years ago when Nintendo chose the 64-bit R4300i processor for its Nintendo 64 system, video game players didn’t really benefit from the extra 32 bits of address space or math. The R4300i was a fast chip for its time because it implemented a lot of other state-of-the-art techniques for speeding program execution; a 32-bit processor could have provided the same level of performance if the same tricks had been applied to it. It was the tricks that brought the speed, not the bits.
Having lived through the jump from 8 bits to 16, then 16 to 32, and now 32 to 64, you might naturally think that sometime in the distant future we’ll be making the transition from 64-bit to 128-bit systems. Don’t hold your breath.
The important thing to remember here is that bits are exponential: each additional bit doubles the size of the address space. A 32-bit system can address 65,536 times as much memory as a 16-bit system, while a 64-bit system has a theoretical memory address space 4 billion times larger than that of a 32-bit system. You could, in principle, build a single memory system that would hold 2^64 bytes with today’s hardware, but you would need more than 70 million hard drives, each one holding 256 gigabytes of information. That’s more storage than was delivered by the entire world’s hard drive industry in 2003; to assemble it, you would probably have to use every computer in the world that’s connected to the Internet.
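The drive count is simple arithmetic, which this throwaway Python snippet reproduces (assuming decimal gigabytes, that is, 10^9 bytes per gigabyte):

```python
# How many 256-gigabyte hard drives would it take to hold 2**64 bytes?
bytes_needed = 2 ** 64             # a full 64-bit address space
drive_size = 256 * 10 ** 9         # one 256-gigabyte hard drive
print(bytes_needed // drive_size)  # roughly 72 million drives
```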
Although it’s possible to envision a future where computers will access 2^64-byte databases, it’s hard to conceive of a single problem that would require a program to have that much memory accessible in a single address space. One reason such an incredibly big system doesn’t make sense is that you wouldn’t build it with a single processor and a single unified address space: you would instead use millions or billions of processing elements, all with overlapping memory and responsibility. That way, if one processor or block of memory failed, the other systems would take over seamlessly.
Given such arguments, it’s pretty unreasonable to imagine that you would ever need 2^128 bytes of storage, not in our lifetime, not in anybody’s lifetime.
On the other hand, I could be completely wrong about all this: 64 bits could be just the thing for doing full-body virtual reality with mind-meets-mind-morphing capabilities. Or more likely, companies like Dell might choose to follow Apple’s lead and stop selling low-end machines with 32-bit processors, instead relying on the marketing hype of 64-bit machines to justify the higher profit margins.
But remember, there’s always room at the bottom. And since 32-bit machines are likely to be useful for at least a decade to come, if not longer, I would be surprised to see Dell cede this market to another company. Just look at Apple: while all of the Power Mac desktop machines that Apple sells come with G5 processors, the company is still using G4s in its iMac, eMac, and PowerBook computers.
Personally, I think that 32-bit systems will be with us for a long time to come.