More Bits Isn't Always Better

Computer manufacturers love to tout numbers as a marketing point. They brag about the gigahertz in their CPUs, the gigabytes (and now terabytes) in their storage, the pixels in their displays. 64 bit chips have been commonplace for over a decade now - quite a long time in computing - so why aren't 128 bit chips everywhere?

But first, what does it really mean for a computer to have a certain number of bits anyway?

When people refer to a machine being 64 bit, they are usually referring to the main general purpose processor. A modern computer or mobile device is actually a collection of many chips, with different processors dedicated to distinct tasks. The best known of the specialized processors is the "graphics processing unit" or GPU, but there are many others, including networking chips, sound chips, storage chips, and a controller for pretty much any device that needs to be managed. These chips can all have their own bitness and speed as well. But for this discussion, "64 bit machine" will mean the main processor is 64 bit.

To get a sense of why having more bits is good, let's use an everyday analogy. We humans reason using decimal digits ranging from 0 to 9, so let's use those instead of binary digits.

Let's imagine that you were a 1 digit human, so you could only keep track of single digit numbers in your head. Let's also imagine that you can keep track of four of these single digit numbers at a time (typical computers can keep track of a couple dozen numbers at once). Once you overflow what you can remember in your head, you have to start writing these numbers out to pieces of paper so that you can recall them later. For those who know a bit about computer hardware, these two components would be roughly analogous to the "CPU registers" and "RAM".

Now let's see how you would do some commonplace operations.

Suppose you're at the store and you want to buy something. As long as it costs under ten dollars, you can remember the price using a single number. If it costs ten dollars or more, you have a problem. What are your options?

You could use one of your other memory slots to remember the tens digit. That would let you price things from 0 to 99 dollars. That doesn't sound too bad, as long as you don't have much else to keep track of. You only have four memory slots, and you probably want at least one other slot to remember how much you've paid... and another slot to compute the change, to make sure you've gotten the right money back... so that's probably the maximum you can manage. You wouldn't even be able to remember how many items you purchased. Although in this case, since you've already calculated the price, maybe it doesn't matter.
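
To make that trick concrete, here's a minimal sketch in C of our hypothetical one digit shopper (the type and names are invented for illustration): each memory slot holds a single digit from 0 to 9, and adding two prices means adding the ones digits and carrying into the tens, just like grade school arithmetic.

    #include <stdio.h>

    /* A one digit shopper: each "slot" holds a single digit, 0-9. */
    /* A price up to 99 dollars takes two slots: tens and ones.    */
    typedef struct { int tens, ones; } price;

    /* Add two prices the way a one digit human would: add the     */
    /* ones, carry into the tens. Past 99, we're out of slots.     */
    price add_prices(price a, price b) {
        price sum;
        int ones = a.ones + b.ones;             /* may briefly hit 18 */
        sum.ones = ones % 10;                   /* keep a single digit */
        sum.tens = a.tens + b.tens + ones / 10; /* add the carry      */
        return sum;
    }

    int main(void) {
        price apples = {1, 7};                  /* $17 */
        price bread  = {0, 8};                  /* $8  */
        price total  = add_prices(apples, bread);
        printf("total: $%d%d\n", total.tens, total.ones);  /* $25 */
        return 0;
    }

This is exactly how real machines handle numbers bigger than a register: split them into register-sized chunks and carry between them.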

You could just never buy anything that costs ten dollars or more. If you needed several of a small item, you could repeat the transaction over and over, buying one each time, without exceeding the cost you can keep track of. If you were as fast as a computer, this actually wouldn't be that crazy an idea. Of course, this means you can't buy any single item that's expensive - or you'd have to ask the store to bill you in installments, each an amount you could keep track of.

What about an even simpler task: counting?

Let's say you were trying to keep track of the people you know - yes, that's right, a human address book. Well, once you hit ten people, you would have to play tricks like the ones above to keep adding more. Let's also hope that none of your friends has a name with more than nine letters.

This counting limit also caps what you can store outside your head. Earlier we said that once you start running out of room, you have to write things down on pieces of paper so you can recall them later. But it's not enough to write the data onto a piece of paper - you also need to record which piece of paper you wrote it on. For example: this person's birthday is on paper number 2; that person's photo data is on paper number 8. With one digit, though, you can only refer to ten pieces of paper, numbered 0 through 9. Computers use exactly these kinds of references to track how all their data is related.
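
In hardware terms, a "paper number" is a memory address, and the width of the number you can hold caps how much memory you can refer to. Here's a small sketch in C (the paper count and names are made up for illustration) where the reference type is deliberately 8 bits wide, so only 256 papers are reachable no matter how many exist:

    #include <stdint.h>
    #include <stdio.h>

    /* A "paper number" is just an index into storage, and the     */
    /* width of that index caps what you can refer to: an 8-bit    */
    /* reference reaches 256 papers, no matter how many you own.   */
    enum { NUM_PAPERS = 256 };

    char papers[NUM_PAPERS][80];      /* the pieces of paper (RAM) */

    typedef uint8_t paper_ref;        /* one-byte reference: 0-255 */

    int main(void) {
        paper_ref birthday_note = 2;  /* "birthday is on paper 2"  */
        paper_ref photo_note    = 8;  /* "photo data is on paper 8"*/

        snprintf(papers[birthday_note], sizeof papers[0], "birthday: March 3");
        snprintf(papers[photo_note],    sizeof papers[0], "photo: beach.jpg");

        printf("%s\n%s\n", papers[birthday_note], papers[photo_note]);
        return 0;
    }

Widen paper_ref to 16, 32, or 64 bits and the reachable storage grows from 256 entries to 65,536 to over four billion - that's the jump each bitness increase buys.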

Advanced programmers know that none of these problems are insurmountable, but they are a major headache to deal with (and back when 16 bit machines were common, many programmers really did have to do a lot of wizardry to work around some of these problems). All these workarounds make the programming process slower and more error prone, and make the end result run slower as well.

But although more bits let you do bigger computations and track more data at once, they come with some serious drawbacks.

First, each register in the CPU needs twice as much circuitry to hold twice as many bits. To keep the rest of the system efficient, we would also want to widen all the connections throughout the machine - you wouldn't want a 64 bit CPU and a 64 bit GPU talking over a narrower 32 bit link. Depending on what else you're doing with the chip, this might not be a serious concern; you might have more transistors than you know what to do with. On the other hand, in a seriously constrained system, you may find yourself dealing with power or heat dissipation issues.

Another serious problem is that all references use up more memory. Where a one digit person might say that some data is on paper number 7, a four digit person would have to say that some data is on paper number 0007. It is possible to use special schemes to pack these references and try to conserve some memory, but this means that every time you needed to follow a reference, you would need to unpack that number, and this eats into the performance you would have gained by increasing the bit size.
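
Real systems really do pack references this way - the Java virtual machine's "compressed oops" feature does something much like this, storing 32 bit values instead of full 64 bit pointers. Here's a minimal sketch in C of the general idea (the function names and the toy heap are my own invention):

    #include <stdint.h>
    #include <stdlib.h>

    /* Packed references: store a 32-bit offset from a shared base */
    /* instead of a full 64-bit pointer. Half the memory per       */
    /* reference, but every use pays an unpack step.               */
    static char *heap_base;           /* all data lives after this */

    typedef uint32_t packed_ref;      /* 4 bytes instead of 8      */

    packed_ref pack(void *p) {
        return (packed_ref)((char *)p - heap_base);
    }

    void *unpack(packed_ref r) {
        return heap_base + r;         /* extra work on every access */
    }

    int main(void) {
        heap_base = malloc(1 << 20);          /* a toy 1 MB "heap" */
        int *value = (int *)(heap_base + 64); /* an object in it   */
        *value = 42;

        packed_ref r = pack(value);   /* fits in half the space    */
        return *(int *)unpack(r) == 42 ? 0 : 1;
    }

Note the trade: the 32 bit offset only reaches 4 gigabytes past the base, so the scheme also caps how much memory the packed references can span.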

Could you have the best of both worlds? If you don't need the extra memory, couldn't you do 64 bit arithmetic while only using 32 bit addressing? Roughly speaking, yes, and there are plenty of applications for this. If you were doing image or audio processing, for example, your images or sound clips might comfortably fit within 32 bits of addressing (that's 4 gigabytes), but you might still want to do arithmetic 64 bits at a time for speed. Computer makers hit upon the same idea, so as different systems moved from 32 bit to 64 bit, some made both arithmetic and addressing 64 bit, while others made only the arithmetic 64 bit. Thus was created a whole new set of headaches for programmers, who had to juggle different sizes on different machines.
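
Linux's "x32" ABI is one real example of this mixed model: full 64 bit registers for arithmetic, but 32 bit pointers. As a sketch of the audio case above (the function and names are hypothetical), the math below runs through 64 bit values while a 32 bit index is plenty to reach every sample in a clip under 4 gigabytes:

    #include <stdint.h>

    /* 64-bit arithmetic, 32-bit "addressing": sum the samples     */
    /* with full 64-bit math, while a 32-bit index reaches every   */
    /* position in any buffer that fits in 4 gigabytes.            */
    uint64_t sum_samples(const uint64_t *samples, uint32_t count) {
        uint64_t total = 0;                   /* 64-bit accumulator */
        for (uint32_t i = 0; i < count; i++)  /* 32-bit index       */
            total += samples[i];
        return total;
    }

    int main(void) {
        uint64_t clip[4] = {1000, 2000, 3000, 4000};
        return sum_samples(clip, 4) == 10000 ? 0 : 1;
    }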

So when do we move to 128 bits?

Probably not soon. For one thing, most 64 bit chips already have special instructions to do arithmetic on chunks bigger than 64 bits. These instructions take a little time to set up, but run fast once the pipeline gets going. So if you want to process an image with millions of pixels, you can already get most of the benefit of processing more bits at a time.
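
Those special instructions are the SIMD vector extensions - SSE and AVX on Intel-style chips, NEON on ARM - which operate on 128 or more bits per instruction. And even in plain portable code, wider values can be built from 64 bit halves with a carry, just like the one digit shopper's tens-and-ones slots. A minimal sketch in C (the u128 type here is my own; GCC and Clang also offer a nonstandard built-in __int128):

    #include <stdint.h>

    /* A 128-bit number built from two 64-bit halves - the one     */
    /* digit shopper's tens-and-ones trick, scaled up.             */
    typedef struct { uint64_t lo, hi; } u128;

    u128 add128(u128 a, u128 b) {
        u128 r;
        r.lo = a.lo + b.lo;
        /* Unsigned addition wraps on overflow, so if the result   */
        /* came out smaller than an input, a carry occurred.       */
        r.hi = a.hi + b.hi + (r.lo < a.lo);
        return r;
    }

    int main(void) {
        u128 a = {UINT64_MAX, 0};     /* biggest 64-bit value      */
        u128 b = {1, 0};              /* adding 1 carries into hi  */
        u128 c = add128(a, b);
        return (c.lo == 0 && c.hi == 1) ? 0 : 1;
    }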

And what about references? For fun, here are some scales of different address sizes:

8 bits of addressing reaches 256 bytes - a sentence or two of text.
16 bits reaches 64 kilobytes - the entire memory of an early home computer.
32 bits reaches 4 gigabytes - a decent phone's worth.
64 bits reaches 16 exabytes - 4 gigabytes, four billion times over.
128 bits would reach about 3.4 × 10^38 bytes - vastly more than all the data humanity has ever stored.

Unless you're some major corporation or government agency trying to store data on a global scale, you probably don't need more than 64 bits of addressing. For now, 16 exabytes ought to be enough for anybody (for personal computing).


by Yosen
I'm a software developer for both mobile and web. Also: tech consultant for fiction, writer, pop culture aficionado, and enthusiast of STEM education, music, and digital arts.