<includeonly>[[Image:bits.png|96px|right]]</includeonly><noinclude>[[Image:bits.png|180px|right]]</noinclude>Bits of bits... how many bits should my computer be, and why should I care? If 16 is good, then 32 must be twice as good. And if 32 is good, then 64 has to be great. The truths of engineering aren't that clear. If you need 64 bits, it's great... but much of the time, you need more, or less.
 
<noinclude>
 
== So what is 64 bits? ==
 
There are three ways to measure a processor's "size":
* How many bits of data a processor works with
* How many bits a processor uses to address memory
* How many bits can move around at once

== History of Data Size ==

A processor works with certain data sizes at a time.
* Microcomputers started at 4 bits (with the Intel 4004). That turned out to be too little data to do anything of value -- even then, a character usually took 8 bits, so with a 4 bit computer you were always having to do multiple 4 bit instructions to process a single 8 bit chunk (character) of data. That's not optimum.
* Quickly, 8 bits became the standard. 8 bits made sense since a single character of text (upper or lower case, and all numbers and symbols) took 7 or 8 bits to encode. So 8 bits was a good size... and that lasted for a few years.
* While 8 bits was good for characters (back when characters were only 8 bits), it wasn't as good for number crunching. An 8 bit number (2^8) is only any whole number between 0 and 255. To do heavy math, you needed to work with more bits at once. The more the merrier. 16 bits could get you a value between 0 and 65,535 (an integer), or -32,768 to +32,767 if you liked signed math -- which is a lot more detail in a single pass. On top of that, instead of just having 256 different characters (Roman alphabet with symbols and accents), we went to Unicode (UTF-16), which usually used 16 bits of data and allowed for 65,000+ characters, which could add in most other languages.
* While 16 was better than 8 bits for math, lots of numbers in daily use are larger than 65,000 -- so 16 bits was also requiring double-passes to get things done. Thus if 16 bits was better for math, then 32 was better still. 32 bits allowed a range of 0 - 4,000,000,000 (or -2B to +2B signed). That was good enough for 99%+ of integer math. And with some tricky encoding, you could actually get a near infinite range of numbers with 8 digits of accuracy (fixed or floating-point math: a concept where the computer sacrifices some of the resolution of the number so that it can have an exponent (multiplier), basically allowing numbers much larger, much smaller, and with a decimal point).
* Then along came 64 bits, and since this stuff is exponential, it gave us a lot more headroom for scientific stuff -- in a single pass (instruction). You could always do 64 bit, or 128 bit, math even with a 4 bit processor; it just took a lot more passes (instructions). While 32 bits was good enough for most things (and worked from the mid 80's until the mid 2000's), for some scientific applications (floating point, and large integers), 64 bit was better.

In the early 1980's people used to add special "floating point processors" or FPU's (Floating Point Units) to help the main processor do this kind of math -- and make microcomputers behave like big mainframes and lab computers. By the early 90s, floating point units got added to the main processors (and are integral) -- and we've stayed there ever since. But there is a separation between kinds of data: 32 bits for integers (or short floats), and 64 bits for long floats.
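
To make those sizes concrete, here's a minimal C sketch (just an illustration, using the fixed-width types from stdint.h) that prints how many bits each type uses and the range it can hold:

<pre>
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Fixed-width integer types: the number in the name is the number of bits. */
    printf("uint8_t  : %u bits, 0 to %llu\n", (unsigned)(8 * sizeof(uint8_t)),  (unsigned long long)UINT8_MAX);
    printf("uint16_t : %u bits, 0 to %llu\n", (unsigned)(8 * sizeof(uint16_t)), (unsigned long long)UINT16_MAX);
    printf("uint32_t : %u bits, 0 to %llu\n", (unsigned)(8 * sizeof(uint32_t)), (unsigned long long)UINT32_MAX);
    printf("uint64_t : %u bits, 0 to %llu\n", (unsigned)(8 * sizeof(uint64_t)), (unsigned long long)UINT64_MAX);

    /* Signed types trade half the range for negative numbers. */
    printf("int16_t  : %lld to %lld\n", (long long)INT16_MIN, (long long)INT16_MAX);
    printf("int32_t  : %lld to %lld\n", (long long)INT32_MIN, (long long)INT32_MAX);

    /* Floating point: a float is usually 32 bits, a double 64 bits. */
    printf("float    : %u bits\n", (unsigned)(8 * sizeof(float)));
    printf("double   : %u bits\n", (unsigned)(8 * sizeof(double)));
    return 0;
}
</pre>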
  
 
== How many bits of data ==
 
When we used to ask how many bits of data a processor worked with, it was easy. There was one unit, and it always worked in that amount of bits.
  
Nowadays there are 3 primary ALUs (arithmetic units), and each works on different sizes:
* Integer units are for smaller sized stuff
* Floating point units are for higher resolution math (see the sketch just after this list)
* Vector units are even larger registers (128 or 256 bits) for doing the same thing to multiple smaller items (often many 8 or 16 bit chunks at the same time). Great for managing pixels, or characters.
* A GPU is like a vector unit on steroids: it can have hundreds of processors paired together, that all do the same thing to multiple smaller data chunks at the same time. Great for graphics.
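
As a rough illustration of what "higher resolution math" means (a sketch, not tied to any particular processor): a 32 bit float only keeps about 7 decimal digits, while a 64 bit double keeps about 15-16, so the same calculation can come out visibly different:

<pre>
#include <stdio.h>

int main(void) {
    /* The same big number stored at two precisions. */
    float  f = 16777217.0f;     /* 2^24 + 1: one more than a float can hold exactly */
    double d = 16777217.0;

    printf("as float : %.1f\n", f);   /* prints 16777216.0 -- the last digit is lost   */
    printf("as double: %.1f\n", d);   /* prints 16777217.0 -- double has bits to spare */

    /* Accumulated error: add 0.1 ten million times (should be 1,000,000). */
    float  fs = 0.0f;
    double ds = 0.0;
    for (int i = 0; i < 10000000; i++) { fs += 0.1f; ds += 0.1; }
    printf("float  sum: %f\n", fs);   /* drifts noticeably from 1000000 */
    printf("double sum: %f\n", ds);   /* much closer to 1000000        */
    return 0;
}
</pre>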
  
A few things are changing. One is that after 40 years, we're finally moving away from 8 bits to represent a character of text. Instead we're using 16 bits (and Unicode). This allows a character to represent not only any Roman character, or a character in one language, but lets each character be unique across any language in the world -- and even beyond: some were petitioning to add Klingon and Elvish (science fiction and fantasy character sets), for those of us who speak and need to type in Elvish. This added width will make computers nominally less efficient in speed, and a bit more inefficient in storage; but wider computers will help compensate. However, Unicode is a very minor motivation. The real reasons for moving are more tangible.

This is all great for data fidelity -- but we also wanted to deal with more data: how much the computer could address (or see/access at one time).
 
 
While 32 bit integers are good enough for 99% of computing math -- things are easier in your computer if your processor's integers match at least the width (the total size) of its addresses. This is so the computer can do math easily on its own addresses, and see anywhere in its memory as one flat space. Since computers are getting more memory, i.e. wider addresses, and going beyond 32 bits (or a mere 4 gigabytes), why not give them the full next power of two, or 64 bits of address space? This isn't a huge win for data speed. In fact, since addresses now take twice as much memory to remember, it is actually a minor loss for some things. But that flat, integer-sized address space is the biggest motivation for going to 64 bit addresses.
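
One way to see the "math on its own addresses" point: an address is just a number, and indexing into an array is literally integer arithmetic on that number. A small sketch (uintptr_t is an integer type big enough to hold a pointer; on a 64 bit system it is 64 bits wide):

<pre>
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int data[4] = { 10, 20, 30, 40 };

    /* A pointer is an address; uintptr_t lets us treat it as a plain integer. */
    uintptr_t base  = (uintptr_t)&data[0];
    uintptr_t third = base + 2 * sizeof(int);   /* address math: base + index * element size */

    printf("pointer size here: %u bits\n", (unsigned)(8 * sizeof(void *)));
    printf("data[2] via normal indexing: %d\n", data[2]);
    printf("data[2] via address math   : %d\n", *(int *)third);
    return 0;
}
</pre>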
 
  
 
== How many bits of address ==
 
Now a computer has an address for each and every memory location. 8 bits of address means that your computer can address 256 addresses (locations) -- usually these were each one byte long, but in theory, they could be as wide as the computer needed them to be.

256 addresses isn't much -- so even 8 bit computers would often work with 16 bits of address to enable them to work with 65,536 bytes (or address 64K of memory). You'd be surprised what we could do with computers back then, even with that little memory. (The 70s were 16 bit addressing... mostly). Now the little controller in your mouse is more powerful than the 1970s computers that I started programming on.  
 
  
32 bit addresses caught on in the mid 80s and stayed popular a lot longer. A 32 bit address can deal with 4 billion addresses (4 gigabytes of memory). 32 bit addresses have been standard for quite some time, and will be for a while. But we are starting to get to the point where 4 gigabytes of RAM isn't that much. For some large databases or large 3D or math problems, 4 billion locations is very small. Now most of us aren't mapping the human genome on our home computers, so it isn't like we're all bumping our heads daily. But it is getting to the point where video and graphics work especially could use more space. So we want to make room for it now. And so designers are looking at jumping to 64 bits, or roughly 16 exabytes of memory, to prepare for the future.
  
{{T1 | Now an exabyte is a quintillion memory locations: 1,000,000,000,000,000,000, or enough memory to track every cell of every person on the face of the earth. So we shouldn't bump our heads on that limit any time soon. The naming goes: mega (million), giga (billion), tera (trillion), peta (quadrillion), exa (quintillion). Strictly, those prefixes are powers of ten, while memory comes in powers of two -- 2^60 bytes (an "exbibyte") is about 1.15 quintillion bytes -- so a full 64 bit address space is really about 18.4 quintillion bytes, a bit more than an even 16 exabytes. But what's an extra couple quintillion bytes among friends?}}
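
A quick back-of-the-envelope sketch of how fast address space grows with the number of address bits (the 64 bit line is that 18-quintillion-ish number):

<pre>
#include <stdio.h>
#include <math.h>

int main(void) {
    int widths[] = { 8, 16, 32, 64 };

    for (int i = 0; i < 4; i++) {
        int bits = widths[i];
        double bytes = pow(2.0, bits);   /* 2^bits addressable locations */
        printf("%2d address bits -> %.0f addressable bytes\n", bits, bytes);
    }
    /* Prints 256, then 65536 (64K), then ~4.29 billion (4 GB),
       then 18446744073709551616 (~18.4 quintillion, i.e. 16 exbibytes). */
    return 0;
}
</pre>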
 
          
 
          
For a few problems the extra address space helps; but not as much as you might think. 64 bits of addressing is a heck of a lot of memory, and as I said, 32 bits is good enough for most users today (and for the next 4 or 5 years or so). So going from 32 to 64 bit addressing isn't a huge win for the average user, most of the time. And it comes with a cost: if you have to double the size of every address, everything gets bigger. (It takes more memory, and has to move more stuff around to do the same job.)
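
Here's a small sketch of that cost, assuming a typical 32 bit (ILP32) versus 64 bit (LP64) compile: nothing about the little structure below changes except the size of the pointer, yet the whole thing roughly doubles:

<pre>
#include <stdio.h>

/* A classic linked-list node: one pointer plus a small payload. */
struct node {
    struct node *next;
    int value;
};

int main(void) {
    /* On a typical 32 bit build: 4 (pointer) + 4 (int) = 8 bytes.
       On a typical 64 bit build: 8 (pointer) + 4 (int) + 4 (padding) = 16 bytes. */
    printf("pointer size: %u bytes\n", (unsigned)sizeof(void *));
    printf("node size   : %u bytes\n", (unsigned)sizeof(struct node));
    return 0;
}
</pre>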
  
The common work-around for 32 bit computers is that the computer just has many pages of 4 gigabyte chunks, and flips around which 4GB page it is looking at at a time. (There are other techniques too -- segmenting, branch islands, relative addressing and so on -- that all work around addressing more memory than fits in one page.) Just as you might find it annoying and slow to flip back and forth between two pages to compare something, the computer and many programs aren't exactly thrilled with it either; they prefer one big page, and given the choice, flatter is better. Still, this only rarely cost much in overhead (for paging around), so 32 bit addresses lasted for 30+ years, and even with 64 bit (or larger) computers, many will stick with smaller (32 or 40 bit) addressing.

{{T1 | Before you tell me how much old Intel processors, with their 64K pages and 640K limits, sucked: remember that was because DOS and Windows didn't handle it well. Other compilers, languages, OS's and processors handled these paging issues much better. So the problems weren't all fundamental, most were just with the particular implementations.}}
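
A toy sketch of that page-flipping idea (purely illustrative -- real hardware does this with MMUs, bank-switch registers, or things like Intel's PAE, not a C array): the program can only "see" one small window at a time, and has to switch windows before touching anything outside it.

<pre>
#include <stdio.h>

#define WINDOW_SIZE 16                   /* pretend we can only address 16 bytes at once */
#define TOTAL_SIZE  (4 * WINDOW_SIZE)    /* ...out of a larger 64 byte "physical" memory */

static unsigned char physical[TOTAL_SIZE];  /* the big memory we can't see all at once   */
static int current_window = 0;              /* which 16 byte page is mapped in right now */

/* Select which page of physical memory the small window refers to. */
static void select_window(int w) { current_window = w; }

/* All reads and writes go through the window: only WINDOW_SIZE bytes are reachable. */
static unsigned char read_byte(int offset)          { return physical[current_window * WINDOW_SIZE + offset]; }
static void write_byte(int offset, unsigned char v) { physical[current_window * WINDOW_SIZE + offset] = v; }

int main(void) {
    select_window(3);             /* flip to the last page...                       */
    write_byte(5, 42);            /* ...and write somewhere inside it               */

    select_window(0);             /* flip away: offset 5 now means a different byte */
    printf("window 0, offset 5: %d\n", read_byte(5));   /* 0  */

    select_window(3);             /* flip back to get our value                     */
    printf("window 3, offset 5: %d\n", read_byte(5));   /* 42 */
    return 0;
}
</pre>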
  
 
== How many bits can move at once ==
 
{{T1|In the late 1990's, there was a new game in town: short-throw vector processing, or single-instruction multiple-data (SIMD). This is known by the names of AltiVec on the PPC, or MMX and SSE (or 3DNow) on x86 processors. The concept of vector processing goes back to the supercomputers of the 1970's and 80's, like the Cray-1, and even before. The biggest change is that SIMD breaks a long data chunk (128-256 bits) into many smaller pieces (1, 4, 8, 16, 32 or 64 bit parts); or in other words, a single instruction can work on multiple data elements at once. So instead of the 128 bit AltiVec unit being just one big register, it can behave like 16 individual 8 bit computers, or 8 individual 16 bit computers, or 4 individual 32 bit computers, or even like many 1 bit computers, or a single 128 or 256 bit computer. It is very versatile. So it isn't just bigger, it is better, for some things.}}
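
Here's a minimal sketch of that idea using the x86 SSE2 intrinsics mentioned above (assuming an x86/x86-64 compiler; other architectures spell this differently): one 128 bit register holds sixteen 8 bit values, and a single instruction adds all sixteen pairs at once.

<pre>
#include <stdio.h>
#include <emmintrin.h>   /* SSE2 intrinsics (x86 / x86-64) */

int main(void) {
    unsigned char a[16], b[16], out[16];
    for (int i = 0; i < 16; i++) { a[i] = (unsigned char)i; b[i] = 100; }

    /* Load 16 bytes into each 128 bit register, add all 16 lanes in one instruction, store. */
    __m128i va  = _mm_loadu_si128((const __m128i *)a);
    __m128i vb  = _mm_loadu_si128((const __m128i *)b);
    __m128i sum = _mm_add_epi8(va, vb);
    _mm_storeu_si128((__m128i *)out, sum);

    for (int i = 0; i < 16; i++) printf("%d ", out[i]);   /* 100 101 102 ... 115 */
    printf("\n");
    return 0;
}
</pre>
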
While the computer (processor) is some number of bits wide, sometimes it talks to memory (or peripherals) over smaller or larger buses (connections). Obviously being the same width as the processor is good. But since processors are faster than memory, what if the bus could load 2 things at once? That would keep the processor fed better. A few designs did this, but all those connections on a bus are expensive and hard to run over distances (even ones as small as inside a computer case), so often the bus is narrower than the CPU. Internal to the CPU, when it's connecting one part to another, things can run much wider.
 
 
Now ironically, just because a computer (processor) works with certain sized data (in registers), or has a certain size address space, does not mean that it moves around that much data at one time. Computers have different areas that are different widths. There is the size of the internal registers, the size of the math it can do at once (ALU -- Arithmetic Logic Unit), the size (width) of the cache, and the size of the bus (the channel / pipe from the cache to the memory).
 
 
 
The bus going from the processor to main memory (the memory bus) may be different from the path going from the processor to the internal cache. We used to care about the processor-to-main-memory path the most (before caches) -- but nowadays, 95% of the time (or more) when the processor is accessing something, it is getting it from the cache. So the cache width is more important, right? Not as much as you might think, because main memory is up to 10 times slower. So there is a balancing act in design, between all the sizes in your system. And if you make one part that is 10 or 100 times faster than the rest, it is just wasted potential, because it sits and waits for the other parts to catch up all the time.
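
To see why both the cache and main memory matter, here's a back-of-the-envelope sketch using the numbers above (95% of accesses hit the cache, main memory roughly 10 times slower -- the time units are made up for illustration):

<pre>
#include <stdio.h>

int main(void) {
    double hit_rate    = 0.95;   /* fraction of accesses served by the cache (from the text) */
    double cache_cost  = 1.0;    /* cost of a cache hit, in arbitrary time units             */
    double memory_cost = 10.0;   /* main memory ~10x slower (from the text)                  */

    /* Average cost per access = hits at cache speed + misses at memory speed. */
    double average = hit_rate * cache_cost + (1.0 - hit_rate) * memory_cost;

    printf("average access cost: %.2f units (vs 1.00 if everything hit the cache)\n", average);
    return 0;   /* prints 1.45 -- the 5% of misses add almost half again to the average */
}
</pre>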
 
 
 
{{T1 |Interestingly, the PowerPC (called a 32 bit processor) has a 64 bit bus. If you go off-chip, it moves 64 bits at the same time. Even for integers, it has to move 64 bits to the processor (from memory), even if it only needs to see 32 bits of what it loads. So by bus width, it is a 64 bit processor. The 68000 was a 32 bit computer that worked with 32 bits internally, had a 32 bit ALU, and had 32 bit registers -- but it only had a 16 bit bus. Meanwhile the Intel 8088 was an 8 bit computer that could pair registers (to pretend to be 16 bits) and had an 8 bit bus and a 16 bit ALU. So the press and PC-advocates called both the 8088 and the 68000 16 bit computers; even when the 68000 was often four times the computer that the 8088 was. Pro-Intel bias is nothing new.}}
 
       
 
Internally, processors change from version to version. Some have 64, 128 or 256 bit wide internal paths, or are wider to cache than they are to their memory bus. This is mainly because the cost of wider memory is significant. This is also a game with some of Intel's memory (like RDRAM), which is faster but not as wide; so it has to be faster just to break even. There are lots of games with what constitutes processor width.
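
The "faster but narrower" trade-off is just multiplication: bandwidth is roughly bus width times transfer rate. A sketch with made-up but representative numbers (not the actual RDRAM/SDRAM specs):

<pre>
#include <stdio.h>

int main(void) {
    /* Bandwidth ~= (bus width in bytes) x (transfers per second). Illustrative numbers only. */
    double wide_slow   = 8.0 * 100e6;   /* 64 bit (8 byte) bus at 100 MHz */
    double narrow_fast = 2.0 * 400e6;   /* 16 bit (2 byte) bus at 400 MHz */

    printf("wide & slow  : %.0f MB/s\n", wide_slow / 1e6);    /* 800 MB/s */
    printf("narrow & fast: %.0f MB/s\n", narrow_fast / 1e6);  /* 800 MB/s -- 4x the clock just to break even */
    return 0;
}
</pre>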
 
 
 
Another internal channel is for doing math: the ALU. The rest of the processor is basically for moving things around and doing simpler instructions (loops, branches, conditionals, etc.), but the ALU is where you crunch numbers. The G3 has a 32 bit ALU, while most floating-point instructions are 64 bits, so it takes two passes (twice as long). The G4 has a full 64 bit ALU, so it takes a single cycle -- and in fact, the G4 also has another 128 bit ALU to do vector instructions (and it can do so at the same time it is doing integer and floating point). The Pentiums have a 64 bit ALU (mostly), but for some things it is as slow (or slower) than the 32 bit ALU in the G3. And most Pentiums didn't have a full 128 bit ALU, so were never as good for vectors. AMD has a better ALU than the Pentiums, and that is one of the reasons why their processors are faster at the same clock speed. So it can get pretty complex pretty fast, and it isn't just about bits, but about how well the processor is designed.
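
A rough sketch of why a narrower ALU takes extra passes (shown with integers for simplicity; the floating point case is messier but the idea is the same): to add two 64 bit numbers with only 32 bit operations, you add the low halves, then add the high halves plus the carry -- two passes instead of one.

<pre>
#include <stdio.h>
#include <stdint.h>

/* Add two 64 bit numbers using only 32 bit additions (what a 32 bit ALU has to do). */
static uint64_t add64_with_32bit_alu(uint32_t a_hi, uint32_t a_lo,
                                     uint32_t b_hi, uint32_t b_lo) {
    uint32_t lo    = a_lo + b_lo;          /* pass 1: add the low 32 bits          */
    uint32_t carry = (lo < a_lo) ? 1 : 0;  /* did the low half wrap around?        */
    uint32_t hi    = a_hi + b_hi + carry;  /* pass 2: add the high 32 bits + carry */
    return ((uint64_t)hi << 32) | lo;
}

int main(void) {
    uint64_t a = 0x00000001FFFFFFFFULL;    /* a value that forces a carry */
    uint64_t b = 0x0000000000000001ULL;

    uint64_t two_pass = add64_with_32bit_alu((uint32_t)(a >> 32), (uint32_t)a,
                                             (uint32_t)(b >> 32), (uint32_t)b);
    printf("two 32 bit passes: 0x%016llx\n", (unsigned long long)two_pass);
    printf("one 64 bit add   : 0x%016llx\n", (unsigned long long)(a + b));  /* same answer */
    return 0;
}
</pre>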
 
 
 
So you can see that there are a lot of little areas where data is moving around -- and they are all different sizes for different reasons.
 
       
 
 
 
== What about compatibility? ==
 
 
 
Basically, it isn't that hard to make predictions here. The basics are that most new "64 bit" processors will have older "32 bit" modes. Applications and operating systems that use those modes won't be able to take advantage of more than 4 gigabytes of memory at once, but once again, this is not a huge deal for most applications.
 
 
 
In the PowerPC camp, there is backwards support in the hardware. And there are a few ways to make the OS and software compatible. All in all, it is not a huge effort; but changes to the OS take time. There are also a few ways Apple can do it. The easiest way is to have the OS keep the old 32 bit way for most apps, add a "new" 64 bit clean way to compile new apps, and just let programs evolve over time. Or there is changing the entire OS at once, and making sure everything (including all the OS's APIs) can work in flat 64 bits; but that is generally harder and more time consuming. Apple could also be their "new" annoying self, and make one set of APIs 64 bit clean (like Cocoa), and not the other (Carbon), as a way to try to force people into the API that they want. But I don't think that "migration at gunpoint" is looked on favorably by most customers and developers, nor do I think that it would be a good idea.
 
 
 
Since I don't know which approach Apple is going to take, it is hard to guesstimate how much time it will take. I think Apple knew that 64 bits was coming, or should have, when doing OS X, so it shouldn't be a huge effort. But things were rushed, QA at the new Apple wasn't like the old Apple, and they might have been sloppy. So it might take longer. But in general, going to 64 bit is less effort (by far) than going from OS 9 to OS X.
 
 
 
In the Intel camp, Intel is trying to jump to 64 bit with the Itaniums, by changing the entire instruction set and design, and making it more like the PowerPC (more modern in instruction set, and more RISC-like, with some post-RISC design elements). And this requires whole new variants of the OS to work. So far, this strategy has been flopping. PCs have always been about cheap and backwards compatible, not about good or well designed, or accepting big changes. Every effort to fix major design flaws has flopped, and usually the more conservative efforts have succeeded. Intel is relearning this the hard way.
 
 
 
AMD is taking a much better approach, with a 32/64 bit hybrid chip with backwards compatibility. And OS's and Apps will probably take a more evolutionary approach as well. I suspect that AMD is going to win that game, right up until Intel copies AMD, makes a few things incompatible, and the PC market follows Intel.
 
 
 
 
== Conclusion ==
 
There is a Murphy's law of communication (or there should be) -- that no matter which way '''you''' mean something, others will assume you mean it a different way. And when talking about size, you could mean data size, path size (bus or internal), or address size. Generally, when we're talking nowadays about chip size (how many bits), we mean whether it has full 64 bit, non-paged, address and integer (data) support. Since chips already have 64, 128 or 256 bit support for other things, that's about the only thing left that is that small. But remember, for most work, I don't care about address space, I care about data size and speed; and we're already there.
 
 
Will 64 bits matter? For most users, it will matter very little. Since moving 64 bit addresses around will slow things down (and increase the space they take), there could be a minor performance and memory efficiency loss; however, I think other design improvements in the chips that offer 64 bit support will more than make up for the bulkier addresses and data creep (wasted space). So we'll get better performance, and they will be better chips, but almost none of that will be because they have 64 bit address and integer support.
 
  
There are definite areas where people will care about the larger address space. Large graphics, audio and publishing solutions, while not bumping their heads right now (very often), are starting to get close. Certainly large video, 3D and database solutions could use the full 64 bit support. So the big thing that 64 bit addressing does is buy us headroom for the future.  

Mostly, computers need to be balanced: how fast the processor is, against how fast the memory is, against what the programs you're using actually need. More than that just wastes battery or something else... so there are reasons that computers have been 64 bits (mostly) for the last couple decades, and will likely remain so for a lot longer: it fits the problems we're doing. There are special units for special functions that work a lot larger -- but they're special units because most of the time they are not needed. So I think of it like a range extender on an electric car: great when you need it, but just something extra to haul around when you don't.
  
The only constant is change. I'm sure some day we'll be talking about that annoying 64 bit address space limit, and making the jump to full 128 or 256 bit computers. But each transition has lasted us longer and longer, because there are fewer and fewer things that a computer can't do with the speed and memory that it can address. So unless there's some huge surprise that takes a ton of memory (like home genome sequencers, etc.), I expect that transition is many years, if not decades, away.

{{Footer| written= 2002.10.14 |edited=2019.06.30| }}
[[OWNER]]|78|[[/OWNER]]
[[Category:Hardware]][[Category:Tech]]
 
 
</noinclude>
 