128 bit processors


stormbind

Technical User
Mar 6, 2003
They aren't here yet, but I wonder what prevents them being made...

--Glen :)

Memoria mihi benigna erit qui eam perscribam
 
We haven't even got 64-bit working yet!

And why not 256/512/1024-bit, etc.? (No vision, these people!)
 
1024? Jesus lmao. I don't even want to think of the cooling required for when that comes out lol.
 
Actually, I found out that the reason is: we have rubbish CPU architecture! [sadeyes]

Wider CPU registers (32-bit, 64-bit, 128-bit, ...) mean larger binary strings (even for simple opcodes etc.), which incurs greater demands on bandwidth and storage resources.

In a nutshell, 128-bit would be a slow resource hog! Apparently, 64-bit is slower than 32-bit for most operations, since most operations are very simple.

Wider CPU registers are only worth having if you need larger binary strings (i.e. when manipulating very large numbers), which is presumably the case for current 3D graphics processors.
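For example, here's a rough sketch in standard C (my own illustration, nothing CPU-specific) of why wide registers only pay off for big numbers: adding two 64-bit values with nothing but 32-bit registers takes two additions plus carry handling, while a 64-bit register does it in a single operation.

#include <stdint.h>
#include <stdio.h>

/* 64-bit add built from 32-bit halves, the way a 32-bit CPU has to do it. */
static uint64_t add64_via_32(uint32_t a_lo, uint32_t a_hi,
                             uint32_t b_lo, uint32_t b_hi)
{
    uint32_t lo = a_lo + b_lo;            /* add the low words */
    uint32_t carry = (lo < a_lo) ? 1 : 0; /* did the low add overflow? */
    uint32_t hi = a_hi + b_hi + carry;    /* add the high words plus carry */
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t a = 0x00000001FFFFFFFFULL, b = 5;
    uint64_t native = a + b;    /* one add instruction on a 64-bit CPU */
    uint64_t split  = add64_via_32((uint32_t)a, (uint32_t)(a >> 32),
                                   (uint32_t)b, (uint32_t)(b >> 32));
    printf("%llu %llu\n", (unsigned long long)native,
                          (unsigned long long)split);  /* both print 8589934596 */
    return 0;
}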

So why don't we have binary strings of dynamic length? Hmm..

--Glen :)

Memoria mihi benigna erit qui eam perscribam
 
The IBM 1620 supported variable-length words, proving that it is not a technical impossibility.

Does anyone know why modern processors have not followed that path?
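Out of curiosity, here's a rough model in C of what variable-length arithmetic looks like. This is my own sketch, not the 1620's actual instruction set (which worked on flagged decimal digits): the operands are decimal digit strings of whatever length the data needs, and the "machine" walks them digit by digit, so the effective word size is set by the data rather than by a fixed register width.

#include <stdio.h>
#include <string.h>

/* Add two unsigned decimal digit strings of arbitrary length into out[].
 * (A leading zero appears when there is no final carry.) */
static void add_decimal(const char *a, const char *b, char *out, size_t outsz)
{
    size_t la = strlen(a), lb = strlen(b);
    size_t lo = (la > lb ? la : lb) + 1;      /* result field, one digit wider */
    if (lo + 1 > outsz) { out[0] = '\0'; return; }
    out[lo] = '\0';
    int carry = 0;
    for (size_t i = 0; i < lo; i++) {         /* least significant digit first */
        int da = (i < la) ? a[la - 1 - i] - '0' : 0;
        int db = (i < lb) ? b[lb - 1 - i] - '0' : 0;
        int s = da + db + carry;
        out[lo - 1 - i] = (char)('0' + s % 10);
        carry = s / 10;
    }
}

int main(void)
{
    char result[64];
    add_decimal("99999999999999999999", "1", result, sizeof result);
    printf("%s\n", result);   /* 100000000000000000000 -- a 21-digit "word" */
    return 0;
}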

Judging by the lack of response, this is probably not the right forum to discuss this. I'll just reply to myself until I get bored *rolleyes*

--Glen :)

Memoria mihi benigna erit qui eam perscribam
 
The x86 architecture used to be a mixed 8- and 16-bit architecture. The 8088 fetched everything 8 bits at a time. Internally the opcodes were 8 bits, while the data could be treated in 8-bit or 16-bit modes. The 8086 was internally the same but externally fetched everything in 16-bit chunks: when fetching code, it fetched two instruction bytes at a time and executed them one byte at a time.

The same approach continued with the 32-bit extensions and again with 64-bit extensions like the AMD64 instruction set. The BIOS of modern motherboards still runs in 8088 mode, with 8-bit instructions, even if the interface is 64 bits wide.

Motherboards with two RAM channels take a similar approach, fetching 128 bits at a time to speed up the overall data transfers between the processor and the outside world. But benchmarks showed this falls far short of doubling the speed of the system, so the rest of the CPU, the PC architecture, and the accompanying software have not evolved enough in performance to make this a worthwhile option.
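A crude way to see this for yourself (just an informal sketch; the numbers depend heavily on the compiler, the cache, and the memory subsystem, and an aggressive optimizer may rewrite both loops): copy the same amount of data in 32-bit and in 64-bit chunks and compare the times. The wider copy rarely comes close to being twice as fast, because memory latency, not transfer width, is the bottleneck.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BYTES (64u * 1024u * 1024u)   /* 64 MB per buffer */

int main(void)
{
    uint32_t *s32 = calloc(BYTES / sizeof(uint32_t), sizeof(uint32_t));
    uint32_t *d32 = calloc(BYTES / sizeof(uint32_t), sizeof(uint32_t));
    uint64_t *s64 = calloc(BYTES / sizeof(uint64_t), sizeof(uint64_t));
    uint64_t *d64 = calloc(BYTES / sizeof(uint64_t), sizeof(uint64_t));
    if (!s32 || !d32 || !s64 || !d64) return 1;

    clock_t t0 = clock();
    for (size_t i = 0; i < BYTES / sizeof(uint32_t); i++)   /* 32 bits per move */
        d32[i] = s32[i];
    clock_t t1 = clock();
    for (size_t i = 0; i < BYTES / sizeof(uint64_t); i++)   /* 64 bits per move */
        d64[i] = s64[i];
    clock_t t2 = clock();

    printf("32-bit copy: %ld ticks\n64-bit copy: %ld ticks\n",
           (long)(t1 - t0), (long)(t2 - t1));

    free(s32); free(d32); free(s64); free(d64);
    return 0;
}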


 
It really depends on what "128-bit" refers to. In AMD & Intel CPUs, it refers to how many bits are used for memory addressing. Current 64-bit CPUs can handle up to 17,179,869,184 gigabytes of memory.

So, you tell me why 128-bit CPUs are not being developed.
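That figure checks out: a 64-bit address reaches 2^64 bytes, and 2^64 / 2^30 = 2^34 = 17,179,869,184 gigabytes (16 exabytes). A quick check in C:

#include <stdio.h>

int main(void)
{
    /* 2^64 bytes of address space / 2^30 bytes per gigabyte = 2^34 GB */
    unsigned long long gigabytes = 1ULL << 34;
    printf("64-bit address space = %llu GB\n", gigabytes);  /* 17179869184 */
    return 0;
}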
 
The 8080, an 8-bit CPU, has 16 bits of addressing.
The 8088 and 8086, 16-bit CPUs, have 20 bits of addressing.
The 80286, a 16-bit CPU, has 24 bits of addressing.
The Athlon64 has 40 bits of addressing (only about a thousand gigs :).

The CPU width is about the width of the data that can be processed internally, not the address range.
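To put numbers on those address widths (my own quick sketch, nothing chip-specific):

#include <stdio.h>

int main(void)
{
    /* Address bits quoted above: 8080, 8088/8086, 80286, Athlon64 */
    int addr_bits[] = { 16, 20, 24, 40 };
    for (int i = 0; i < 4; i++) {
        unsigned long long bytes = 1ULL << addr_bits[i];
        printf("%2d address bits -> %llu bytes\n", addr_bits[i], bytes);
    }
    /* 16 -> 64 KB, 20 -> 1 MB, 24 -> 16 MB, 40 -> 1 TB (about a thousand gigs) */
    return 0;
}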


 
There may be another, more boring reason.....
Cost...

Are you going to throw tens or hundreds of billions into a chipset that software doesn't require and people don't really need? You'll probably find they are working on 128-bit chipsets already. After all, 64-bit chipsets have been around for donkey's years, just not mainstream.

People are now finding it cheaper and easier to cluster multiple CPUs than to build one big monster, hence all the newer supercomputers are clusters of thousands of "standard" processors. On top of that, if one fails, you're going to fork out a couple of hundred rather than tens of thousands for a single processor.

Stu..

Only the truly stupid believe they know everything.
Stu.. 2004
 
I think Wikipedia has a good summary of what you will find after googling and piecing it all together:

Wikipedia said:
32 vs 64 bit

A change from a 32-bit to a 64-bit architecture is a fundamental alteration, as most operating systems must be extensively modified to take advantage of the new architecture. Other software must also be ported to use the new capabilities; older software is usually supported through either a hardware compatibility mode (in which the new processors support an older 32-bit instruction set as well as the new modes), through software emulation, or by the actual implementation of a 32-bit processor core within the 64-bit processor die (as with the Itanium2 processors from Intel). One significant exception to this is the AS/400, whose software runs on a virtual ISA which is implemented in low-level software. This software, called TIMI, is all that has to be rewritten to move the entire OS and all software to a new platform, such as when IBM transitioned their line from 32-bit POWER to 64-bit POWER. Another significant exception is IBM z/Architecture which readily handles applications concurrently with different addressing expectations (24, 31, and 64 bit).

While 64-bit architectures indisputably make working with huge data sets in applications such as digital video, scientific computing, and large databases easier, there has been considerable debate as to whether they or their 32-bit compatibility modes will be faster than comparably-priced 32-bit systems for other tasks.

Theoretically, some programs could well be faster in 32-bit mode. Instructions for 64-bit computing take up more storage space than the earlier 32-bit ones, so it is possible that some 32-bit programs will fit into the CPU's high-speed cache while equivalent 64-bit programs will not. In basic terms moving 64 bits at a time to perform otherwise 32 bit work simply requires more processing effort to/from memory. However, in applications like scientific computing, the data being processed often fits naturally in 64-bit chunks, and will be faster on a 64-bit architecture because the CPU will be designed to process such information directly rather than requiring the program to perform multiple steps. Such assessments are complicated by the fact that in the process of designing the new 64-bit architectures, the instruction set designers have also taken the opportunity to make other changes that address some of the deficiencies in older instruction sets by adding new performance-enhancing facilities (such as the extra registers in the AMD64 design).


Pros and cons
A common misconception is that 64-bit architectures are no better than 32-bit architectures unless the computer has more than 4 GB of memory. This is not entirely true:

Some operating systems reserve portions of each process' address space for OS use, effectively reducing the total address space available for mapping memory for user programs. For instance, Windows XP DLLs and userland OS components are mapped into each process' address space, leaving only 2 or 3 GB (depending on the settings) address space available, even if the computer has 4 GB of RAM. This restriction is not present in Linux or 64-bit Windows.
Memory mapping of files is becoming more dangerous with 32-bit architectures, especially with the introduction of relatively cheap recordable DVD technology. A 4 GB file is no longer uncommon, and such large files cannot be memory mapped easily to 32-bit architectures. This is an issue, as memory mapping remains one of the most efficient disk-to-memory methods, when properly implemented by the OS.
The main disadvantage of 64-bit architectures is that relative to 32-bit architectures the same data occupies slightly more space in memory (due to swollen pointers and possibly other types and alignment padding). This increases the memory requirements of a given process and can have implications for efficient processor cache utilization. Maintaining a partial 32-bit model is one way to handle this and is in general reasonably effective. In fact, the highly performance-oriented z/OS operating system takes this approach currently, requiring program code to reside in any number of 31-bit address spaces while data objects can (optionally) reside in 64-bit regions.
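To make the quote's "swollen pointers" point concrete, here's a minimal C sketch (my own illustration, not from the article; exact sizes depend on the compiler and data model):

#include <stdio.h>

struct node {
    int value;          /* 4 bytes on typical 32- and 64-bit targets */
    struct node *next;  /* 4 bytes on a 32-bit build, 8 on a 64-bit (LP64) one */
};

int main(void)
{
    printf("sizeof(void *)      = %zu\n", sizeof(void *));
    printf("sizeof(struct node) = %zu\n", sizeof(struct node));
    /* Typically 8 bytes per node on 32-bit vs 16 on 64-bit (with padding),
     * so the same linked list holds half as many nodes per cache line. */
    return 0;
}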


~cdogg
"Insanity: doing the same thing over and over again and expecting different results." - Albert Einstein
For general rules and guidelines to get better answers, click here: faq219-2884
 