history_monk wrote in liam_on_linux's journal, 2024-09-21 09:56 am (UTC)

On 128-bit, 256-bit and larger machines: there doesn't seem to be any need for them yet.

A bit of history: the first widespread 32-bit architecture was the IBM System/360, announced in 1964. Its registers were 32 bits wide, which in principle allows 4GB of RAM to be addressed, although the original machines only used 24 of those bits, for a 16MB limit. The first models that could be fitted with a thousandth of that 4GB, i.e. 4MB, shipped in 1967.

By the early 1990s, 32-bit addressing was starting to be a limitation. The first processors with 64-bit addressing shipped in 1991-92, and x86 got there in 2003-04. That kind of architecture can address 16EB (exabytes) of RAM; a thousandth of that would be 16PB (petabytes), or 16384TB. It's now more than 30 years since 64-bit addressing was introduced, and nobody builds systems with memories remotely that big: single-figure TB is reasonably common in servers. HP built a 160TB machine in 2017, but it was a one-off, part of a project that didn't work out.
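
Since those sizes jump between GB, PB and EB, here is a quick back-of-the-envelope check of the arithmetic in the last two paragraphs, as a small Python sketch of my own (it follows the same informal convention as above, where GB, PB and so on stand for powers of 1024):

    def human(nbytes: float) -> str:
        """Format a byte count with the informal binary prefixes used above."""
        for unit in ("B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"):
            if nbytes < 1024:
                return f"{nbytes:.4g} {unit}"
            nbytes /= 1024
        return "(beyond the named prefixes)"  # e.g. 2**128 bytes has no standard name

    for bits in (32, 64, 128):
        total = 2.0 ** bits
        print(f"{bits:>3}-bit: full address space {human(total)}, "
              f"one thousandth is about {human(total / 1000)}")

It prints 4 GB / 4.096 MB for 32-bit and 16 EB / 16.38 PB for 64-bit, matching the figures above, and runs out of named units entirely at 128-bit.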

Nobody needs a petabyte of RAM badly enough for it to be worth the cost. If someone invented a way to organise a computer so that huge RAM made it much faster, or more resistant to breakdowns or security vulnerabilities, such machines would get built. But those inventions haven't happened yet.

RISC-V, the newest architecture with a claim to be general-purpose, has reserved encoding space in its instruction set for 128-bit addressing. However, nobody has seriously tried to design those instructions, because we need to learn practical lessons from petabyte machines before we design 128-bit ones (the quantities of memory a 128-bit address could reach don't even have standard names yet, because nobody uses them).
