From: mash@mash.engr.sgi.com (John R. Mashey)
Newsgroups: comp.arch,comp.lang.c
Subject: Re: 64-bit chips, 32-bit compatibility?
Date: 28 Sep 1995 02:36:19 GMT
Organization: Silicon Graphics, Inc.

In article <44boso$ies@dg-rtp.dg.com>, kahn@romulus.rtp.dg.com (Opher
Kahn) writes:

|> >architecture for the next 30 years, i'd make it clear that it's really 64-bit,
|>               ^^^^^^^^^^^^^^^^^^^^^^
|> Well, I would be careful about such statements.  It was MUCH LESS than 30
|> years ago that people thought that 16 bits was a lot and 32 bits was more than
|> we would need this century.

Let's try a more detailed analysis.  I've posted something like this before,
but forgot to save it, so let me try again:
PHYSICAL ADDRESSING
1) For many years, DRAM has gotten 4X larger every 3 years, i.e., 2 bits of
addressing every 3 years.

2) Thus, a CPU family intended to address higher-end systems will typically
add 2 more bits of *physical address* every 3 years, and will typically
be sized to fit the *largest* machine you intend to build.
Given the normal rate of progress, and the usual need to cover 2-3 generations
of DRAMs, depending on the timing of products, you need at least a 4:1 range,
and maybe a 16:1 range for extreme cases.

For example, 36-bit physical addresses support 16GB memories ...
and single-rack microprocessor boxes with 16GB, built from just 16Mb DRAMs,
have already shipped; there are, of course, more in the 4GB-8GB range.
Of course, a 32-bit physical-addressing machine can get around this with extra
external-mapping registers ... assuming one can ignore the moaning from
the kernel programmers :-)
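
To make the growth rule concrete, here is a minimal C sketch of the
arithmetic; the 1995/36-bit starting point is only an assumption taken from
the example above, not a statement about any particular machine:

    /* Back-of-envelope sketch: project physical-address bits needed over time,
     * assuming DRAM gets 4X denser every 3 years, i.e., top-end memory sizes
     * grow by 2 address bits every 3 years.  The starting point (36 bits,
     * covering a 16GB-class machine, in 1995) is assumed for illustration. */
    #include <stdio.h>

    int main(void)
    {
        int year      = 1995;   /* assumed reference year           */
        int phys_bits = 36;     /* assumed bits needed in that year */

        for (; year <= 2025; year += 3, phys_bits += 2)
            printf("%d: ~%d physical address bits (up to %.0f GB)\n",
                   year, phys_bits, (double)(1ULL << phys_bits) / (1 << 30));
        return 0;
    }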


Of course, some kinds of system designs burn physical memory addresses
faster than you'd expect.   In particular, suppose you build a system
with multiple memory systems.  A minimal/natural approach is to use the
high-order bits of an address to select the memory to be accessed.
The simplest design ends up leaving addressing space for the *largest*
individual memory, so that smaller memories leave addressing holes.
I.e., suppose each memory might range from 64MB to 1GB (30 bits).
With a 36-bit address, one can conveniently use 2**6 or 64 CPUs
together.   Of course, if individual memories might go to 4GB (factor of 4),
then you are now down to 16 CPUs.
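
A minimal C sketch of that high-order-bit decoding, using the example numbers
above (36-bit physical addresses, up to 1GB per memory); the names here are
just for illustration:

    /* Simple multi-memory decoding as described above: with a 36-bit physical
     * address and at most 1GB (30 bits) per memory, the top 6 bits select one
     * of 64 memories and the low 30 bits are the offset within it.  Memories
     * smaller than 1GB leave holes in the physical address space. */
    #include <stdio.h>
    #include <stdint.h>

    #define MEM_BITS 30                          /* largest individual memory: 1GB */
    #define MEM_MASK ((1ULL << MEM_BITS) - 1)

    int main(void)
    {
        uint64_t paddr  = (5ULL << MEM_BITS) | 0x1234;   /* an address in memory #5 */
        unsigned node   = (unsigned)(paddr >> MEM_BITS); /* high bits pick the memory */
        uint64_t offset = paddr & MEM_MASK;              /* low bits: offset within it */

        printf("memory %u, offset 0x%llx\n", node, (unsigned long long)offset);
        return 0;
    }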

Note: of the next crop of chips, the physical address sizes seem split
between 36 and 40 bits...

VIRTUAL ADDRESSING
1) Virtual addresses are visible to user-level code, unlike physical
addresses, which usually are not.

2) I've claimed that one rule of thumb says there are practical programs
whose virtual memory use is 4X the physical memory size.  (I.e., having seen
some like this ... and seeing that if they start paging much more, they get
slower than people can stand. :-)  Hennessy claims this is a drastic
under-estimate, i.e., that as memory-mapped files and files-with-holes get
more use, one can consume virtual memory much faster ... and I agree, but it
is hard to estimate this effect.

FORECASTS for 64->128-bit transition:
1) If memory density continues to increase at the same rate,
and virtual memory pressure retains the 4:1 ratio, and we think we've just
added 32 more bits, to be consumed at 2 bits/3 years, we get:
	3*32/2 = 48 years
and I arbitrarily pick 1995 as a year when:
	a) There was noticeable pressure from some customers for 4GB+
	physical memories, and a few people buying more, in "vanilla"
	systems.
	b) One can expect 4 vendors to be shipping 64-bit chips,
	i.e., not a complete oddity.
Hence, one estimate would be 1995+48 = 2043 as the leading edge of the
64->128-bit transition, based on *physical memory* pressure.
That is: the pressure comes from the wish to conveniently address the
memory that one might actually buy.
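
Spelled out as a trivial C sketch (the only inputs are the 2-bits-per-3-years
rate and the 1995 starting point assumed above):

    /* The forecast arithmetic above: 64-bit addressing adds 32 more bits over
     * 32-bit; at 2 bits consumed every 3 years, that is 3*32/2 = 48 years, so
     * pressure that starts in 1995 runs out around 1995 + 48 = 2043. */
    #include <stdio.h>

    int main(void)
    {
        int start_year = 1995;  /* year 64-bit micros stop being an oddity    */
        int added_bits = 32;    /* 64-bit addressing adds 32 bits over 32-bit */
        int yrs_per_2b = 3;     /* 2 address bits consumed every 3 years      */

        int years = yrs_per_2b * added_bits / 2;
        printf("%d years -> 64->128-bit pressure around %d\n",
               years, start_year + years);
        return 0;
    }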

Of course, the multiple-memory-system issue above pulls that in by a few
years ... however, one can deal with that in the time-hallowed way of adding
extra mapping information, without bothering user-level code with changes.

2) On the other hand, if files-with-holes and file-mapping of large
files get much heavier use, the *virtual memory* pressure grows much faster
than a constant factor above the physical size ... and my best guess yields
around 2020.  Note that "minor" implementation issues like die space,
routing, and gate delays, especially for 128-bit adders & shifters, are
non-trivial, so people aren't going to rush out and build 128-bitters for
fun, just as people matched the timing of their 64-bitters to their
expected markets.  Of course, if somebody does an operating system that
uses 128-bit addressing to address every byte in the world uniquely, *and* this
takes over the world, it might be an impetus for 128-bitters :-)

Of course, all sorts of surprises could occur to disrupt these scenarios.
Note, however, that the common assumption that, because it took N years to go
from 32->64, it will take N years to go from 64->128 ... is incompatible with
the normal memory progress; i.e., 64->128 should take 2N.


-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-390-3090	FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311

From: mash@mash.engr.sgi.com (John R. Mashey)
Newsgroups: comp.arch
Subject: Re: 64-bit chips, 32-bit compatibility?
Date: 28 Sep 1995 02:56:40 GMT
Organization: Silicon Graphics, Inc.

In article <zalman-2709951544340001@198.95.245.190>,
zalman@macromedia.com (Zalman Stern) writes:

|> In terms of word size and address space needed 25 years from now, Alpha
|> will have little difficulty going to 128 bits much as MIPS, SPARC,
|> PowerPC, etc. went from 32 to 64 bits. However, the 360 example shows that
|> this might not be necessary. Note that the RISCs which have already done a
|> 32 to 64 bit transition will have a harder time going to 128 bits. (All of
|> those chips also support more data types than Alpha meaning more opcode
|> space used, etc.)

Not a big deal ... most of us still have opcode space left, although
perhaps not as cleanly as one might prefer, and there is certainly a
well-established model for how you do this.


-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-390-3090	FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311

From: mash@mash.engr.sgi.com (John R. Mashey)
Newsgroups: comp.arch
Subject: Re: 32 vs 1024 Bit Processor
Date: 31 Jan 1996 22:07:09 GMT

In article <4eo5d5$go6@info.epfl.ch>, "Stefan Monnier" <stefan.monnier@lia.di.epfl.ch> writes:

|> I don't think the problem is merely technological.
|> It's much more likely to be:
|> 
|>         what would you do with those 1024bits ?
|>         what's the point ?
|>
|> if you can find a way to take advantage of a 1024bits datapath in more
|> than a few special cases (like cryptography), maybe you could convince
|> people to start thinking about it.

Note: by an N-bit CPU, I mean one with N-bit-wide integer registers and datapath.

1) In an R10000, the 64-bit-wide integer datapath is about 20% of the chip's
width.  1024 bits is 16X wider.  To get a 1024-bit datapath to be the same
fraction of the width of a same-size chip, you only need about 10 shrinks,
i.e., 10 chip generations; assuming 3 years apiece, that's 30 years before
you'd even want to think about this.  If you were willing for it to be a
larger fraction of the chip, you might save 2 generations.
(You can jiggle the numbers, but that's the idea.)
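
A small C sketch of that estimate; the per-generation linear shrink factor
here is an assumption, picked to land near the 10-generation figure, so
jiggle it as you like:

    /* Generation-count estimate from above: a 1024-bit datapath has 16X as
     * many bit slices as a 64-bit one, so to occupy the same fraction of a
     * same-size chip the layout must shrink 16X linearly.  The assumed shrink
     * per generation is illustrative only.  (Compile with -lm.) */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double width_ratio = 1024.0 / 64.0;  /* 16X more bit slices                */
        double shrink      = 1.32;           /* assumed linear shrink / generation */
        double gens        = ceil(log(width_ratio) / log(shrink));

        printf("~%.0f generations, ~%.0f years at 3 years each\n",
               gens, gens * 3.0);
        return 0;
    }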

2) Of course, since wires often don't shrink as fast as transistors, there is
the "minor" issue of running a lot of 1024-bit-wide busses around the chip.

3) Besides the space issue, consider that designers are fighting hard to
reduce the delays caused by long wires on chips ... and are not likely to
be thrilled by needing to do 1024-bit-wide adders and shifters.

4) And as Stefan notes, you need a *good reason* to even think about
it:

I've lost the posting, but we went through this last year, discussing just
when *128*-bit addressing could come in.  Since we're right at the 32/64-bit
boundary, and 64-bit addressing has added 32 more bits, and you can argue that
we consume 2 bits every 3 years (to track DRAM), that's 3*32/2, or 48 years.
For various reasons, I've predicted that somebody would do it earlier,
maybe around 2020 or 2030, assuming current growth rates.

Put another way, about the time you *might* consider 1024 to be possible,
you'll be *thinking* about doing 128.

BOTTOM LINE:
	a) 32-bit CPUs are already insufficient for some uses.
	b) 64-bit CPUs are likely to be sufficient for a *long time*;
	   note that there are already <$35 64-bit micros available, so this
	   is not exotica.

-- 
-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-933-3090	FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311

