From: mash@mips.com (John Mashey)
Newsgroups: comp.arch
Subject: Re: Unbelievable SPECmark claims for Viking and Pinnacle

In article <12650@auspex-gw.auspex.com> guy@Auspex.COM (Guy Harris) writes:
>In the May 4, 1992 issue of *EE Times*, in the cover article "SPARC CPU
>race quickens", some unspecified person from Cypress claims that
>"[Cypress] estimates that Pinnacle can deliver 218 SPECmarks -- based on
>simulations, presumably.  Cypress also claims that Viking can deliver
>205 SPECmarks...."

There's a related article in Electronic News, May 4, p.1.
"DEC, Cypress in Alpha Talks".
This was T.J. Rodgers talking at Cypress annual meeting.
Some truly fascinating quotes; a quiz will follow... :-)

"Sun is doing triage on its suppliers," he said.  "They've gone and churned
up a whole number of suppliers all to support their business, which any one
of us can support, and then they pick pieces to give to people." [1]

"Cypress' Pinnacle is expected to be locked in a tight battle at Sun with the
Viking RISC processor being made by Texas Instruments. [2]
Pinnacle is expected to `tape out' this week, Mr. Rodgers said, meaning it
should reach first silicon in about a month." [3]

"Mr. Rodgers said that while Sun has not committed to using Pinnacle, `I'm
very confident, and I can't prove it, that we're going to take share
at Sun from Texas Instruments." [4]

"Mr. Rodgers said that Pinnacle and Viking should both
achieve slightly over 200 SPECmark performance [5], but that the Cypress part
has several advantages since it has fewer transistors, requires fewer
static RAMS, and is manufactured on a simpler process than Viking.
Since the TI part uses a BiCMOS process, it will be less scalable in future
generations than Pinnacle, he said."

NOW, for the quiz.  Recall repeated warnings about believing what you
read in the press without strong skepticism.... what's wrong? what's
to be skeptical about in this story?


[1] Triage: OK, I believe that.

[2] "locked in a tight battle"?  Hmmm.  Viking taped out a year ago,
and certainly has had immense resources applied by Sun & TI to its debugging.
But, in [3] TJ says they're just now taping out Pinnacle this week...
Now Viking *is* more complex, but still...

[4] He's confident they'll take market share away from the chip that Sun itself
has invested much of its own money in developing.  OK, I suppose it could
happen, but only if this chip comes up real fast.

[5] >200 SPECmark performance ... somehow, I doubt it :-)

How about calibrating further from past comments....
"Those who cannot remember the past are condemned to repeat it."
[comments are my editorials]

CYPRESS SEMICONDUCTOR SEMINAR SERIES BOOK, 1989 [~January 1989]

"Cypress 7C600 Performance
	Will scale to 36 VAX MIPS and 8 double precision Linpack MFLOPS at
	50Mhz"  [not yet; hard to calibrate VAX MIPS; LINPACKS not close ]

"Cypress CMOS Leadership
	0.8 Micron DLM CMOS
	33Mhz initial frequency of operation
	50+ MHz devices at maturity [>3 years later: not yet]
	Competition is 2 to 3 generations behind" [B.S.]

"Cypress SPARC Product Summary
	7C605 - Cache Controller, Tag and Memory Management Unit (CMU)
	- Sampling at 25 and 33 Mhz 3Q89"  [this is MP cache controller]
	[Note: that's the chip used in Sun Galaxy machines ... which
	finally shipped 4Q91 (and I can't find the source, but it is well
	known that it took *many* revs)... that is, from when they said they'd
	sample until production parts shipped was *2 years*.]

"Cypress 7C600 Future Directions
	Release current chip set at 40Mhz in 1989 and 50Mhz in 1990.  [no]
	Produce derivatives optimized for specific market segments, e.g.
	vector FP [haven't seen that yet]
	Provide the ability to execute multiple instructions per clock
		- over 80 MIPS in the single processor mode
		- Customers under NDA starting 1H89" [well, taping out 2Q92]

A similar story (many of same foils) was presented by Roger Ross
at Hot Chips, August 1989.

NOW, CONSIDER PINNACLE CLAIMS:
1) Now, *no one*, including Cypress, can say for sure, at tapeout time,
just how long it will take to get from tapeout to production silicon ...
in systems, with software (and maybe even with compilers reasonably
tuned to *this* pipeline, as opposed to Viking's.  It certainly must run the
existing software, but people seem to find, especially with superscalar
designs, that you want to tweak the compilers to get the performance.)

2) However, why don't we guess ... it turns out that, fairly consistently,
it takes about 9-12 months from tapeout of:
	an aggressive new chip, of an existing architecture, which
	however, has a reasonable base of existing software
to get to production-quality, "these don't burn up or have stop-ships bugs"
stuff.  [Note: a low-cost design, or one tweaked from another, usually goes
faster, and obviously, throwing immense resources at it may speed things up,
although this rule seems to fit Intel as well, from past history.]
This rule of thumb says: production Pinnacles, maybe 1Q93.

3) The Pinnacle chipset has:
	-A superscalar CPU+FPU+ 8K I-cache, ~1M transistors, 550x550 mils
	(302K sq mils, certainly smaller than the 400K sq mil Vikings, P5s,
	etc)
	Doing the usual arithmetic, depending on whether they use 4T or 6T
	RAM cells, sounds like 320K-480K transistors in the cache,
	leaving it approx 500-700K transistors for logic, buffers, etc.
	(The MMU is on the next chip).  *That* means its logic is equal or
	more than an R4000's (which has on-chip MP-support, cache control,
	MMU, and 64-bits, all of which burn transistors...) or i486, or
	a little simpler than a Viking's; i.e., whatever it is, it is *not*
	a trivial chip. :-)
	-CMTU - a cache/MMU chip that is a modified version of the CY605
	mentioned above, about 450x450mils, about 200K sq mils.
	- 2-4 SRAM chips, that are tweaked versions of some Cypress SRAM.
	All of these are hooked together with a private bus, which is

Now, what would a random person wonder about?

1) Is the CMTU taped out? Does it have first silicon?  If not, when?

2) Are the special SRAMs taped out? First silicon?  If not, when?

3) It appears that one needs *all* of CPU + CMTU + SRAMs to work,
i.e., since they use a particular private bus, it would appear hard to
test them in a system without the others working at the same time.
Is this true?  If so, this might imply that much bring-up work will
have to wait until the *last* of the three is working....

4) How does this CPU compare in size with other chips that Cypress
fabs?  It *appears* to be 1.5X larger than the cache-MMU chips.
It is certainly much larger than the SPARC CY601s.

5) When doing bringup of a new chip, one often must put workarounds
in the compilers and/or operating system to get past bugs, in order to
find other bugs.  Will Cypress be doing this? or Sun? or somebody else?
From past experience, close coupling of chip people and software people
seems a major help to speeding bringup.

6) Some analysts say that Cypress is expecting major revenues from Pinnacles
in 2H92..... For this to happen, they've got to have a good 4Q92
*in volume production*, which seems a little fast, but *could* be possible.

*Is there some reason to expect tapeout->production to be much faster
than anyone else seems to do, especially considering that Ross Technologies
did *not* design the 601 (which already existed), and that this CPU must
be larger and more complex than the 605, which took many spins to get to
production?* [There could be such reasons of course, but a random person
might be skeptical until they heard the reasons.]

7) Various clock rates have been ascribed to Pinnacle (I've seen as high
as 75).  Please reread the history of Cypress' predictions of
future CY601 performance, GIVEN THAT THEY ALREADY HAD WORKING CY601s.
Why will *these* performance predictions be better?

----------
Well, those seem like some reasonable questions to ask....
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	 mash@mips.com OR {ames,decwrl,prls,pyramid}!mips!mash 
DDD:  	408-524-7015, 524-8253 or (main number) 408-720-1700
USPS:	MIPS Computer Systems M/S 5-03, 950 DeGuigne, Sunnyvale, CA 94086-3650

From: mash@mips.com (John Mashey)
Newsgroups: comp.arch
Subject: Re: Alpha(moon base)

In article <DOCONNOR.92Feb15081732@oxygen.inews.intel.com> doconnor@oxygen.inews.intel.com (Dennis O'Connor) writes:
>Well, the i960 is designed and marketted for the embedded market.
>The Alpha, I think, was designed for high-end computing ? Very
>different markets with very different needs.
>
>You may be thinking of the i860, intel's RISC supercomputing product.
>Either it or the "586" would, I think, be the things to compare Alpha to.
>Especially the 586 : it and Alpha are reaching the market (perhaps?) near
>the same times, and both are mainly mysteries at this point :-).

Some data culled from here and other sources, such as the 1990 ISSCC
DEC paper "System, Process, and Design Implications of a Reduced Supply
Voltage Microprocessor." and random other info:

1) Alpha: .75micron/3-metal, 3.3V.  1.68x1.39cm or 659x545mil (approx),
or 359K sq mil, 1.68M transistors, 16KB total cache.  I don't know if they
use 6T cells or 4T cells; the earlier process described in the 1990 ISSCC
used 6T cells.  Thus, one would expect that of the 1.68M transistors,
something like 640-960K are used in the SRAM arrays on-chip.
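A hedged sketch of the arithmetic behind that 640-960K range (the ~25% extra bits for tags etc. is the same convention used later in this post for the 486; treat it as an assumption here):

```python
def sram_transistors(cache_bytes, cell_transistors, tag_overhead=0.25):
    # Data bits plus ~25% extra for tags etc., times transistors per cell.
    bits = cache_bytes * 8 * (1 + tag_overhead)
    return int(bits * cell_transistors)

# Alpha: 16KB total on-chip cache, 4T vs 6T cells
low  = sram_transistors(16 * 1024, 4)   # 655360, i.e. ~640K
high = sram_transistors(16 * 1024, 6)   # 983040, i.e. ~960K
```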

I'm told there are
also some additional processing steps to do the Reduced-Voltage-Swing
I/O at these speeds, but I can't confirm what they are, although the
comment comes from usually reliable sources. These are made at a fab
in Hudson, MA; I don't think anywhere else, although maybe somebody
from DEC can correct me there.  In general, this is a non-vanilla
process, about which at least one (unnamed, but not a MIPS partner)
semiconductor vendor says "We could
make that.  We'd have to build a new $400M fab, but we could make it."
Coincidentally, DEC announced within a week or two that they were building
a new $400M fab in Hudson....

Hence, this is exactly what DEC should be doing for mid-range and up machines,
i.e., something that will go as fast as possible in CMOS, using RVS
I/O of necessity to run external busses faster, even if the cost/chip
might be high, the rampable volume relatively low for workstations or
PCs, and the parts difficult to use with common PC/workstation (5V) components.
It should be no surprise to anyone that DEC has some fairly
aggressive CMOS, given the clock rates necessary to run the VAX architecture
to get performance out of small-N chipsets.

2) 586: .8micron/3-metal, run using the same fab module (Albuquerque fab)
that is used to produce:
	i486DX-50 (which is an entirely different design, recall, than the
		i486DX-33 done in 1micron/2metal CMOS)
		This die is 1.19x.69cm (468x273 mil, ~128K sq mil), of which
		8KB is cache.  Intel uses 6T cells, unlike most SRAM vendors
		who use 4T; hence figure 8KB * 1.25 (for tags, etc) ~= 10KB =
		80Kbits = 480K transistors for cache+tags, leaving about 700K
		for everything else (including ~200K transistors of ROM
		microcode storage).
	i860XP	410x610 = 250K square mil, 2.55M transistors, of which
		about 1.9M are included in the cache, leaving 650K for
		the rest.
		[If you look at this die, it is mostly a pair of caches with
		a CPU attached; a trend that may not disappear, although
		it is exaggerated by the use of 6T cells, of course.]
	i960MX	(a 570x670mil = 382K square mils)  This is NOT the high-volume
		commercial control part, but the military version for JIAWG.
	and I don't know what else.

I think the 82495 (DX/XP) may also be fabbed in this process, but I'm
not sure of that one.

The 586 has been described as having about 3M transistors.  Useful
guesses would be:

2X 8KB caches (+ 25% overhead) = 20KB = 160Kbits = 960K transistors.
2 X 16KB caches (+ 25% overhead) = 40KB = 320Kbits = 1920K transistors.

The first would leave 2M transistors for everything else, the second
1M transistors for everything else.  It seems reasonable to expect
2 16KB caches, as a 2M transistor "everything else" budget seems a bit
high (even with its 64-bit integer unit and all of the multiprocessing stuff,
the R4000 has about 600K for "everything else", and it's hard to believe
the 586 could be 3X more complex in that part).  Certainly there must be
some microcode, but one would guess that there is less in
microcode than in the 486, hence <200K transistors would seem a good guess.
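The two cache guesses above follow the same per-bit arithmetic; a small sketch (the 6T cells, ~25% tag overhead, and ~3M total are taken from the text; the helper name is mine):

```python
def cache_transistors(cache_bytes, cell_t=6, tag_overhead=0.25):
    # Intel-style 6T cells; ~25% extra bits for tags etc.
    return int(cache_bytes * 8 * (1 + tag_overhead) * cell_t)

TOTAL = 3_000_000                  # "about 3M transistors" claimed for the 586
for kb in (8, 16):                 # the two I+D cache-size guesses
    cache = cache_transistors(2 * kb * 1024)
    rest = TOTAL - cache           # budget left for "everything else"
    print(kb, cache, rest)         # 8KB leaves ~2M; 16KB leaves ~1M
```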

3) All of this brings me back to the unobvious comment about $1 filet
mignons.

Customer to butcher: "Why do your steaks cost $8 here?  That's outrageous.
Across the street they're advertising filet mignons for a $1 apiece."
Butcher: "Well, why don't you buy them there?"
Customer: "Well, they're all out of them right now."
Butcher: "Oh, well actually, in that case, I'll sell you the ones I haven't
got for 50cents."

More seriously, this illustrates a couple rules of thumb that it would
be wise to remember in digesting all of the things we read about:

R1 The following are ALL important:
	1: Chip price
	2: Cost to put the chip into a system, i.e., including special
	   support chips, difficult board design [because the REAL cost
  	   of a board must include the cost for the boards that fail]
	3: Performance
	4: Time to market
	5: Volume availability APPROPRIATE to the target market
	6: Appropriate software support
	7: Distribution channels
AND, no matter how good you are on 6 of these, lack of the other one can
kill you.

R2	Q: What's the price/performance of a $1 filet mignon?
	A: Terrific.  I'll buy it, I'm hungry.
	Q: Well, the first one comes out one year from now.
	A: Forget that, let's go to McDonalds.
It ALWAYS takes longer to go from chip announcements to systems that
the average person can buy than you would think.  In the microprocessor
business, a real good guess is that even aggressive organizations (at least
in the merchant world) do well to go from tapeout to having customers'
systems on the market FOR REAL, in more than trivial volume, in 9-12
months.  [I have lots of data points for that, both CISC and RISC.]

R3	Vendor: Here's an INDY5000 race car, for only $20K
	Press: Wow! great, what a deal.
	Customer1: Wow! Sounds great, I'll buy one.
	Vendor: Ok, here it is.
	Customer2: Wow! Sounds great, we need to refit our fleet of
		delivery vehicles for rapid delivery, so we can beat
		those flying DHL trucks. I'll take 50,000 this year.
	Vendor: Oops.  They're on allocation.  I can get you 2 each month
		for a while.
	Customer2: Can't you ramp up the volume?
	Vendor: well, they're hand-built by elves.  Maybe we can hire some
		more elves, but they're in short supply.
That is: sometimes the sticker price is great, and it's possible to get
a few, and you can read about them, but if they cannot be delivered in
the volumes appropriate to the task, they're also irrelevant.

Another way to put this: the price/performance of something on severe
allocation ... is irrelevant.

R4	Of the 7 factors
	1: Chip price
        2: Cost to put the chip into a system, i.e., including special
           support chips, difficult board design [because the REAL cost
           of a board must include the cost for the boards that fail]
        3: Performance
        4: Time to market
        5: Volume availability APPROPRIATE to the target market
        6: Appropriate software support
        7: Distribution channels
It's usually easy to understand 1, 6, and 7 by casual perusal of magazines.
You can sometimes calibrate 2, although the arguments rapidly turn into
EEish arguments that can be EXTREMELY difficult to comprehend if you're not
an EE.
(Sometimes I get to explain superpipelining, or why dynamic slew rate control
might be important, to Wall Street financial analysts ... some of whom are
quite sharp.  Nevertheless, this is a NONTRIVIAL task.)
#3 Performance is at least something you can usually get some data on these
days, although we still have a long way to go.  One must be especially
careful to make apples-apples comparisons among things labeled
"measured", "simulated", "projected", "expected", etc, and calibrate
them versus the vendors' past histories of crystal-ball reading.
Before systems are actually shipping in reasonable numbers, #4 (time-to-market)
is often quite hard to figure out.  Remember that major chips have often
been announced as "in production" only to discover killer STOP SHIP-class
bugs later.
#5 is often REALLY hard to tell, as if the volume is lower than desired,
every effort will be made to obscure that fact....
Of course, the "appropriate" volume varies widely, from millions (PC world)
down to 10s (after all, 10 $30M supercomputers a year could be good business).

R5 Q:Why do the steaks cost $10?
	A: Well, that's what they cost.  We can at least get them for
	you in good volume, and we can ramp up the number of cows pretty
	quickly, but they'll still cost at least $9.

That is, big chips still cost money.  Note that everyone always says that
chip costs are driven by volume; on the other hand, if you have to build
a new fab, it's expensive, no matter what!

Well, this has gotten long, but I've had some questions lately to which
this discussion seemed a useful answer.  Of course, I can't resist one
slight commercial, which is the following question:

Of the next-generation (and selected current ones) single-chip processors:
{MIPS R4000, DEC Alpha, HP Snakes-superscalar,
IBM RSC (RIOS Single-Chip), MIA PowerPC, 486-33, 486-50, 586,
 Sun/TI SuperSPARC}

a) One is a huge chip in a relatively exotic process.
b) Three of them are done at internal fabs, where I don't know how many
there are, and how much wafer capacity is available to these specific
parts.  Of these, I THINK that at least 2 are done at 1 fab apiece.
At least one uses a very aggressive, somewhat exotic process that is nontrivial
to second-source in any short period.
c) Two of these are done in a fab that is already capacity-constrained
with other parts, although additional modules in that fab can be done to
help produce more, over time.  The one in production seems strongly
capacity-constrained at this point.  One would expect the forthcoming
one to be about 2.5X larger.
d) One of them is built at high volumes in multiple fabs.
e) One of them is being built at 6 fabs, and could be built at >10
fairly quickly, using a vanilla sort of process that is used to run
large numbers of parts (like SRAMs) that are good for debugging a process
and keeping it stable.

So the questions are: which is which? Of these, which are likely to be
able to be built in large quantities (and thus bring the price down) this year?
Next year?  How many are likely to be fast, cheap, and available all at
once? [For the volume desktop]
[I think I know of one for sure.  One might make it if it can be sped-up
significantly, especially on the integer side. Another might make it,
if it turns out to really be buildable at the target clock-rate,
and gets the projected performance there, and somehow escapes some
likely supply-constraints.]

Well, off to Germany, Spain, Czechoslovakia, and the UK. cheers.
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	 mash@mips.com OR {ames,decwrl,prls,pyramid}!mips!mash 
DDD:  	408-524-7015, 524-8253 or (main number) 408-720-1700
USPS:	MIPS Computer Systems M/S 5-03, 950 DeGuigne, Sunnyvale, CA 94086-3650

Newsgroups: comp.arch
From: mash@mash.wpd.sgi.com (John R. Mashey)
Subject: Re: Extrapolated costs and yields of various microprocessors

In article <C8Bo9s.2uv.2@cs.cmu.edu>, lindsay+@cs.cmu.edu (Donald Lindsay) writes:

|> Which is why it's scary that design costs are rising. Microprocessor
|> Report estimated the MIPS R4000 design cost as $100M, with the
|> upcoming superscalar MIPS design to cost no less.

This is way high for actual chip development effort (process development
and fab-building are of course much *more* expensive, but we don't design
processes and fabs for these things).
I don't know the actual number offhand, but I'd guess on the order
of $30M-$40M would be more like it, depending on what you count.
After all, a typical chip design team is 50-100 people for 3 years,
i.e., 150-300 staff-years.
If you assumed loaded salaries of $100K-$200K (i.e., incl equipment
and everything else), you get $15M-$30M.
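Writing that back-of-the-envelope out (assuming the quoted $15M-$30M endpoints both use the low-end $100K loaded salary; at $200K the top end would double):

```python
TEAM = (50, 100)        # people on a typical chip design team
YEARS = 3
SALARY = 100_000        # loaded salary, low end of the $100K-$200K range

staff_years = tuple(p * YEARS for p in TEAM)       # (150, 300) staff-years
low, high = (sy * SALARY for sy in staff_years)    # $15M and $30M
print(staff_years, low, high)
```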

(BTW: we all clearly do wimpish projects in this business; consider that
the Taj Mahal supposedly took 20K people for 20 years, i.e., 400K staff-years.)
 
|> At this rate, design cost/chip is going to start dominating
|> fabrication cost/chip. What ever happened to the CAD dream?

Well, at some point, wire delays will dominate enough that you daren't
make individual CPUs any more complex, and so chips will start looking like
>1 CPU (replicated) + lots of RAM (replicated). After all, we're already
starting to replicate functional units like ALUs.

The next major CPU generation has a bunch of tricks to play, but after that,
it's hard to see how individual CPUs will get a *lot* more complex.

Also, note the interesting barrier to entry into the CPU business that now
exists: verification test suites .... which anybody who's been doing this
for a while has now built up huge sets of...

-- 
-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-390-3090	FAX: 415-967-8496
USPS:   Silicon Graphics 7U-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311

Newsgroups: comp.arch
From: mash@mash.wpd.sgi.com (John R. Mashey)
Subject: Re: Intel Inside: (really: some simple arithmetic)
Date: Fri, 18 Jun 1993 04:10:43 GMT

(Sorry for the delay: I posted this a few weeks ago before traveling,
but it only got to comp.sys.intel;  I'm pleased to see there has been
plenty of discussion on this topic in the meantime, and at least some
people who actually *know* a lot about the topic have posted some things.)


In article <dewar.738624346@schonberg>, dewar@schonberg.NYU.EDU (Robert
Dewar) writes:

|> All these calculations about the "forgone profit" from selling Pentiums
|> is a little silly. The market for 486's is not infinite, therefore giving
|> up the opportunity to make 486's has zero cost unless you would be able to
|> sell those 486's. I think we can assume that Intel's intention is to supply
|> all the 486's and Pentium's that the market will bear! 
|> 
|> Certainly that's what Wall Street assumes in cranking up Intel's stock 
|> price so much in the last few days!

It's not silly at all.
1) As it stands, Intel can sell all of the 486s and Pentiums it can make;  AMD
has said it might be able to build ~700K 486s this year.
2)  Using Wall Street to analyze serious technical/economic issues is not
usually recommended:
	How many Wall Street analysts understand yield calculations?
		A: a few, but not many.
	
Anyway, let's go back to serious analysis: I apologize for the
back-of-the-envelope calculations, as they obviously weren't convincing enough
(although the range of answers turns out to be reasonable).  Instead,
I went to our chips guys and asked them to run the numbers using the
estimation models we use (and which are also calibrated versus ex-Intel
people's experience).

These are yield numbers, starting with the 0.8 micron CMOS sizes,
and then doing a 0.8X shrink [assuming that there aren't any pad-limit
effects that stop you from shrinking the total die size.]
These all use 6-inch wafers, and use 3 different values of the
effective (total equivalent) defect density: 5/sq cm, 2/sq cm, 1/sq cm.
(8-inch wafers get more die/wafer, but ratios don't change much).
The predicted yield is an average of 4 of the common ways to compute
it, and the average has been found to be a reasonable guide in the past.

Chip	Width	Height	Gross	Defect	Yield	Defect	Yield	Defect	Yield
	mm	mm	die	#	die	#	die	#	die
486	 6.9	11.9	187	5	13	2	 36	1	 84  A
Pent	16.7	17.6	 47	5	 0.4 B	2	  1.9 C	1	  5.5 D

486-0.8X		305	5	41.6	2	114	1	175
Pent-0.8X		 71	5	 1.3	2	  5.7	1	 14.9

So:
1) The defect-#s for chips:
	a) Vary according to the type of the chip, i.e., how complex,
	how much processing, things like having lots of SRAM (but no
	redundancy) can hurt you, etc, etc.  Think of this as wrapping up
	every source of problems into one overall number, i.e., it is not
	defects in wafer material.
	b) Will start high, and then go down according to the learning curve,
	and then finally flatten out, with different values according to which
	chip you're talking about.

2) Knowledgeable people think that the current 486DX2 yield is a bit under
50%, which means they are around (A): 84/187 = 45%.

3) People guess that Pentiums are much closer to defect=5 (B) than 2 (C),
meaning that they're probably getting about 1 good Pentium per wafer on
average (compared to 84 good 486s).
If they instantly got as far on the curve as 486s ... (D), you'd
have the 84/5.5 = 15X comparison  (which is very close to what the
back of the envelope gave).  Right now, the number is probably more like
40X or more 486s ....

4) As can be seen, doing an 0.8X shrink helps both chips.  IF they *both*
got to 2 or 1 at the same time (which is not likely, but if they did),
then the yield ratio ends up somewhere between 11X and 20X, once again covering
the back of the envelope  numbers.
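The table's yield estimate averages four common models; as an illustrative sketch, here is just the simplest one, the Poisson model Y = exp(-A*D).  It lands close to the table's 486 entries at defect densities of 1 and 2, but is known to be pessimistic at high A*D, which is one reason people average several models:

```python
import math

def poisson_yield(die_area_cm2, defects_per_cm2):
    # Probability that a die of the given area catches zero defects.
    return math.exp(-die_area_cm2 * defects_per_cm2)

AREA_486 = 0.69 * 1.19          # 6.9mm x 11.9mm die, in sq cm
GROSS_486 = 187                 # gross die per 6-inch wafer, from the table
for d in (5, 2, 1):             # defect densities per sq cm
    good = GROSS_486 * poisson_yield(AREA_486, d)
    print(d, round(good, 1))    # ~36 and ~82 at d=2 and d=1, near the table
```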

So: one more time:
	It is up to Intel: 486 prices are declining 3%/quarter;  maybe they'll
	go down a bit faster with other vendors in the market.  Unless
	something weird happens, they can sell every 486 they make at prices
	that get them $400 profit apiece.  Every Pentium will forgo profit of
	$6000-$8000 for a while, which for 100K units is $600M-$800M,
	which is actually noticeable... but it's their choice.
	If I were Intel, I'd probably do just what they appear to be doing:
	For a while, Intel will make *just enough* Pentiums:
	a) To have presence in the market & have perf numbers to market
	b) To help get down the learning curve (one must assume a shrink
		is well under way)
	c) To have some as encouragement to its customers.
	d) To minimize the hit on profit.
	
	It wouldn't surprise me if allocations bounce around (they already
	have...) depending on how it's going.

I note that Intel sold something like 400K 486s in the first 12 months of
shipments  (more than any RISC in 1990, in fact, more than all the
systems RISCs put together).  Note that Pentium numbers seem likely to be
about the same size as, or less than, some of the high-end RISCs ...

Also, I note that despite the 66MHz numbers ... most system
vendors *mostly* announced 60MHz products for any near-term shipments,
which tells you what people expect to get (i.e., not many).

Once again, these are still all approximations, and I've consulted with
people who are extremely knowledgeable in this area (since I'm not), so
I still invite comments, and especially better numbers, from people
who *know* this stuff...

-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-390-3090	FAX: 415-967-8496
USPS:   Silicon Graphics 7U-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311

Subject: Re: SGI - You guys are missing the point
From: mash@mash.engr.sgi.com (John R. Mashey) 
Date: Oct 23 1995
Newsgroups: misc.invest.stocks,misc.invest

In article <45ut2i$kg0@apollo.albany.net>, akozak@hourglass.com (Al Kozakiewicz) writes:

|> High-end systems aren't going to go away, but I wonder if the vendors 
|> in this space who have the resources to source their own chips (DEC, IBM, 
|> HP, SUN) don't have a significant advantage over those who must source 
|> from the outside, especially those like SGI who source from a FAB-less 
|> supplier (if SGI's systems are no longer based on MIPS, I take that last 
|> part back).

Some facts:
1) Sun & SGI/MIPS have no fabs.  Sun's chips come (mostly) from TI.
        MIPS has never had a fab ... but has long-term relationships
        with various chip companies, including:
        NEC: #2 in world chip revenue (after Intel)
        Toshiba: #3 or #4 (Motorola is either #4 or #3)
Note: in 1994, there were about 1.7M MIPS chips made.

2) HP has fabs ... but its deal with Intel is believed to have been done
in order to obtain access to Intel fabs.  Industry press tends to think this
is because the volume of HP PA chips is not high enough (~300K in 1994)
to justify the kind of fabs needed.

3) DEC has own fab.  Unclear whether or not they'll build another one
from scratch.

4) IBM has fabs.

Note: in this game, *anybody* must, sooner or later, run their chips
in high-volume fabs, whether they own them or not.  To own a fab, but not
have it full, is not a long-term healthy idea.

-- 
-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-390-3090    FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311
From: mash@mash.engr.sgi.com (John R. Mashey)
Newsgroups: comp.arch
Subject: Re: Exponent's new BiCMOS process
Date: 28 Dec 1995 21:18:22 GMT

In article <4b9g77$f97@news_1.cyrix.com>, Jeff Lohman <jefflo> writes:
|> Organization: Cyrix Corporation
|>
|> I could be all wet on this, but I recall there are a couple of ex-BIT
|> (Bipolar Integrated Technology) folk on this. BIT was built upon a
|> "lower-power" (near TTL dissipation) ECL process that was near
|> "high-power" ECL speed from Motorola, Fujitsu, Hitachi, et. al. If
|> that's true, then I wouldn't underestimate what they are capable of,
|> at least technically.

The MIPS R6000 was built in BIT's technology, as was the ECL SPARC (that
Sun never shipped, but FPS used).

Note: with all due respect to the folks at Exponential,
*announcing* a new high-clock-rate microprocessor, years away, in an
aggressive technology ... is *not* the same as delivering it.

Just offhand:

GaAs
	Prisma SPARC (out of business)
	S.P.E.C. SPARC (??? don't know what ever happened)

Bipolar
	ECL SPARC (as noted above)
	R6000 (was actually delivered ... but through nightmarish yield
		gyrations that I wouldn't repeat for anything; there's nothing
		like having a bunch of $100K systems awaiting a couple chips
		and having a month's yield zeroed...)
	88K implementations described by DG and Norsk Data
	Hitachi HP PA chip (don't know if that ever shipped?)

BiCMOS of course has a happier record in microprocessors; people have mentioned
several examples where vendors have switched back to CMOS later.
	


-- 
-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-933-3090	FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311
