Subject: Re: SGI slipping
From: mash@mash.engr.sgi.com (John R. Mashey) 
Date: Nov 13 1995
Newsgroups: misc.invest.stocks,comp.graphics

A week ago, in reply to a question here, I posted a few simple questions to
Intergraph about the demo that "proved" the new Intergraph workstations
blew away SGI...  and also emailed them to several Intergraph employees;
this will go to more.

Possibly I've missed the reply, and if so, perhaps someone would point
me at it.

In any case, take this as a 2nd request for
simple, non-proprietary information on a publicly-shown demo
(repeating questions c) and d) from the original post, quoted below):
        c) What were the run-times?
        d) What was the program and its input files?
                And if the program was produced by Intergraph, was the
                tuning done on SGI commensurate with that on NT?

Following is the text of the original post:

From: mash@mash.engr.sgi.com (John R. Mashey)
Newsgroups: misc.invest.stocks
Subject: Re: Any opinions on SGI: comments on Intel's demo at AEA
Date: Mon, 6 Nov 95 00:23:54 1995


In article <79959-815499985@mindlink.bc.ca>, WHAM@mindlink.bc.ca (FM) writes:
|> Intel had a demo to compare Indigo 2 with their new P-pro and showed faster
|> performance over SGI's .  Would you care to comment on the demo on a price
|> to performance basis and other rellavent benchmarks?  thanks.

Yes, although I have to do this secondhand from talking to press and
analysts who attended, or to other people who saw it on TV broadcasts.
(It showed some Intergraph P6 box beating an Indigo2, running some
recalc or rendering problem that took several minutes, with the
Intergraph finishing ~30 seconds earlier,
and the Intel or Intergraph speaker then
turning to the Indigo2 and making remarks like "still going".)
Lots of comments were made to the effect that SGI's graphics performance
was now blown away by PC-class machines.

I'm short on time, back in a few days, but basically:
1) It appears to be a *CPU* demo, not a graphics demo, i.e., something
that runs for a few minutes to display a few frames is *not* an interactive
graphics benchmark ... and in fact has very little to do with the
graphics hardware.
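
For calibration, here is a minimal sketch in C (my illustration, not the
actual demo code; the function names, times, and frame count are made-up
placeholders) of what such a run looks like: one long CPU-bound
recalculation, then a handful of frames drawn at the end.  If the
"compute" number is minutes and the "draw" number is a second or two, the
graphics board was essentially idle the whole time.

/*
 * Sketch only: time the CPU-bound phase and the drawing phase separately.
 * In a run shaped like the demo, t1-t0 is minutes and t2-t1 is trivial.
 */
#include <stdio.h>
#include <time.h>

static void recalc_model(void) { /* minutes of pure CPU/FPU work */ }
static void draw_frame(void)   { /* milliseconds of actual rendering */ }

int main(void)
{
    time_t t0, t1, t2;
    int frame;

    t0 = time(NULL);
    recalc_model();                 /* dominates the elapsed time */
    t1 = time(NULL);
    for (frame = 0; frame < 5; frame++)
        draw_frame();               /* the only part that exercises graphics */
    t2 = time(NULL);

    printf("compute: %lds   draw: %lds\n",
           (long)(t1 - t0), (long)(t2 - t1));
    return 0;
}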

2) It was versus a 200MHz (15-18 months old) R4400 with Extreme
graphics (>2 years old) ... although the latter appears to be irrelevant,
i.e., while the cost comparison was versus the I2Extreme ... an Indy with
much less expensive graphics would have run at least as well...
Now, it is not implausible that a 200MHz Pentium Pro would be faster than
a 200MHz R4400 on raw CPU benchmarks, although it is not guaranteed either
(the R4400 has a larger L2 cache, which doesn't help small benchmarks
much, but may or may not make a difference on larger ones).
Of course, we have been shipping faster R4400s (250MHz) for a while,
and for some FP-intense problems, people use 75MHz R8000s.

3) Actually, this brings to mind the wish to ask some of the Intergraph
posters for a little data, which surely cannot take more than a few minutes
to find and post, and most of which cannot possibly be proprietary:

        a) We think the Indigo2 was 200MHz, 1MB cache, 64MB memory, Extreme.
           We think it had a 1280x1024 24-bit color display.
           Is this right?
        b) We think the Intergraph was also 200MHz, although we don't
           know if it had one or two CPUs, and we don't know if it had
           the 1152x864 display (i.e., 76% of the 1280x1024 size) or
           a larger display, and we're not sure which graphics board was
           in there ... although from the appearance of the run, it likely
           doesn't matter much.
But the interesting questions are:
        c) What were the two actual run-times?

        d) Exactly what was the program and the input dataset that produced
          the demo?  I.e., are we allowed to know this, or must we guess?
        d1) Was the program some third-party CAD package, i.e., something
          provided by somebody else, and likely to have been optimized
          for both platforms?  If so, what are the versions?
        OR
        d2) Was the program an Intergraph package that is shipped (or will be)
          as a product on Windows/NT, but not on IRIX?  If so, can you
          provide any backup for the idea that there might have been equal
          tuning and optimization of the package running on IRIX?

OK: djallen@ingr.com & rbarnes@ingr.com: it can't take much to answer
those questions...  I'll be back Wednesday.


-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-933-3090    FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311

Subject: Re: SGI slipping? [really: "rape" is pretty serious accusation]
From: mash@mash.engr.sgi.com (John R. Mashey) 
Date: Dec 07 1995
Newsgroups: misc.invest.stocks,comp.graphics

In article <30c459c2.729631@news2.cts.com>, mikem@alike.com (Mike
Morrison) writes:

|> I saw two systems do this, one at a private meeting last SIGGRAPH in
|> LA, it was a dual processor P6 running at 100Mhz (running SoftImage).
|> Then again, at the P6 release press conference in San Francisco, Andy
|> ran an advanced CAD application on an Indigo and a dual processor P6,
|> this time on a 200Mhz system, I can't remember the name of the
|> application off hand, but it was simply running a script to create a
|> 3D object from scratch and then shade & rotate the object in
|> real-time. This P6 system was from Intergraph and did have some 3D
|> acceleration on it. As far as prices go, at Comdex the buzz was that
|> you could expect dual P6 systems by the end of '96 at under $2,000.

If this is so and meaningful:
        a) Most of the PC business will have gone under, since regular
        Pentiums will have to be selling for much less.
        b) Intergraph, which is selling 2-CPU systems in the $20K range,
        will be gone for sure, since one could trivially buy such a $2K
        system, add a $500 graphics accelerator [if one believes the
        claims of such being equal to the best workstations :-)], and run
        the exact same software.


Thanks for reminding me about this demo (the press release).
1) Over the last month, I have several times, in this newsgroup, challenged
Intergraph people to explain the could-not-possibly-be-proprietary details
of this demonstration, so that reasonable people could evaluate the
meaning of the demo and understand:
        a) Did the benchmark actually use much of the Indigo2Extreme's
        graphics hardware (whose price was certainly included), or was it
        really a CPU benchmark masquerading as an interactive graphics
        benchmark?  (i.e., would it have run just as fast on a low-end Indy?)
        b) What were the corresponding run-times?
        c) If it was really a CPU benchmark of an Intergraph internal
        application, was it tuned and optimized as well on the Indigo2,
        or perhaps was it terrifically well-tuned on the Intergraph ...
        and not on the Indigo2.
        d) All of this is strange, since Intergraph literature claims many
        applications are also available on SGI ... and hence, one would have
        expected plenty of head-to-head demos of interactive
        graphics performance...
        Of course, Intergraph's literature repeatedly claims:
        "Intergraph is the world's largest company dedicated to supplying
        interactive computer graphics systems."
        (Can anybody explain a use of the English language that lets that
        be true?  Note that Intergraph's Web pages mention servers, just
        like Sun or SGI...) 

2) So far, after posting politely  (and emailing, just to make sure),
it's been a month, and nobody has chosen to answer some very simple
questions.  Thoughtful people may want to consider why...  and
calibrate further examples from the same sources.... 

-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-933-3090    FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311

Subject: Re: Debate: PowerMac "trounces" Pentium....???
From: mash@mash.engr.sgi.com (John R. Mashey) 
Date: Jan 13 1996
Newsgroups: comp.sys.powerpc,comp.sys.intel

In article <30F70A12.8FFA6F0@mindspring.com>, Damir Smitlener
<damir@mindspring.com> writes:

|> William Taylor wrote:

|> > Can you say "realtime 3D"? It is coming quickly, and Intel PC's are
|> > going to be way behind the curve. A 604 is 1/6 the price of a P6/200
|> > yet offers comparable FPU speed.
|> > Pentium's are even farther behind.
|> 
|> 
|> Pshaw. This is meaningless for at least 2 reasons. First, 604 computers
|> do not have any price advantage over Pentium Pro computers. Second,
|> nobody in their right minds would do 3d without hardware acceleration,
|> and the options for that will be fairly equal on both platforms with
|> maybe a small price advantage on the PC side.
|> 
|> It's the accelerators that made SGI such a hot item, not "its" CPUs.

Well, since you mentioned SGI ...

1) Lots of people, in their right minds, *would* do low-end 3D without
hardware acceleration for geometry, but using the CPU's FPU.  *SGI* does.

2) This makes perfect sense as part of a product line:

        a) You might design a system with everything built-in, then you
           get what you get [i.e., like a video game].
        b) But most people design systems that can take a range of graphics
           boards with different characteristics, with:
           - a range of frame-buffer sizes
           - 8-bit ... 24-bit color
           - Z-buffering, or not
           - texture [and size of texture memory], fogging, real-time
             anti-aliasing, various image and video processing, etc., etc.
           - hardware geometry acceleration.

        c) Consider the lowest-cost graphics board that *has* hardware
           geometry ... in many designs you can save some more money if
           you delete that hardware, and that lets you hit one more price
           point, and it may be adequate if you're doing fairly light
           3D.

        d) It does help to have a processor like the recent MIPS R5000,
           which, for its cost, is especially good at 32-bit floating-point
           (i.e., pipelined 1/cycle, with 4-cycle latency).  (Note, BTW,
           that the SPEC fp benchmarks have very little 32-bit floating-point,
           so they don't necessarily tell you much.)  A small sketch of what
           this kind of CPU-side geometry work looks like follows point e).

        e) Of course, there is a huge variation in performance: just inside
           SGI's product line are performance metrics that vary by at least
           1000:1 between a low-end Indy and a high-end Onyx +
           RealityEngine2s.  [This is why the commonly-made statement:
                "Add this graphics board to a PC and you'll get the 3D graphics
                  performance of an SGI workstation"
                is fairly vacuous, i.e., a more precise comparison than
                to something with a 1:1000 range would make more sense :-) ]
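
For anyone who has not done geometry on the main CPU, here is a minimal
sketch (mine, not SGI library code) of what "geometry on the CPU's FPU"
means in practice: every vertex goes through a 4x4 transform in 32-bit
floating point, roughly 16 multiplies + 12 adds = 28 FLOPs per vertex
before any lighting or clipping, which is why single-precision
multiply/add throughput matters far more here than double-precision
SPECfp numbers.

/*
 * Sketch only (not SGI code): transform an array of vertices by a 4x4
 * matrix in single precision, the way a board with no geometry hardware
 * leaves it to the CPU's FPU.  m[] is column-major.
 */
typedef struct { float x, y, z, w; } vec4;

static void transform_vertices(const float m[16],
                               const vec4 *in, vec4 *out, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        float x = in[i].x, y = in[i].y, z = in[i].z, w = in[i].w;
        out[i].x = m[0]*x + m[4]*y + m[8]*z  + m[12]*w;
        out[i].y = m[1]*x + m[5]*y + m[9]*z  + m[13]*w;
        out[i].z = m[2]*x + m[6]*y + m[10]*z + m[14]*w;
        out[i].w = m[3]*x + m[7]*y + m[11]*z + m[15]*w;
    }
}

At one single-precision multiply or add per cycle, a 200MHz CPU tops out
somewhere around 7 million such transforms per second before lighting,
clipping, or the application itself get any cycles; a board with hardware
geometry takes all of that off the CPU.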

-- 
-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-933-3090    FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311

Subject: Re: Debate: PowerMac "trounces" Pentium....???
From: mash@mash.engr.sgi.com (John R. Mashey) 
Date: Jan 15 1996
Newsgroups: comp.sys.powerpc,comp.os.linux.advocacy,comp.sys.mac.advocacy,
	comp.os.os2.advocacy,comp.sys.amiga.advocacy,comp.sys.intel,alt.2600

In article <4d7rp7$e7d@newsbf02.news.aol.com>, rexniger@aol.com (Rex
Niger) writes:

|>    Actually people have been doing it for years. 3D acceleration is a
|> great but expensive luxury, I think people are more drawn to the SGI

Well, a slight argument here:
        (a) Whether it's a luxury or not depends on what you're doing.
        (b) Despite SGI's skill in this area ... we always have customers
        who want *more* (both more at a specific price, and more at almost
        any price) and know what they'd do with it if they could get it!
        I must admit, it is frustrating to talk with a research head of a
        Fortune50 company who says: "If you can do XXX, we'd buy 50 of them,
        whatever they cost, otherwise we'll buy 1 of your fastest."
        A: sorry, we can't do that yet.
        "So when?"
        A: sometime after the year 2001 we'll get fast enough.

<good discussion of accelerators>

|> >>It's the accelerators that made SGI such a hot item, not "its" CPUs.
|> 
|>    I'm pretty sure that their Impact series of accelerators transforms the
|> polygons as well as rasterizes them, but I don't know about the Extremes
|> and other earlier accelerators. They could very well have worked like the
|> one's now shipping for PCs and Macs.

Here is the hardware-geometry status of more-or-less-current products
[Extremes are 2.5 years old, and rapidly being replaced by Impacts],
from top to bottom:

system  graphics board  hardware geometry
Onyx    RealityEngine2  Y
        VTX             Y
        Extreme         Y
Indigo2 Maximum Impact  Y (2 Geometry/Image engines, 960 MFLOPS for transforms)
        High Impact     Y (1 Geometry/Image engine, 480 MFLOPS for transforms)
        Extreme         Y
        XZ              Y (4 GE7 geometry engines, 400 MFLOPS for transforms)
        XL              N
Indy    XZ              Y
        XL/24-bit       N
        XL/8-bit color  N

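(For calibration, using my own rough bookkeeping of ~28 FLOPs per 4x4
vertex transform, 960 MFLOPS corresponds to a peak of roughly 34 million
transforms/sec and 400 MFLOPS to roughly 14 million; real rates are lower
once lighting, clipping, and the application itself are included.  Take
these as back-of-envelope numbers, not product specifications.)
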
So, as  I said in an earlier post, even if most of what you do uses
hardware geometry acceleration, there's always one lower price point,
at least, where you can reduce the price of the graphics board by omitting
the hardware acceleration.  Note that a given application may be:
        (1) Light on CPU and graphics
        (2) Light on CPU, heavy on graphics
        (3) Heavy on CPU, light on graphics
        (4) Heavy on CPU, heavy on graphics
and people buy different systems to cover these, although they do like to
be able to run the same binaries.

If it seems weird to have this many variations, it's sometimes hard to
explain: it's easier to see.  A good demo is when we show people around
our demo center, starting with a rotating lighted model:
        a) On an Indy, it's OK, flat-shaded.
        b) On an Indigo2 Extreme, it's OK Gouraud-shaded.
           But go back to the Indy, turn the Gouraud on, it slows down.
        c) On an Indigo2 Impact, it's fine with reflection-mapping.
           Go back to the Extreme, try that ... and it basically stops.
           Do not go back to the Indy.
Put another way, everybody does graphics demos that look good, by making sure
that the machine can do the job ... it's only when you see two systems
side-by-side trying to do the same thing, with 10:1 to 1000:1 differences,
that the differences become truly obvious.

        
-- 
-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-933-3090    FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311

From: mash@mash.engr.sgi.com (John R. Mashey)
Newsgroups: comp.arch
Subject: Re: Graphics Performance Plateau? (was Re: DEC/Intel suit)
Date: 6 Jun 1997 15:35:50 GMT

In article <y4soywyyuw.fsf@mailhost.neuroinformatik.ruhr-uni-bochum.de>,
Jan Vorbrueggen <jan@mailhost.neuroinformatik.ruhr-uni-bochum.de> writes:

|> you@somehost.somedomain (stan lass) writes:
|>
|> > I read somewhere that a monitor resolution of 4k x 4k was
|> > as much as most people could appreciate.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|> Well, let's see. I certainly need the equivalent of an A4 page, often A3 will
|> be better (for drwaings, or two A4 pages side-by-side). Also, 400 dpi is a
|> minimum if you want me to drop paper in favor of whatever display device you
|> can supply. A3 is 42 x 29.7 cm, i.e., 16.5 x 11.7 in. Thus, I want about
|> 6600 x 4700 pixels. (At 300 dpi, it's still 5000 x 3500 pixels.) OK, close.

In addition, there are of course demands for much higher resolution than
"most people" could appreciate, although most likely not in volume, since
such systems are likely to be expensive; i.e., some of the demands for
visual simulation are for coordinated monitors/projectors with many more
pixels.  Example, from a real conversation:

VP/Research for a very large automobile company:

Q: Can you give me a faithful simulation of what I see out the windscreen of
a car, such that a person with 20/20 vision cannot tell the difference?
If so, I'll buy 50 of them, whatever they cost.
If not, I'll buy one of the best you have now, and I'd like to know when you
think you will be able to do that.

[salesperson's eyes light up: 50 * ($0.5M-$1M) = $25M-$50M =
get to attend President's Club for a long time.]

A (me): Sorry, not yet.  On current trendlines, no earlier than 2004.

In particular, it is apparently still difficult to have enough resolution
to show people roadsigns at the distance they can read them in real life.
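
To give a feel for the numbers (my back-of-envelope arithmetic, not a
product specification): 20/20 vision corresponds to resolving detail of
roughly one arc-minute, so a windscreen view covering, say, 120 degrees
horizontally needs on the order of 120*60 = 7200 pixels across for a
single channel, and several such channels for a wrap-around view ... well
beyond a single 1280x1024-class display, which is why these configurations
end up as coordinated multi-channel monitor/projector systems.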

Other areas that wish for better resolution include defense image analysis,
and closer to home, medical image analysis.

--
-john mashey    DISCLAIMER: <generic disclaimer: I speak for me only...>
EMAIL:  mash@sgi.com  DDD: 415-933-3090 FAX: 415-932-3090 [415=>650 August]
USPS:   Silicon Graphics/Cray Research 6L-005,
2011 N. Shoreline Blvd, Mountain View, CA 94043-1389


From: jfc@athena.mit.edu (John F Carr)
Newsgroups: comp.arch
Subject: Re: Graphics Performance Plateau? (was Re: DEC/Intel suit)
Date: 6 Jun 1997 22:25:16 GMT

In article <5n9aom$6l3$1@murrow.corp.sgi.com>,
John R. Mashey <mash@mash.engr.sgi.com> wrote:

>Other areas that wish for better resolution include defense image analysis,
>and closer to home, medical image analysis.

I work for a company which makes medical printers.  Our marketing people
claim our product is the industry leader in image quality.  We print
approximately 4000x5000 pixels at 12 bits per pixel.  I'm a programmer,
not an image scientist, but my understanding is that precise intensity
control is more valuable than spatial resolution.  The average eye is
an 8-9 bit device but many radiologists want 10+ bits and can tell the
difference, or at least want 8 bits going through a lookup table (e.g.
choose 256 from 4096).
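
As a rough illustration (a minimal sketch assuming a simple window/level
mapping; not any particular product's code), "choose 256 from 4096"
amounts to a 4096-entry lookup table that maps each stored 12-bit
intensity to one of 256 display levels, with the viewer choosing where
the window sits and how wide it is:

/*
 * Sketch only: map 12-bit stored intensities to 8-bit display values
 * through a lookup table, spreading the 256 output levels across a
 * viewer-chosen window (center/width in 12-bit units, width > 0).
 */
static unsigned char lut[4096];

static void build_window_lut(int center, int width)
{
    int v;
    for (v = 0; v < 4096; v++) {
        long scaled = ((long)(v - (center - width / 2)) * 255) / width;
        lut[v] = scaled < 0 ? 0 : scaled > 255 ? 255 : (unsigned char)scaled;
    }
}

static void map_image(const unsigned short *in, unsigned char *out, long n)
{
    long i;
    for (i = 0; i < n; i++)
        out[i] = lut[in[i] & 0x0fff];   /* 12-bit pixel -> one of 256 levels */
}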

CRTs are 6-7 bits in an office environment, or 8 under good conditions
with some effort.  There are some technologies based on modulated reflection
which might bring good intensity control to displays (see the April 28 EE
Times, pages 37-38), but I think there may be problems making them large.


--
    John Carr (jfc@mit.edu)


