Sorry for the delay - ended up dealing with numerous phone calls...
> It's cool, I try not to accuse, but we still must take your word for
> it. Without a tangible copy of them, who is to say you didn't just
> make it up. ...
My word is good. Here is my main site btw:
http://www.sgidepot.co.uk/sgi.html
I've built up a particular reputation for my advice on matters SGI,
having helped hundreds of companies & individuals over the years
(DreamWorks, ILM, Hyundai, US Navy & Air Force, Univ. of Alaska, etc.)
I don't work for SGI precisely because the advice I give out is too
honest.
😀 There's a little bit of me in Star Wars II... (ILM uses
my site for staff training, or used to; doubt they have many SGIs
these days though).
I have over 200 SGI systems of my own btw.
> ... This isn't a thing I say just to make you look bad, but
> an opportunity for you to prove your credibility. ...
If you want to know what people think of me, just search for my name
on: forums.nekochan.net (the main SGI forum).
> ... Wow... I should
> work for the government!
Eek, that's a road to ruin these days. ;D
> Yes. I agree. Let's just hope with Windows 7 we will see the speed increases.
Ironically I do think Win7 will be an improvement, mainly because
otherwise it would mean they'd learned nothing from the Vista
experience and that seems unlikely. Ah well, time will tell I guess.
> They do. Crysis runs like a dream on my machine. The 4870x2 helps a
> lot at 1920x1080 with 4xAA. No chop at all!
4870x2 eh? Nice! 8) I was talking about 2560 x 1600 though, with
everything maxed. Here's the Stalker ref:
http://www.guru3d.com/article/stalker-clear-sky-graphics-card--vga-performance-roundup/5
I guess if it had perfect scaling the 4870x2 would give about 30fps at
this level. My card would manage about 5 or 6.
😀 Funny how fast
things change...
Hmm, rather strange that tomshardware only tests up to 1920 x 1200.
So what do you get with FRAPS when running Crysis at 1920x1200, 4xAA,
8xAF, Very High Quality? tomshardware's numbers say a 4870CF (which
should be much the same as a 4870x2) gives 14.5 fps with these settings:
http://www.tomshardware.co.uk/charts/gaming-graphics-charts-q3-2008/Crysis-v1-21,758.html
Hmm, I had the distinct impression from reading numerous forum sites
that the current rage for those with top-end cards like your 4870x2
is playing at 2560 x 1600. What do you run it at? I run my 8800GT at
2048 x 1536, just a dribble in between.
But I don't use AA in
Oblivion (the high res combined with 16X AF and max detail settings
works better and is faster) and of course AA in Stalker isn't an
issue (again I just run it at 2K with 16X AF and all detail settings
maxed). Haven't installed CoD4 yet, trying to avoid opening the box lest
it devour time I ought to be spending trying to earn a living.
😀
Mind you, one thing I've noticed about review sites: I do tend to get
better results than many reviews often show for various game fps
scores; do other people find this is the case as well? Do you? Reading
recent reviews of the GTX260 Core 216 and other cards, my 8800GT
performs significantly better than many sites' own figures suggest it
should. Dunno, maybe the RAM/mbd combo is helping, but I had the same
experience with my old Asrock mbd and X1950 Pro AGP, getting better
numbers than review sites were seeing with much more expensive mbds
(mine was only $70) and PCIe versions of the X1950. Especially weird
given my CPU is only a 6000+, ie. 3DMark06 scores are distorted
downwards somewhat by lower CPU scores compared to Intel quad-cores
with the same gfx. Based on reviews of the 8800GT, I was delighted to
get a 3DMark06 of 11762:
http://www.sgidepot.co.uk/misc/mysystemsummary2.txt
http://service.futuremark.com/compare?3dm06=7303357
It's the SM2/SM3 scores that pleased me the most - the low CPU score
was no surprise of course.
> I agree. But again, we are talking about Microsoft.
Did you hear the tale from the former deputy MS CEO years ago? (from
his book; forget the guy's name offhand). This is back in the days of
developing Win 3.1. He came into a room where people were discussing
the woeful speed of the code that redraws onscreen GUI window panes.
He looked at the code and asked, "Who wrote this sh*t??" Gates walked
out of the room. Someone said, "He did," jerking a thumb at the
departed Gates.
😀
> There is a little between 30fps and 60fps.
You're kidding right?
😀 I notice a huge difference, but then I'm
very used to high quality displays having been involved with SGIs
and visual simulation stuff for so long. In such industries, there's
a saying:
"60Hz, 30 hurts".
😀
> ... Above 60fps, you won't see it. ...
I can, but yes, most can't. It varies enormously between people and
was one of the issues I studied for my dissertation (side effects
of playing Doom; ref Washington Post, LA Times, Seattle P.I.).
Check the back of the box for the Ultimate Doom combo edition - the
PR quote is mine.
> ... So if you can't see it, is it an issue?
Clearly not in terms of gaming for the majority. What I meant was, if
just the choice of OS is impacting on 3D speed in a significant way,
that *is* an issue. It can be more extreme than that though; I
remember years ago a Texaco employee saying they saw a 100% speedup
when switching from Windows to Digital UNIX on their Alpha system
(I think it was).
These days, many companies use the same consumer products for doing
proper work, not just games, and they benefit from every ounce of
extra speed they can get.
> Please see The IT Crowd episode 1 for Moss's explanation of
> invalid memory.
Linux isn't immune to poor coding of course though. When I tried
Slackware on my laptop, I was most surprised at the way it was
grabbing so much RAM when first booting. Kinda slow. This was quite a
while ago, perhaps it's better now. Really should try Gentoo sometime
I suppose.
> I'm confused.... Honestly... Can you explain this a different way?
Apologies, sometimes I'm too used to what I'm referring to, forget it
might not make sense.
😀
See:
http://en.wikipedia.org/wiki/SGI_Visual_Workstation
The VW320, released in 1999, used a unique architecture in which the
system only had _one_ pool of main memory for everything (main RAM,
video, textures, etc.), ie. a UMA design like the IRIX-based O2 (SGI
called it IVC for the VW320), but with much faster 3D speed and
higher memory bandwidth. This has major advantages for certain types
of task, in particular large-scale 2D imaging (very fast 2D fill
rates), uncompressed video work, VR (urban modeling, etc.),
volumetric medical/GIS, broadcast graphics - anything that involves
lots of texture and/or an interplay between texture and video. This
meant, for example, spectacular performance for apps like Shake, or
processing large 2D images, or modeling 3D objects with huge texture
sets. The system had a max RAM of 1GB, so in theory it could provide
over 800MB for textures in 3D work, ie. just limited by main RAM
size. A central ASIC called Cobalt (the real heart of the system)
handles all main 3D functions as well as RAM access, but the main CPU
(single or dual PII/PIII, up to max dual-PIII/1GHz) does all geometry
and lighting calculations. I never bothered making a diagram of the
VW320, but my O2 page has a diagram which conveys exactly the same
idea (IVC works in the same way):
http://www.sgidepot.co.uk/o2-block-diag-2.gif
(MRE = Memory & Rendering Engine, DE = Display Engine, IOE = I/O Engine)
The basic gfx speed (textured fill rate) is fixed (around 430M
full-featured pix/sec), while geometry/lighting speed scales with CPU
power. With the best possible dual-1GHz config, it can outperform a
GF3, which for its time was astonishing, though few upgraded to that
degree when the product was current as the costs were too high (bad
marketing, overpriced reseller model). The system has no North Bridge
or South Bridge at all. It supports NT4, Win2K and Linux, has a
dedicated PCI64 bus just for the system disk, video I/O ports
included as standard, and various other cool ideas.
Just to emphasise: when a 3D app requests a texture, in a normal PC
this data must be copied from main RAM to the gfx card, thus limiting
texture upload speed to either PCI speed or the early AGP rates of
the day (which, btw, thanks to MS not doing proper AGP coding in NT4,
were only the same as normal PCI speed under NT, ie. slow). On the
VW320 though, no data needs to be copied; it's already
where it needs to be (main RAM _is_ video RAM), so just pass a
pointer and that's that. Max texture upload rate from RAM to Cobalt
was thus more like 3GB/sec, very fast indeed back then.
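
Just to put very rough numbers on that (a wee C sketch of my own, using
nothing but the theoretical peak rates, so treat it as illustration
rather than measurement):

/* Time to move one big texture/image across the bus.  PCI 32-bit/33MHz
 * peaks at ~133MB/sec (and that's all NT4's AGP handling gave you);
 * the VW320's shared RAM runs at ~3GB/sec - and on the 320 even that
 * copy simply never happens, since the data is already in place. */
#include <stdio.h>

int main(void)
{
    const double image_mb   = 50.0;               /* e.g. one 50MB 2D image */
    const double bus_mb_s[] = { 133.0, 3000.0 };  /* PCI 32/33, VW320 UMA   */
    const char  *label[]    = { "PCI (AGP under NT4)", "VW320 shared RAM" };

    for (int i = 0; i < 2; i++)
        printf("%-20s %7.1f ms to move %.0fMB\n",
               label[i], 1000.0 * image_mb / bus_mb_s[i], image_mb);
    return 0;
}

Roughly 375ms vs. 17ms for a 50MB image, and that's before you count
the fact that the 320 skips the copy altogether.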
Disadvantages are a much lower peak bandwidth between main CPU and
RAM, so it's not so good for tasks that are mainly CPU/RAM intensive,
such as number crunching, video encoding, animation rendering, etc.
Some other bad things were the use of proprietary memory which was
too expensive, and the use of 3.3V PCI which rather limited available
option cards.
The VW540 used the same architecture, but with up to four XEON
PIII/800 CPUs instead and a max RAM of 2GB (I have a quad-XEON
PIII/500 system with 1GB RAM, used for uncompressed video editing).
The 320 normally shipped with an IDE disk, while the 540 normally
shipped with U2W SCSI. The 540 was very popular with defense
companies btw, eg. the UK's Ministry of Defence used it for the
Tornado fighter programme. And when forces went into Bosnia, they
took with them thirteen VW320 systems (6 months before the system
officially launched) as they were by far the fastest systems
available for dealing with large 2D sat images. The 320 was the first
time I'd ever seen any PC load a 50MB 2D image in less than a second.
There were later VW systems that did not use the IVC design (230, 330
and 530), but they were just conventional PCs and were totally pointless
overpriced VIA chipset yawn-boxes.
Anyway, where I worked as the main admin back in 2000 to 2003
(www.nicve.salford.ac.uk), they were dealing with large models of
urban areas, large in the sense that the models typically had about
200MB of texture, but not really that many polys in the scene (low
tens of thousands tops). The VW320s used (a dozen of them) were
single PIII/500 or 600, couple of duals, most with 512MB RAM. Four
years after initial purchase, the more complex models researchers
were dealing with could be navigated at about 10fps. The choice was
to upgrade the CPUs/RAM (quadruple the GE speed, double the ability
to cope with lots of texture), or replace them entirely with modern
PCs. The latter path was much cheaper of course (SGI's prices were
crazy - remember what I said earlier about why I don't work for them!
😀)
So the dept. ordered half a dozen sets of parts/cases and built their
own new PCs, all GF4 Ti4600, P4/2.4, etc. On paper, waaaay faster than
the VW320s.
Trouble was, because it made no difference at all to performance, the
modelers had been using large composite textures in the models, ie.
16K x 16K pixels (50MB file), dozens of smaller textures in a single
image, sub-textures accessed during rendering simply by referring to
coordinates and width/height crop within the image. Since texture
data does not have to be copied to dedicated video RAM, there is no
speed hit at all for using this approach. Plus of course they were
luxuriating with full 32bit images every time, ie. no use of reduced
formats, masks, decals, colour maps and other techniques for reducing
texture usage. And there were no level-of-detail constructs - no need
for them on a system where piling on more texture doesn't affect
performance.
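
For anyone who hasn't seen the trick, here's roughly what "referring to
coordinates and width/height crop within the image" amounts to in code
(purely my own illustrative sketch, not anything the researchers
actually wrote):

/* Pick a sub-texture out of one big composite image by pixel offset
 * and crop size, converted to normalised 0..1 texture coordinates.
 * On the VW320 this costs nothing extra; on a 64MB card the whole
 * 16K x 16K composite has to be resident just to sample one tile. */
#include <stdio.h>

typedef struct {
    float u0, v0, u1, v1;   /* normalised corners of the sub-texture */
} SubTex;

static SubTex sub_tex(int atlas_w, int atlas_h, int x, int y, int w, int h)
{
    SubTex s;
    s.u0 = (float)x / atlas_w;
    s.v0 = (float)y / atlas_h;
    s.u1 = (float)(x + w) / atlas_w;
    s.v1 = (float)(y + h) / atlas_h;
    return s;
}

int main(void)
{
    /* eg. a 1024x1024 facade stored at pixel (4096, 8192) in a 16K atlas */
    SubTex f = sub_tex(16384, 16384, 4096, 8192, 1024, 1024);
    printf("uv range: (%.4f, %.4f) to (%.4f, %.4f)\n", f.u0, f.v0, f.u1, f.v1);
    return 0;
}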
The new PC gfx had 64MB RAM IIRC, leaving probably not more than 35MB
for textures. Apart from not being able to hold the full data set
anyway, the use of large composite textures meant insanely intense
memory thrashing, ie. constantly reloading multiple 50MB images (just
one of which couldn't fit onto the card) again and again for every
frame. So instead of the expected minimum 5X speedup, the initial
performance was just 1 frame per _minute_.
😀 Hmm, that's actually
600X slower...
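
To give a feel for why it collapsed quite so badly, here's a
back-of-envelope sketch (my own assumptions and round numbers, not
anything measured at the time):

/* With ~35MB usable on the card and ~200MB of composite textures in
 * the scene, at minimum the whole texture set has to cross the AGP
 * bus every frame.  Even at AGP 4x's theoretical ~1GB/sec peak that
 * alone caps things at ~5fps; real thrashing (the same 50MB composite
 * evicted and re-sent several times per frame, plus driver overhead)
 * drags it far lower still - hence frames per minute. */
#include <stdio.h>

int main(void)
{
    const double agp4x_mb_s = 1000.0;   /* theoretical peak               */
    const double reload_mb  = 200.0;    /* whole texture set, every frame */
    const double sec_per_frame = reload_mb / agp4x_mb_s;

    printf("bus traffic alone: %.2f sec/frame (%.1f fps at best)\n",
           sec_per_frame, 1.0 / sec_per_frame);
    return 0;
}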
So, they had to redo the models, stop using composite textures, be
more sensible about image quality/techniques, and build in new level
of detail controls.
My point was that the use of a different kind of system had made the
designers lazy, despite warnings from me. Likewise, ever more RAM and
CPU speed available as a baseline in PCs today makes OS designers
lazy. Why code efficiently when the system will be quicker and so be
able to cope with the increased overhead? Trouble is, this hammers
users of existing systems who are forced to upgrade due to MS
policies on OS support, etc.
> You can't say its bad now with a retail copy when you used a beta.
> Many changes have happened.
In many ways yes I'm sure, but it's still the case that the
recommended RAM config for a Vista system is 50% more than for XP.
> I think from 2000 to XP there was a great jump of RAM needed. 2000
Actually I'd say 2000 was kinda bad anyway.
😀 That was one policy of
mine that found favour when I was admin at NICVE: even the
secretary's basic PC had 1GB RAM so it could cope with the RAM hungry
MS office apps.
> sweet spot was around 256MB-512MB. XP's is 2GB. If we use 2000 sweet
For gaming, yes, but 1GB is plenty for XP for most tasks. My XP
laptop has 1GB and it never has an issue with RAM resources.
> spot with 512MB and compare it to XP's sweet spot of 2GB, XP has a
> 400% increase (check my math).
#include <humour.h>
Hate to point this out but 2GB over 512MB is a 300% increase, not
400%. ;D (100% more = 2x, 200% more = 3x, 300% more = 4x)
Percentages are a PITA. :}
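
Since I've already #included humour.h, here's the sum in compilable
form (all names mine, obviously):

/* percentage increase = (new - old) / old * 100 */
#include <stdio.h>

int main(void)
{
    const double old_mb = 512.0, new_mb = 2048.0;
    printf("%.0f%% increase, ie. %.0fx the original\n",
           100.0 * (new_mb - old_mb) / old_mb, new_mb / old_mb);
    return 0;
}

/* prints: 300% increase, ie. 4x the original */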
> I am glad that you didn't take me too seriously. I get into issues
> because I know I come off really aggressive. So, thanks.
Hehe, I had the 1st ever web site on the N64; believe me, your posts
are definitely not aggressive. *grin* You should see what gets said
when 14 year olds argue about console A vs. console B...
😀😀
Ian.