OpenGL 3 & DirectX 11: The War Is Over

be able to run both .exe and whatever MAC executables are.

Linux, unlike OpenGL, truly is open source. Your average free genius could make exe files run on it if he wanted to. But he can’t. All of them together can’t, and it’s not due to lack of effort either.

Google might be able to solve this in a better way than Wine did, but I wouldn’t expect it to happen overnight. Emulating Mac probably wouldn’t be as difficult since it is also based on an open-source OS. This too would likely be an issue for Steve, since who would then pay 2x more for the same hardware components for an OS that could just be run on Google OS - an OS that can run Windows stuff too? Such an OS would leave Apple out of the computer business, and if exe and D3D support could be had, it would surely leave MS angry as well.

So, while not an impossible task, it’s an unlikely one.

Take Chrome as an example. Everything you need is there

It’s a cool idea and a neat browser, but hardly unified and complete. The biggest problem with it is that it has no mouse gestures. I can’t imagine web browsing without them. It feels so dark ages.

There are many more missing things, like Flashblock (who wants to use their bandwidth to load commercials that no one cares about, which only slow down your computer and bog down the internet connection while at it)….

Chrome, once finished, has the potential to be better than its competition.

I want to see a new OS, a good OS. I think Google is the only one that is capable of delivering it.

We fully agree, but that’s not to say I’d be willing to migrate to the new imaginary OS just yet 😉

I guess we can ask that about home computers and quads, with hex and octo cores (physical, double that logical) soon to come.

More cores = faster 3D rendering - simple and useful. I can’t have too many cores. There is no such thing as too much speed.

Many forget that some apps as written will run slower when run
as 64bit (all data requests 2X larger), unless hardware changes
mitigate the higher bandwidth requirement.

What you neglected to realize is that those applications would
a) run much faster than anyone expects them to
b) become obsolete
c) become Mac heroes and another reason why you should pay 2x more for the same hardware, just so you could run old 32bit applications the way Mac was designed to…

I’m no techie so I can’t say for sure, but:

In Win XP x64, all 32-bit applications run at full speed, and any slowdown brought on by the 64-bit OS is too negligible to notice, if there is any at all.

64 bits means the ability to address more than 2GB of RAM per application and more than 4GB of RAM in total - useless for your Solitaire and Chess, but very useful for 3D or video editing, the last stand of the dying OS.
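To put rough numbers on that, here is a throwaway C illustration of the address-space limits (nothing from this thread; the 2GB-per-process figure is just the default split on 32-bit Windows):

#include <stdio.h>

int main(void)
{
    /* the size of a pointer decides how much memory one process can address */
    printf("pointer size here: %zu bytes\n", sizeof(void *));

    /* 32 bit: 2^32 bytes = 4096 MB of address space in total, and 32-bit
       Windows normally hands a single process only 2 GB of that */
    printf("32-bit address space: %llu MB\n",
           (unsigned long long)((1ULL << 32) >> 20));

    /* 64 bit: even the 48 bits current x86-64 chips actually wire up
       comes to 262144 GB, so one 3D or video editing app can map far
       more than 2 GB if it wants to */
    printf("48-bit address space: %llu GB\n",
           (unsigned long long)((1ULL << 48) >> 30));
    return 0;
}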

Once OGL is “fixed”, the Mac will need 64 bits for 3D as well. Right now, having a comparable 3D API is paramount if it wants to survive the 3D war - and gathering from the posts above and Steve’s stance on 3D, that’s not very likely at the moment.
 
I didn't 'neglect to realise' those things btw. I merely meant most people *assume* 64bit
means faster when that's not always the case, and I wasn't talking about how 32bit apps
run on a 64bit OS.

I'm fully aware of what 64bit means. 😉 I own more than 200 SGIs... 😀

Ian.

 


I think knocking Apple out is a good idea (I own a Mac; although good, it's just missing something). I'm sorry, I mean knocking them out of the computer market. They are needed in the portable media market.

Would you as a consumer care about who is angry with who? All you want is something that works, is reliable, and fast. Google gives people just that.

As for it happening overnight... that would only leave my pants wet. :??: They wouldn't release anything as a full edition rushed (err, um.... OK, I don't think they would release a beta OS). The time would be taken to get it done right. Plus, who said they haven't been working on it? After all, they did deny Android, but here we are.




Yes, I agree with you. But all we have as of yet is a beta option. In time we will see how they develop it. I believe once they are done with it, it will rival IE and FF.





Agreed.




I agree. Speed is good. But I am a humble gamer. There is a point when hardware will get too far ahead of the software. When will we hit this point? Have we? Will we? Will we continue to get more speed from more cores or just have more cores sitting around? I think this is a main issue that many people are starting to think about. For servers and professionals it's great, but what about home users?



OK, the only thing I can really use to back this up is by looking at the 32-bit OS. Did that 32-bit OS slow down 16-bit operations or 8-bit operations? It normally makes things faster in life. But there are those rare occasions. Moving from XP Pro to Vista Ultimate 64-bit, I noticed a speed boost. OK, I did get a new system that is a quad and at the time had only 4GB of RAM (now I have 8GB). But it is commonly known that you should have at least 4GB of RAM for 64-bit Vista.

Maybe time has come for Steve to roll over. Apparently an Apple a day didn't keep the doctor away.
 
Would you as a consumer care about who is angry with who? All you want is something that works, is reliable, and fast. Google gives people just that.

I don’t give a flying fig about angriness, as a consumer. I’m sure MS/Steve would find a way to make Google care.

all we have as of yet is a beta option. In time we will see how they develop it. I believe once they are done with it, it will rival IE and FF.

Yup. Chrome is a gem in the rough. No denying that.

There is a point when hardware will get too far ahead of the software.

Software has always had to adapt to hardware. There is no such thing (in the perceivable future) as too fast hardware. Programs need to be written with parallelization in mind. A single core can’t get much faster, so OK, we throw in more cores. GPUs have been doing that for a long time now. It’s about time CPUs joined.
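Roughly what “written with parallelization in mind” means in practice - a minimal, made-up C sketch that splits one big loop across four worker threads (nothing here is from an actual engine):

#include <pthread.h>
#include <stdio.h>

#define N_THREADS 4            /* pretend quad core */
#define N_ITEMS   (1 << 20)

static float data[N_ITEMS];

struct slice { int start, end; };

static void *worker(void *arg)
{
    struct slice *s = arg;
    for (int i = s->start; i < s->end; i++)
        data[i] = data[i] * 2.0f + 1.0f;   /* stand-in for per-vertex/pixel work */
    return NULL;
}

int main(void)
{
    pthread_t tid[N_THREADS];
    struct slice s[N_THREADS];
    int chunk = N_ITEMS / N_THREADS;

    for (int t = 0; t < N_THREADS; t++) {
        s[t].start = t * chunk;
        s[t].end   = (t == N_THREADS - 1) ? N_ITEMS : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, worker, &s[t]);
    }
    for (int t = 0; t < N_THREADS; t++)
        pthread_join(tid[t], NULL);

    printf("done: %f\n", data[0]);
    return 0;
}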

When we reach holodeck visuals, there COULD come a time where a gamer could say that he already has the best visuals and that no faster computer is needed. The rest of us will disagree, even then.

For servers and professionals it's great, but what about home users?

When your web browser is built to use multiple cores, that’s when your home user starts benefiting from multitasking - think Chrome here 😉

Maybe time has come for Steve to roll over. Apparently an Apple a day didn't keep the doctor away.

Hehe. Good one. But the next Mac OS, Snow Leopard, is supposed to be fully 64-bit, not just virtual (and useless) like now. Combine that with a (possible) future OGL and we might have a competitive platform.

Is that platform worth the same money as the windows one? ….eh… maybe.
Is it worth 2x more $ for the same hardware? Not a chance.
 
eodeo:

I do agree with you a lot. I just don't know about Apple (as I type my reply to you on my MacBook). The issue with Apple is we will have to wait for the future to maybe see a difference. Why wait when you can have it now with Microsoft? The other issue with Apple is their customer respect, or the lack thereof. You know they are going to charge their customers out the butt for their new OS. Even if you just bought one. Stevie acts like he hates his customers by giving them nothing... other than something else to buy. Steve makes me angry!






Mac killed my inner child.
 
The reason the major apps are mostly Windows-only is market share. The reason the industry-standard pro 3D apps are almost exclusively Windows (mainly because most games are exclusively Windows) is all thanks to DirectX. DirectX is so wonderful for developers, and free, because it is a Trojan horse, so to speak, in the larger OS war. This graphics API is the battle, not the war. DirectX also allows Microsoft to pretty much crap in a paper sack and write Windows on it *cough*Vista*cough* if they are so inclined, and everyone will have to buy and run it. Everyone will have to jump through their hoops even if it stinks, and as they give their paying customers, the OEMs and the GPU vendors the back of their hand as gratitude, we all have to keep saying "thank you kind sir, may I please have another".

It's also a stab at the console competition for their Xbox 360, as they have made sure the Windows PC (which, according to them and most end users, is the only PC) is effectively geared for ease of porting and development for the 360. The PS3 uses OpenGL, as does any 3D app or game on a Mac or Linux, and anything that wishes to be multi-platform, since OpenGL runs on Windows as well. What happens to your content pipeline when making an OpenGL PS3 game if your 3D app no longer supports OpenGL?

OpenGL is the open, cross-platform standard. Though it started with SGI, it was not proprietary and was given over to the public as the standard. This is why it has survived the relentless, brutal assault over the course of a decade, funded to the tune of hundreds of millions of dollars by Microsoft. It is subject to both the benefits of open standards (compatibility and portability) and the pitfalls (disagreement, and a bit of delay while everyone settles whenever it is updated). It is most often comparable in features to DirectX (it has high-level shaders, for example) and generally keeps up with the most valuable features.

As Microsoft demands more and more of the GPU vendors' driver-development resources, and as they continue to stifle the use of OpenGL in games and 3D apps - to the point that there are few big commercial OpenGL games on Windows and now even fewer cross-platform ones - the amount of effort devoted to Windows OpenGL performance in drivers has surely suffered. If you are ATI or Nvidia and you just released a new Windows driver, what concerns you more right off the bat: how well your OpenGL performance and features are implemented, or your DirectX performance? Developers don't have enough resources and are happy to settle on one API; unfortunately, nowadays that is usually not the standard Open Graphics Library but the Microsoft proprietary API.

Small market share for games on a platform means few games developed; few games means fewer gamers running that platform, which results again in less development. So the 3D apps (3ds Max) and game engines don't focus on that platform's API as heavily, since no one is developing for it, and thus even developers who were so inclined have less robust support in high-end industry-standard tools and fewer options / more difficulty developing on these non-Windows platforms. It's the chicken and the egg - and which came first? Simple: the chicken, and its name was DirectX. As long as DirectX is proprietary and people still use computers that don't run Microsoft Windows, OpenGL is going to be alive.
This would only change if DirectX were redesigned to be a cross-platform open standard API like OpenGL (from a technical standpoint probably not that feasible), but that would never happen anyway, as it defeats the whole main purpose of DirectX's existence as far as Microsoft is concerned. The other way the war will be over is when all people finally cave in (as Microsoft believes they will) to running Windows, despite its quality or lack thereof, on their computers. This is already happening, as Macs have had to start running Windows just to keep graphics pros and gamers able to run some of the deal-breaking apps and games. This is just another nail in the coffin for anything trying to compete with the MS monopoly via such silly and futile tactics as trying to offer a better, more advanced or more robust product.

Vista is not only a second-rate OS but just another foothold on the industry. I imagine in the future streaming media (Netflix instant watch) and DVD/MP3 and other media producers will start pushing "requires Windows Vista or later", and if you want to access DRM'ed content, the no-longer-supported legacy XP OS will not be an option. Apple will have their own DRM offerings, but maybe you'll need to Boot Camp Vista for ever more of your media needs. At this point I think MS will release a true all-out DRM implementation. This will be beyond Vista's current tilt-bits requirement for driver certification, which uses resources to poll for DRM compliance hundreds of times a second to monitor and kill your graphics or audio subsystems on a whim. (Yes, this is why it's really so resource intensive when other OSes have surpassed Aero's eye candy for many years on a third of the hardware resources.) By then most everyone will have more than 4 gigs of RAM and be on 64-bit Windows, and that is where they have drawn the line: no non-Microsoft-approved drivers running, ever. (I have seen the consequences of trying to run a competitor's product like VMware, watched it disable my application, and then, when I got it working, saw them patch it out as an "exploit" and offer the workaround that you should use Microsoft Virtual PC instead - and here, it's free, because we are just so nice.)

The tie-in tech they have already patented for a subscription-based modular OS model is the next move, I fear. The newest Windows Basic/Standard/Home, or whatever they call future versions, will likely be much cheaper, but to play games you will need the gamers module: casual (10 hours a week) for $4 a month, or hardcore unlimited 3D acceleration for $7 a month, or maybe each DX version costs you. Maybe they will let you customize your PC's functionality from a predetermined list (business user, casual, graphics pro, gamer, etc.) and tell you it sure beats having to choose from 6 different versions of Windows. They can lock down and enforce your hardware's functionality then. So even though you paid for that nifty graphics card, what good is it without the Games for Windows gaming module from Microsoft?

Most major software vendors - Autodesk, Adobe - want you going over to a subscription-based pricing model, and why not? They can only re-invent the wheel so many times and listen to you gripe about how they added 4 minor useless features or bundled their other apps into a bloated suite, then stopped selling the last version to try and get a revenue stream. Sometimes an app is just done and works for many, many years with only minor need for technology improvements and updates/new features. So they would rather have you pay once and then charge you lower amounts continually for usage and to maintain your license. This is where Windows may be headed as well.
 
Oh, I almost forgot, for those asking why there isn't an open-source OpenGL implementation (yes, OpenGL is a spec): what about Mesa?

Mesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics. A variety of device drivers allows Mesa to be used in many different environments, ranging from software emulation to complete hardware acceleration for modern GPUs. This is how it's done for many Linux apps.

I think things would be in a much better state for multiplatform PC gaming if not for the MS monopoly. Often their tech can be inferior (not always), and DX is easy, powerful and liked by developers, but when MS drops the ball they get a free pass due to everything being proprietary Windows. I don't think their offerings are always on top because they are better - maybe sometimes they are - but mostly an MS offering starts on top, so it gets better support or is used even if it lacks. I don't think they have ever really had a vastly superior OS to the alternatives, aka Mac, though Linux has only recently become a good desktop with the easy-to-use, rock-stable desktop distros like Ubuntu.
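For the curious, this is roughly what using Mesa off-screen looks like - a minimal sketch built on Mesa's OSMesa interface, rendering with plain OpenGL calls in software, no GPU or window system needed. Check your own GL/osmesa.h before trusting the details:

#include <GL/osmesa.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int w = 256, h = 256;
    unsigned char *buf = malloc(w * h * 4);            /* RGBA framebuffer in plain main RAM */
    OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);

    if (!buf || !ctx || !OSMesaMakeCurrent(ctx, buf, GL_UNSIGNED_BYTE, w, h)) {
        fprintf(stderr, "OSMesa setup failed\n");
        return 1;
    }

    /* ordinary (legacy) OpenGL calls from here on - same spec, software renderer */
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
        glColor3f(1.f, 0.f, 0.f); glVertex2f(-0.8f, -0.8f);
        glColor3f(0.f, 1.f, 0.f); glVertex2f( 0.8f, -0.8f);
        glColor3f(0.f, 0.f, 1.f); glVertex2f( 0.0f,  0.8f);
    glEnd();
    glFinish();

    printf("first pixel: %u %u %u\n", buf[0], buf[1], buf[2]);
    OSMesaDestroyContext(ctx);
    free(buf);
    return 0;
}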

Vista is such a joke that I dropped Windows for Ubuntu, and I'd been designing and supporting MS desktop clients and network solutions for many years, and using and gaming on Windows on my home PC. I still like the freedom to build my own system, and I know Mac hardware is overpriced even though the OS is very good and mated perfectly to the hardware.

Vista was just so bad they are spending 300 million on a marketing campaign whose tag line is basically "Vista: it's really not as bad as everyone says" or "I'm a PC, I like cheese". Yeah, I guess they can't talk about the actual OS much, since it's a bloated, annoying, convoluted disaster. They have demoted many execs at MS over it, and the title of one of the PC magazines I saw in the store yesterday read "Interview with Microsoft development - how Vista became the 600 million dollar failure". Not that I trust their objectivity, since they just want to sell issues, but I used Vista extensively and it's true. It has nothing compelling (DirectX 10 can't save it - I'll play DirectX 9 games on my Xbox 360 any day before I let that disgrace of an OS ever touch my hard drive or go near my data again). What Vista does try to do, it fails at pretty miserably. That's why the best praise you hear in its defense from its users is "It's fine and I don't have any problems, what are you complaining about, just buy faster hardware" (I do have fast hardware and it still sucks, even though it was not crippling my PC unbearably). Translation: "It came on my new PC and I have to use it because it's Windows, and besides, I don't know how this hardware should be performing on a more efficient OS anyway, so... um, you know, it's fast I guess, and OK, and I like it fine."

If I was a casual PC user I'd buy a Dell with Ubuntu pre-installed. The difference is night and day, and once I got used to the differences I could never go back to running my PC off Windows. For 3D or professional apps you're pretty screwed; I guess I'll dual-boot XP and try to think of it as a lame application framework that requires a reboot, rather than an OS. However, after XP dies, if I can purchase commercial apps or games for Ubuntu I'll line up to get my hands on them; otherwise I'll pass, as it's not worth it. MS would have to completely change direction from whatever produced Vista and come up with something pretty advanced, sleek and exceptional to get me to buy another MS product, and I don't think that's likely, so it's console gaming from now on - heck, online gaming is getting there on the console with less hassle. Thanks MS: your console in my living room, but your newer software not on my PC. See, I'll give them credit when it's due, as the one product works well and delivers the goods, and the other does not and is a joke.

 
What about those of us who run XP, have no need to 'upgrade' to that pretty but slow Vista but would still like to buy the latest game when it's released? To my knowledge there is no reason that DX10 / 11 CAN'T be implemented into XP, they just want to force us to spend money.

And, why does no-one comment on the DRM issues whenever Vista is mentioned?
 
Because the DRM issue isn't really an issue. I have over 60 days of music and most of that (like 95% or more) is music I received from friends. I do not have any issues with DRM at all. I have not had any issues either, so I am not worried about it.

DX10 and 11 won't be able to be implemented on XP because of some of the core files needed from the Vista kernel. That is why the piping project that has been going on for some time now, trying to bring DX10 to XP, hasn't worked. Please correct me if I am wrong.

Vista is not as much slower an OS as people like to think. They think it is because they read somewhere that it is; that is not first-hand experience. Vista is not normally an upgrade-your-OS OS, but more of a buy-a-new-computer OS. You can get a Dell for around $550 with 4GB of RAM and a quad core, so "a new computer would be too much money" is no excuse anymore.
 
Vista is slower because it needs a lot more resources to run; it's just more MS
bloatware. An avg consumer PC shouldn't _need_ 4GB RAM.

The GUI changes are just more bells & whistles that are not remotely necessary.
An OS should do its core functions well and stay the hell out of the way of how the
user wants to do everything else. Windows just gets in your face all the time. That's
why my PC is staying with XP Pro (definitely MS' peak product IMO) and my main
systems are SGIs.

Re the original subject: I can't see OGL dying. The target markets for it are too different
for that much of a collapse in usage to occur just because of what MS is doing with DX.

Ian.

 


I wouldn't be so quick to say that it is slower because of "bloatware", or to claim it's slower because it needs more resources. It is slower because it is a modern OS. XP is slower than 2000, which is slower than 98/SE, which is slower than 95, which is slower than 3.##, which is slower than DOS. It's a normal trend. If anything, Vista is slower because it is more secure. The firewall in Vista is much, MUCH better than XP's, not to mention that it puts all users in a position just like Linux.

I do agree that the OS should allow the user to do what they want, but you must understand that not all users can code C# to make custom apps for themselves to alter the OS. The OS must be made for the general public so the masses can use it. We, the 5%-10% of "elite", ahem, enthusiast users, want something else but have to understand that we cannot always have our cake and eat it too. The core kernel of Vista is drastically different from XP's, and that is why DX10 is on Vista and not XP. The core kernel of Vista will be used in Microsoft’s next few OSes, so we might as well get used to it because it will be around for a while, or so they say.

Yeah.... :??:
 
It's slower because it's a modern OS? 😀 Sorry but that's just nonsense. And XP is
not slower than 2000. I did tests that showed very distinctive speed increases,
especially for some 3D tasks.

And the idea that a firewall slows down a 'modern' OS is just false. I run a firewall
on a simple 180Mhz SGI Indy and it barely uses more than 1% CPU to do it.

Vista needs 50% more RAM than XP. Vista is definitely bloatware.

And it doesn't put users in a position like Linux. The latter gives the user proper
control over how the system works. Vista does not.

One doesn't need knowledge of C to customise how a UNIX system operates.
With Vista and other MS products, one barely has any ability to customise
things properly at all.

Modern OS... hehe, what a joke. There is simply no excuse for an OS to need
the kind of resources Vista consumes.

Ian.


 


OK, I can give you that I did not think out my reply as well as I should have.


You did tests? Where? Can I see them? Can we see them? Or are they Magical and just disappeared?


*WARNING!!! SPAZ SESSION ABOUT TO START*


Vista is not bloatware; the security features are more advanced on Vista, which I believe (without any magical tests) is one reason people feel it is slower. I, on the other hand, feel like it is faster than XP for everyday run-of-the-mill stuff. I am willing to sacrifice ~10 fps in a game for an OS that gives me greater security while operating smoothly enough on modern hardware. Maybe it’s because once you hit 60fps in any game it really doesn’t matter too much, so who cares if you are getting 136 frames in Vista vs. 150 frames in XP (these numbers are made up and have no tie to anything except making my point... that I can make up magical benchmarks :kaola: , lol... sorry. Don't take this personally.). You won’t see the difference.

I would think that the reason people have trouble editing Windows is because it is closed source. But maybe that’s just me...

Maybe you should also look at the way it takes care of its resources. That may give a hint why older software may run slower on it as well. Also, for Vista to take 4GB of RAM you are talking about the 64-bit version, because everyone knows (thus common knowledge) that any consumer 32-bit OS tops out at 4GB. That will include cache and RAM or other forms of memory. With Vista 32, you really only need 2GB of RAM, which happens to be XP's sweet spot as well. My testing is real use, not magical whatever-you-call-it. For Vista 64, it is expected and understood that it will take more RAM. But if you want to use the poorly supported XP 64-bit, be my guest.

Another point to make is the fact that RAM is historically cheap! Why complain about it when you can get 4GB of RAM for under $100? If you are in the computer world, you will be spending some major money on your system if you want to stay modern. If you don’t want to, then don't complain about it. Also, do you use Vista? If not, where do you have room to talk? If you don't like it because it’s Vista, maybe you should try Windows Mojave.


OK, I just spazed out.... Sorry about that. I will just not agree with you that Vista is a bad OS because of the amount of RAM it needs.

Oh, how much RAM was required for (MS)DOS, Windows 3.#, Windows 95, Windows 98/SE, Windows ME, Windows 2000, Windows XP, and Windows Vista? I have a feeling that you will see a trend that each OS needed more RAM than its predecessor.
 
spaztic7 writes:
> You did tests? Where? Can I see them? Can we see them? Or are they
> Magical and just disappeared?

They were done on my Dell Precision 650 with dual XEON 2.6GHz, 2GB
RAM, X1950 Pro, when moving from 2000 to XP last year. The 3DMark06
3D tests (not the CPU test) were around 28% faster with XP. I don't
have the other results to hand now.

And please, don't accuse me of lying about something like this. Learn
who you are accusing before making comments like that. 😉


> *WARNING!!! SPAZ SESSION ABOUT TO START*

😀


> Vista is not bloatware, the security features are more advanced on
> Vista which I believe (without any magical tests) is one reason
> people feel it is slower. ...

I just don't buy that some kind of security feature should
significantly slow down an OS. What is your evidence that the security
features are causing a slowdown? How exactly? Processing IP data is
very limited by the connection speed, which should hardly tickle a
modern CPU.


> ... I, on the other hand, feel like it is
> faster than XP for everyday run of the mill stuff. ...

Mind you, one can gain good speedups on any OS just with better parts,
eg. 15K SCSI instead of IDE/SATA.


> while operating smooth enough on modern hardware. Maybe its because
> once you hit 60fps in any game it really doesn't matter to much, so

Tell that to the Crysis & Clear Sky fans. 😀


> who cares if you are getting 136 frames in Vista vs. 150 frames in XP

Such a slowdown is still indicative of poor coding IMO.


> (these numbers are made up and have no tie to anything but to make my
> point.. That I can make up magical benchmarks :kaola: , lol... sorry.

Yep, I know, s'ok. :)


> You won't see the difference.

For that example, yes. Point is, the OS *is* bloating.


> Maybe you should also look at the way it takes care of its resources.

Yeah, badly. 😉


> Another point to make is the fact that RAM is historically cheap! Why
> complain about it when you can get 4gb of RAM for under $100. If you

That's an even worse argument. I've personally seen in my admin days
how this sort of issue makes for very lazy coding. Researchers used
to a particular type of SGI system architecture (VW320 PIII/500)
ended up with 3D models running 100X slower on a newer in-theory
faster modern PC (2.4GHz P4, GF4 Ti4600) because their models were
constructed without sensible texture sizes or level of detail control
(such issues don't affect performance on the VW320). Data expands to
fit the space available. If MS programmers expect users to be working
on systems with more RAM, then they absolutely will allow their code to
bloat because the resources will be there. This is unnecessarily
wasteful in so many ways.


> don't complain about it. Also, do you use Vista? If not, where do you

I tried it (prerelease tester) prior to launch, thought it was awful.


> OK, I just spazed out.... Sorry about that. I will just not agree
> with you that Vista is a bad OS because of the amount of RAM it
> needs.

That's just an example. I won't say it's "bad" per se, but IMO it's
a step back from XP.


> Oh, how much RAM was required for (MS)DOS, Windows 3.#, Windows 95,
> Windows 98/SE, Windows ME, Windows 2000, Windows XP, and Windows
> Vista. I have a feeling that you will see a trend that each OS needed
> more RAM than its predecessor.

Not entirely surprising (SGI's IRIX is the same) but the degree to
which it's jumped from XP to Vista is ridiculous (50% higher minimum
recommended config).

Cheers! :)

Ian.

 


It's cool, I try not to accuse, but we still must take your word for it. Without a tangible copy of them, who is to say you didn't just make it up? This isn't a thing I say just to make you look bad, but an opportunity for you to prove your credibility. Wow... I should work for the government! :pfff:



Yes. I agree. Let's just hope with Windows 7 we will see the speed increases.



This is true again. And I speak from the use of my Computer. See my info for that.



They do. Crysis runs like a dream on my machine. The 4870x2 helps a lot at 1920x1080 with 4xAA. No chop at all!



I agree. But again, we are talking about Microsoft.



You visually won’t see a difference. There is little difference between 30fps and 60fps; all you see is smoothness. Above 60fps, you won’t see it. So if you can't see it, is it an issue? Please understand I am talking about my case, with my computer. I am not talking about general users at this point.



I think it does it quite smartly. Have the OS control the memory and allocate who gets what. That eliminates any chance of two programs overwriting each other. Please see The IT Crowd episode 1 for Moss's explanation of invalid memory.



I'm confused.... Honestly... Can you explain this a different way?



You can't say it's bad now, with a retail copy, when you used a beta. Many changes have happened.



Lol... ok. :lol:



I think from 2000 to XP there was a great jump in RAM needed. 2000's sweet spot was around 256MB-512MB. XP's is 2GB. If we use 2000's sweet spot of 512MB and compare it to XP's sweet spot of 2GB, XP has a 400% increase (check my math).


I am glad that you didn't take me too seriously. I get into issues because I know I come off really aggressive. So, thanks.
 
Sorry for the delay - ended up dealing with numerous phone calls...


> Its cool, I try not to accuse, but We still must take your word for
> it. Without a tangible copy of them, who is to say you didn't just
> make it up. ...

My word is good. Here is my main site btw:

http://www.sgidepot.co.uk/sgi.html

I've built up a particular reputation for my advice on matters SGI,
having helped hundreds of companies & individuals over the years
(Dreamworks, ILM, Hyundai, US Navy & Airforce, Univ. of Alaska, etc.)
I don't work for SGI precisely because the advice I give out is too
honest. 😀 There's a little bit of me in Star Wars II... (ILM uses
my site for staff training, or used to; doubt they have many SGIs
these days though).

I have over 200 SGI systems of my own btw.


> ... This isn't a thing I say just to make you look bad, but
> an opportunity for you to prove your credibility. ...

If you want to know what people think of me, just search for my name
on: forums.nekochan.net (the main SGI forum).


> ... Wow... I should
> work for the government!

Eek, that's a road to ruin these days. ;D


> Yes. I agree. Lets just hope with Windows 7 we will see the speed increases.

Ironically I do think Win7 will be an improvement, mainly because
otherwise it would mean they'd learned nothing from the Vista
experience and that seems unlikely. Ah well, time will tell I guess.


> They do. Crysis runs like a dream on my machine. The 4870x2 helps a
> lot at 1920x1080 with 4xAA. No chop at all!

4870x2 eh? Nice! 8) I was talking about 2560 x 1600 though, with
everything maxed. Here's the Stalker ref:

http://www.guru3d.com/article/stalker-clear-sky-graphics-card--vga-performance-roundup/5

I guess if it had perfect scaling the 4870x2 would give about 30fps at
this level. My card would manage about 5 or 6. 😀 Funny how fast
things change...

Hmm, rather strange that tomshardware only tests up to 1920 x 1200.
So what do you get with FRAPS when running Crysis at 1920x1200, 4xAA,
8xAF, Very High Quality? tomshardware's numbers say a 4870CF (which
should be much the same as a 4870x2) gives 14.5 fps with these settings:

http://www.tomshardware.co.uk/charts/gaming-graphics-charts-q3-2008/Crysis-v1-21,758.html

Hmm, I had the distinct impression from reading numerous forum sites
that the current rage for those with top-end cards like your 4870x2
is playing at 2560 x 1600. What do you run it at? I run my 8800GT at
2048 x 1536, just a dribble inbetween. :) But I don't use AA in
Oblivion (the high res combined with 16X AF and max detail settings
works better and is faster) and of course AA in Stalker isn't an
issue (again I just run it at 2K with 16X AF and all detail settings
maxed). Not installed CoD4 yet, trying to avoid opening the box lest
it devour time I ought to be spending trying to earn a living. 😀

Mind you, one thing I've noticed about review sites: I do tend to get
better results than many reviews often show for various game fps
scores; do other people find this is the case aswell? Do you? Reading
recent reviews of the GTX260 Core 216 and other cards, my 8800GT
performs significantly better than many sites' own figures suggest it
should. Dunno, maybe the RAM/mbd combo is helping, but I had the same
experience with my old Asrock mbd and X1950 Pro AGP, getting better
numbers than review sites were seeing with much more expensive mbds
(mine was only $70) and PCIe versions of the X1950. Especially weird
given my CPU is only a 6000+, ie. 3DMark06 scores are distorted
downwards somewhat by lower CPU scores compared to Intel quad-cores
with the same gfx. Based on reviews of the 8800GT, I was delighted to
get a 3DMark06 of 11762:

http://www.sgidepot.co.uk/misc/mysystemsummary2.txt
http://service.futuremark.com/compare?3dm06=7303357

It's the SM2/SM3 scores that pleased me the most - the low CPU score
was no surprise of course.


> I agree. But again, we are talking about Microsoft.

Did you hear the tale from the former deputy MS CEO years ago? (from
his book; forget the guy's name offhand). This is back in the days of
developing Win 3.1. He came into a room where people were discussing
the woeful speed of the code that redraws onscreen GUI window panes.
He looked at the code and asked, "who wrote this sh*t??" Gates walked
out of the room. Someone said, "He did.", jerking a thumb at the
departed Gates. 😀


> There is a little between 30fps and 60fps.

You're kidding right? 😀 I notice a huge difference, but then I'm
very used to high quality displays having been involved with SGIs
and visual simulation stuff for so long. In such industries, there's
a saying:

"60Hz, 30 hurts".

😀


> ... Above 60fps, you won't see it. ...

I can, but yes, most can't. It varies enormously between people and
was one of the issues I studied for my dissertation (side effects
of playing Doom; ref Washington Post, LA Times, Seattle P.I.).
Check the back of the box for the Ultimate Doom combo edition - the
PR quote is mine. :)


> ... So if you can't see it, is it an issue?

Clearly not in terms of gaming for the majority. What I meant was, if
just the choice of OS is impacting on 3D speed in a significant way,
that *is* an issue. It can be more extreme than that though; I
remember years ago a Texaco employee saying they saw a 100% speedup
when switching from Windows to Digital UNIX on their Alpha system
(I think it was).

These days, many companies use the same consumer products for doing
proper work, not just games, and they benefit from every ounce of
extra speed they can get.


> Please see The IT Crowd episode 1 for Moss's explanation of
> invalid memory.

Linux isn't immune to poor coding of course though. When I tried
Slackware on my laptop, I was most surprised at the way it was
grabbing so much RAM when first booting. Kinda slow. This was quite a
while ago, perhaps it's better now. Really should try Gentoo sometime
I suppose.


> I'm confused.... Honestly... Can you explain this a different way?

Apologies, sometimes I'm too used to what I'm referring to, forget it
might not make sense. 😀

See:

http://en.wikipedia.org/wiki/SGI_Visual_Workstation

The VW320, released in 1999, used a unique architecture in which the
system only had _one_ pool of main memory for everything (main RAM,
video, textures, etc.), ie. a UMA design like the IRIX-based O2 (SGI
called it IVC for the VW320), but with much faster 3D speed and
higher memory bandwidth. This has major advantages for certain types
of task, in particular large-scale 2D imaging (very fast 2D fill
rates), uncompressed video work, VR (urban modeling, etc.),
volumetric medical/GIS, broadcast graphics - anything that involves
lots of texture and/or an interplay between texture and video. This
meant, for example, spectacular performance for apps like Shake, or
processing large 2D images, or modeling 3D objects with huge texture
sets. The system had a max RAM of 1GB, so in theory it could provide
over 800MB for textures in 3D work, ie. just limited by main RAM
size. A central ASIC called Cobalt (the real heart of the system)
handles all main 3D functions aswell as RAM access, but the main CPU
(single or dual PII/PIII, up to max dual-PIII/1GHz) does all geometry
and lighting calculations. I never bothered making a diagram of the
VW320, but my O2 page has a diagram which conveys exactly the same
idea (IVC works in the same way):

http://www.sgidepot.co.uk/o2-block-diag-2.gif

(MRE = Memory & Rendering Engine, DE = Display Engine, IOE = I/O Engine)

The basic gfx speed (textured fill rate) is fixed (around 430M
full-featured pix/sec), while geometry/lighting speed scales with CPU
power. With the best possible dual-1GHz config, it can outperform a
GF3, which for its time was astonishing, though few upgraded to that
degree when the product was current as the costs were too high (bad
marketing, overpriced reseller model). The system has no North Bridge
or South Bridge at all. It supports NT4, Win2K and Linux, has a
dedicated PCI64 bus just for the system disk, video I/O ports
included as standard, and various other cool ideas.

Just to emphasise: when a 3D app requests a texture, in a normal PC
this data must be copied from main RAM to the gfx card, thus limiting
texture upload speed to either PCI speed or the early AGP rates as
they were back then, which btw thanks to MS not doing proper coding
in NT4 was only the same as normal PCI speed when using NT, ie.
slow. On the VW320 though, no data needs to be copied, it's already
where it needs to be (main RAM _is_ video RAM), so just pass a
pointer and that's that. Max texture upload rate from RAM to Cobalt
was thus more like 3GB/sec, very fast indeed back then.
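Purely to illustrate that difference (toy C, not real driver code): on a
discrete card a texture "upload" is a copy across the bus, while on a UMA
box like the VW320 it is basically just handing the renderer a pointer.

#include <string.h>

typedef struct { const unsigned char *texels; size_t bytes; } texture_ref;

/* discrete gfx path: every upload is a copy over PCI/AGP into video RAM,
   so bus bandwidth caps the texture upload rate */
void upload_discrete(unsigned char *video_ram, const unsigned char *texels,
                     size_t bytes)
{
    memcpy(video_ram, texels, bytes);
}

/* UMA path (VW320/O2 style): the "upload" is just bookkeeping, because
   main RAM *is* the video RAM - pass a pointer, copy nothing */
texture_ref upload_uma(const unsigned char *texels, size_t bytes)
{
    texture_ref r = { texels, bytes };
    return r;
}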

Disadvantages are a much lower peak bandwidth between main CPU and
RAM, so it's not so good for tasks that are mainly CPU/RAM intensive,
such as number crunching, video encoding, animation rendering, etc.
Some other bad things were the use of proprietary memory which was
too expensive, and the use of 3.3V PCI which rather limited available
option cards.

The VW540 system used the same architecture, but used up to four XEON
PIII/800 CPUs instead and had a max RAM of 2GB (I have a quad-XEON
PIII/500 system with 1GB RAM, used for uncompressed video editing).
The 320 normally shipped with an IDE disk, while the 540 normally
shipped with U2W SCSI. The 540 was very popular with defense
companies btw, eg. the UK's Ministry of Defense used it for the
Tornado fighter programme. And when forces went into Bosnia, they
took with them thirteen VW320 systems (6 months before the system
officially launched) as they were by far the fastest systems
available for dealing with large 2D sat images. The 320 was the first
time I'd ever seen any PC load a 50MB 2D image in less than a second.

There were later VW systems that did not use the IVC design (230, 330
and 530), but they were just conventional PCs and were totally pointless
overpriced VIA chipset yawn-boxes.

Anyway, where I worked as the main admin back in 2000 to 2003
(www.nicve.salford.ac.uk), they were dealing with large models of
urban areas, large in the sense that the models typically had about
200MB of texture, but not really that many polys in the scene (low
tens of thousands tops). The VW320s used (a dozen of them) were
single PIII/500 or 600, couple of duals, most with 512MB RAM. Four
years after initial purchase, the more complex models researchers
were dealing with could be navigated at about 10fps. The choice was
to upgrade the CPUs/RAM (quadruple the GE speed, double the ability
to cope with lots of texture), or replace them entirely with modern
PCs. The latter path was much cheaper of course (SGI's prices were
crazy - remember what I said earlier about why I don't work for them! 😀)

So the dept. ordered half a dozen sets of parts/cases and built their
own new PCs, all GF4 Ti4600, P4/2.4, etc. On paper, waaaay faster than
the VW320s.

Trouble was, because it made no difference at all to performance, the
modelers had been using large composite textures in the models, ie.
16K x 16K pixels (50MB file), dozens of smaller textures in a single
image, sub-textures accessed during rendering simply by referring to
coordinates and width/height crop within the image. Since texture
data does not have to be copied to dedicated video RAM, there is no
speed hit at all for using this approach. Plus of course they were
luxuriating with full 32bit images every time, ie. no use of reduced
formats, masks, decals, colour maps and other techniques for reducing
texture usage. And there were no level of detail constructs - no need
when dealing with more texture doesn't affect performance.

The new PC gfx had 64MB RAM IIRC, leaving probably not more than 35MB
for textures. Apart from not being able to hold the full data set
anyway, the use of large composite textures meant insanely intense
memory thrashing, ie. constantly reloading multiple 50MB images (just
one of which couldn't fit onto the card) again and again for every
frame. So instead of the expected minimum 5X speedup, the initial
performance was just 1 frame per _minute_. 😀 Hmm, that's actually
600X slower...

So, they had to redo the models, stop using composite textures, be
more sensible about image quality/techniques, and build in new level
of detail controls.
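(For anyone wondering what a basic level-of-detail control even is,
something like this - hypothetical C, made-up names - is the core of it:
pick a coarser texture/mesh the further an object is from the camera,
instead of always binding the full-resolution one.)

typedef struct { float max_distance; int texture_id; } lod_level;

/* levels[] must be sorted from nearest/finest to farthest/coarsest */
int select_lod(const lod_level *levels, int n_levels, float distance)
{
    for (int i = 0; i < n_levels; i++)
        if (distance <= levels[i].max_distance)
            return levels[i].texture_id;       /* finest level that still qualifies */
    return levels[n_levels - 1].texture_id;    /* beyond the last cutoff: coarsest */
}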

My point was that the use of a different kind of system had made the
designers lazy, despite warnings from me. Likewise, ever more RAM and
CPU speed available as a baseline in PCs today makes OS designers
lazy. Why code efficiently when the system will be quicker and so be
able to cope with the increased overhead? Trouble is, this hammers
users of existing systems who are forced to upgrade due to MS
policies on OS support, etc.


> You can't say its bad now with a retail copy when you used a beta.
> Many changes have happened.

In many ways yes I'm sure, but it's still the case that the
recommended RAM config for a Vista system is 50% more than for XP.


> I think from 2000 to XP there was a great jump of RAM needed. 2000

Actually I'd say 2000 was kinda bad anyway. 😀 That was one policy of
mine that found favour when I was admin at NICVE: even the
secretary's basic PC had 1GB RAM so it could cope with the RAM hungry
MS office apps.


> sweet spot was around 256MB-512MB. XP's is 2GB. If we use 2000 sweet

For gaming, yes, but 1GB is plenty for XP for most tasks. My XP
laptop has 1GB and it never has an issue with RAM resources.


> spot with 512MB and compare it to XP's sweet spot of 2GB, XP has a
> 400% increase (check my math).

#include <humour.h>

Hate to point this out but 2GB over 512MB is a 300% increase, not
400%. ;D (100% more = 2x more, 200% = 3X more, 300% = 4x more)
Percentages are a PITA. :}
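Spelled out with the same numbers, just to make the two readings explicit:

increase   = (2048 - 512) / 512 x 100% = 300%
proportion =  2048 / 512 x 100%        = 400% of the original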


> I am glade that you didn't take me to serious. I get into issues
> because I know I come off really aggressive. So, thanks.

Hehe, I had the 1st ever web site on the N64; believe me, your posts
are definitely not aggressive. *grin* You should see what gets said
when 14 year olds argue about console A vs. console B... 😀😀

Ian.

 


I work in IT... starting my career. Have about a year and two months experience (I graduate college at the end of Feb 2009). I understand how things come up.



My hope for W7 is speed, more security, better stability, more compatibility, more support for 64-bit, starting support for 128-bit, fewer "DRM"-like things (let us do what we want), more Google incorporation (it's a dream, it'll die as one), and a sleeker, leaner GUI. Although W7 is built off Vista (all the core kernels are the same).



I agree. I wish THG did higher testing and incorporated the 4870x2 in more tests. We can't have everything. I just moved down from a 24" HP 2408 monitor running at 1920x1200 to a Sony Bravia 32" at 1920x1080, and I feel that with the slight drop in resolution it gives the 4870x2 a slight upper hand. It feels like it is just low enough not to suffer from stutter/"slide show" in Crysis. I noticed that when I started using the 4870x2 it ran Crysis a lot better at 1080 vs. 1200. I guess the 120 fewer lines make all the difference. If I can find the time to figure out the Crysis benchmark thing, I will run that and let you know what I get.

About the 2x4870: that is not the same as the 4870x2. ATi is making sure that the 4870x2 stays just ahead of 2x4870. They made the statement that people who buy the 4870x2 are paying more for their setup and they want to give them their money's worth (not in those words... but that is it summed up). You will normally see slightly higher fps on the 4870x2 vs. 2x4870. I guess they just refine the drivers a little better in favor of the 4870x2.



I am running my 4870x2 at 1920x1080 now. I try to play it conservative and not go balls to the wall with a video card resolution. I feel that from that resolution up you want a higher-powered video card. But I will run at the lower end so I have the option of running AA, AF or anything else extra and not have to worry about getting any massive or noticeable frame drop. In WiC I averaged about 38fps at 1920x1200 with everything on high, and I mean all visual settings as high as possible. So with the slight drop in resolution I expect to see a slight boost in fps (I just got the TV on Tuesday, had class Wednesday night, so I haven’t had any real time to play games on it yet). In Crysis I see too much tearing without V-sync on, but with it on it plays smooth as butter; I am even running with motion blur on! GRID is just amazing. At 1200 or 1080 it is just a beautiful game, everything on high as well. CoD4 ran great with my old 8800GTX at 1920x1200, but I haven’t played it since I got my 4870x2. Working full time, school full time, family and G/F on the side leave no time for me 🙁.

I agree with you so much about the difference in the benchmarks I see. For synthetic marks, I find myself scratching my head at how they got such high scores in some vs. my score. But then for the in-game ones, I feel like their scores are at least 10% behind mine. What gives with that? I think it must have to do with the slight changes we have in our systems, the motherboard. That can really make or break some benchmarks.



Lol... Well maybe with Gates leaving there will be an increase in better written code. Maybe we will see the speed increase with W7. Speculation galore!



Outstanding! At 30 frames, things run smooth but slow. At 60 things run smoother and quicker. I am a believer that you should run 60fps at 60Hz. No more, no less. Why, what frequency does the electricity run at? 60 Hz. I am in favor of ray tracing because of their cap and it just makes sense to me.



Nice, congrats on that!



I don’t know if you should compare the speed of an older Windows OS processor (CISC) to a UNIX processor (RISK). A few years back, RISK was the way to go for fast speeds, but coding stability was at risk. CISC is now the mainstream and has higher stability than RISK. I am sure you know what I mean and understand why UNIX may have been faster because of the RISK processor (depending on the year) vs. Windows running on the CISC style. This was in the days of the P3 and P4; must I say more?



Then again 256MB was enough for everyday things in 2000.



??? My mind hurts.... I looked at it as: 100% would be 512MB, 200% would be 1024MB (double the amount), 1536MB would be triple the original amount, being 300%, then 2048 would be 4 times the original amount, being 400%. But I can also see it as 1024MB would be a 100% increase over 512MB... yeah... too early for me to be doing math.

But that is still higher than XP to Vista in RAM increase (assuming you were in XP’s sweet spot).



We all have to go through those pains. I let people argue about who is better than what for the console market. Then I mop up with how the PC dominates consoles... at least in my mind.
 
spaztic7 writes:
> I work in IT... starting my career. Have about a year and two
> months experience (I graduate college at the end of Feb 2009). I
> understand how things come up.

Good luck with the graduation!!

I ended up being a sysadmin for 10 years in academia. Has its ups &
downs like any other place I guess, but eventually I quit as the
money wastage going on was driving me crazy.

Hope I didn't bore you with the VW320 explanation btw. If nothing
else, it was an interesting system and many people still use it.


> I agree. I wish THG did higher testing and incorporated the 4870x2 in
> more tests. We can't have everything. I just moved down from a 24" HP

toms has been getting a lot of flak lately for under-par articles.
Certainly some of their testing stuff is a bit wonky, eg. they had
reviews of the 8800GT saying it was a much better buy than most of the
9xxx range and older 8800 cards, yet now they often don't include it
when doing reviews of new products.

As to why they don't test at 2.5K, beats the heck outa me. I read
quite a few forums I don't post to and it's a common theme, hardcore
gamers playing at 2560 now. Given the target market for the most
expensive cards (people who use 3 or 4 GTX280s or 4870[x2]s in
SLI/CF), one would think testing at the max res would be a given.


> > About the 2x4870, that is not the same as 4870x2. ATi is making sure
> > that the 4870x2 is staying just ahead of 2x4870. They made the

I know, but many reviews show not that much difference between the 2
configs atm, but I expect this will change with driver updates.


> I am running my 4870x2 at 1920x1080, now. I try to play conservative
> and not go balls to the walls with a video card resolution. ...

I normally don't, but found that the visual quality for Oblivion
seemed to be better with a high res and no AA, compared to a lower res
with 4X AA, and the performance was certainly a lot better with the
former approach. Same for Stalker, though for different reasons re
the 3D engine it uses.

I expect when I install CoD4 though I will be running it at a lower
res, 1600x1200 or somesuch, but with AA activated. I'll see how it
goes.


> ... I feel
> that from that resolution up you want a higher powered video card.

Certainly for the latest games, definitely.


> Everything is on high as well. CoD4 ran great with my old 8800GTX at
> 1920x1200 but I haven't played it since I got my 4870x2. ...

I expect you'd be able to up it some more with the new card.

With Oblivion/Stalker, I want to look into how to increase the visual
complexity beyond the normal maximums, eg. move grass fade-in distance
further away, that sort of thing.


> ... Working full
> time, school full time, family and G/F on the side make no time for
> me 🙁.

Yikes. 😀 It's the lot of the modern man. The kind of schedule where
all of a sudden one thinks, hey, how can it possibly be October
already??


> I agree with you so much about the difference in the benchmarks I
> see. For synthetic marks, I find myself scratching my head of how
> they got such high scores in some vs. my score. ...

I tend to see the reverse, somewhat lower scores on review sites, but
maybe this is because they don't test with versions of each card that
are particularly good, eg. testing with a default clocked 8800GT with
a 600MHz core instead of one of the factory oc'd versions that has a
700MHz core (mine runs oc'd at 790/1790/980, though only for Stalker;
Oblivion doesn't like oc'd systems).


> ... But then for the in
> game, I feel like their scores are at least 10% behind my score. What
> gives with that? ...

Yeah, ditto.


> ... I think it must have to be with the slight changes
> we have in our systems, the motherboard. That can really make or
> break some benchmarks.

Kinda peculiar when most reviewers tend to use quite expensive mbds. I
was truly stunned by how well the cheap mbd I bought performed.
Indeed, not sure about now but it used to be that my previous system
with the X1950 Pro AGP and 6000+ was 6th in the 3DMark06 table for
a system with this combo of CPU and gfx. The only ones faster were
those using newer factory oc'd cards.


> Outstanding! At 30 frames, things run smooth but slow. At 60 things
> run smoother and quicker. ...

It makes a huge difference for visual simulation applications. SGI
built tech into its IR gfx so that the frame rate never drops below
60Hz, ie. Dynamic Resolution. See this 1996 document for details
(section 7.12.6):

http://www.sgidepot.co.uk/ir_techreport.html
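The basic idea behind that kind of Dynamic Resolution, sketched in C (my
own simplification, not SGI's actual implementation): if the last frame
blew the 60Hz budget, shrink the render area for the next one, then creep
back up when there's headroom.

#include <stdio.h>

/* scale is the fraction of full resolution we render at (1.0 = full res) */
static float adjust_render_scale(float scale, float frame_ms)
{
    const float budget_ms = 1000.0f / 60.0f;    /* 16.7 ms per frame at 60Hz */

    if (frame_ms > budget_ms)
        scale *= budget_ms / frame_ms;          /* shrink to get back under budget */
    else if (scale < 1.0f)
        scale *= 1.05f;                         /* ease back toward full resolution */

    if (scale > 1.0f)  scale = 1.0f;
    if (scale < 0.25f) scale = 0.25f;           /* never drop below quarter size */
    return scale;
}

int main(void)
{
    float times[] = { 15.0f, 22.0f, 30.0f, 16.0f, 14.0f };  /* fake frame times, ms */
    float scale = 1.0f;

    for (int i = 0; i < 5; i++) {
        scale = adjust_render_scale(scale, times[i]);
        printf("frame %d: %.1f ms -> render scale %.2f\n", i, times[i], scale);
    }
    return 0;
}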


> ... Why, what frequency does the electricity
> run at? 60 Hz. ...

50Hz here. 😉 Screen refresh isn't related to power supply frequency.


> I don't know if you should compare the speed of an older Windows OS
> processor (CISC) to a UNIX processor (RISK). ...

(it's RISC btw, not RISK)

I assume you know that modern x86 CPUs are RISC at the core of the
chip? CISC instructions are converted into simpler instructions
first.

In reality, the distinction between CISC & RISC vanished long ago.
RISC chips like SGI's MIPS line became pretty complex entities, while
Intel adopted ever more techniques from the RISC world, eventually
taking on quite a few ex-SGI CPU design people.

But no, the speed increase the Texaco guys saw was said to be down
to the efficiency of the OS, not the CPU, though sometimes the huge
gulf between the two design lines definitely made a difference. Many
years ago, when NT was still supported on MIPS, SGI used to run an
advert offering an R4400 250MHz CPU upgrade (with 2MB L2, huge for
its time) for NT users which would speed up tasks by 5X compared to
typical Intel CPUs used at the time. Cost rather a lot though. 😀


> ... A few years back, RISK
> was the way to go for fast speeds, but coding stability was at risk.

I know of no evidence for this at all. What do you mean by
stability? If there were issues with some designs, it was other
things like being able to ramp up the clock (Alpha was better for
this, just as IBM's POWER is today), or running out of op codes re
the MIPS line. Bigger problem was just the higher cost.


> sure you know what I mean and understand why UNIX many have been
> faster because of the RISK processor (depending on the year) vs.
> Windows running on the CISC style. ...

For a long time, the RISC-style UNIX vendor CPUs were oodles faster
just because they were better designed, ie. even when using the same
OS on both systems. But then the UNIX products tended to aim at
scientific/engineering markets where strong 64bit fp speed was
important (a traditional weakness of x86).

Intel caught up by exploiting their strong financial position (owning
their own chip plants, etc.), absorbing talent from other companies
(DEC, SGI, etc.), going back to the PIII core for the Core2 design
(P4 was bad) and coming up with a design which made it very easy to
clock up to ludicrous levels. A smart move.


> ??? My mind hurts.... I looked at is as 100% would be 512MB, 200%

Percentages are fun. 😀

When one talks about a percentage _increase_, then the same amount
results from a 0% increase. When talking about a percentage
proportion of the original value, then the same amount results from
100% of the original amount. Easy to get tripped up on this one.


> ... to early for me to be doing math.

😀😀


> We all have to go through those pains. I let people argue about who
> is better than what for the console market. Then I mop up with how
> the PC dominates consoles... at least in my mind.

It never ends. Sensible people just use both, or one or the other
depending on what games one likes. But for the high price, I would
have bought a PS3 (will do eventually if only for the Blu-ray
ability), though with hindsight at least for Oblivion/Stalker I'm
glad I didn't because the mod scene for PC games is way better. Plus
of course, messing around with PC overclocking is fun & interesting.
If PC gaming didn't exist, I can't imagine what all the young tech
fans would get up to in their spare time, the same kind of people who
in the 1980s were so much into the early 8bit and 16bit systems just
as I was. Consoles have that 'just turn on and play' factor, though
certainly today they've become so complex that the reliability factor
has slackened off somewhat, eg. Oblivion on PS3 has far too many
bugs. In truth we need both formats for a healthy market.

Ian.


 
 
spaztic7 writes:
> I am finding myself going to hardware canucks and anandtech more and
> more now.

One thing that particularly annoys me about tomshardware now is how
overloaded the pages are: long waits for advert images to download,
and page scripts that hang or lock up the browser until they either
complete or the browser reports the script is stalled and suggests I
stop it. Other sites are much faster to navigate.

I discovered anandtech rather late in the day, but yes, I check
anandtech now before toms. I'm not very familiar with canucks though;
how would you say it compares to anandtech? Had a quick read of a
number of reviews & articles, looks quite good, and speedy too!
Others I check are 2cpu.com and hardforum.com.


> CoD4 should look great, but I wont see higher frames because I run
> with V-Sync. My 8800GTX was overkill for that game. Your card will do
> great.

Certainly looks that way! I'll probably run it around 1920x1200 with
max detail.

Btw, I notice that as is so often the case, CF or dual-GPU cards
scale very well for some games but quite poorly for others. The
4870x2 doesn't do much over a single 4870 for Crysis but it really
helps a lot for CoD4, totally spanking a single GTX280 and more
than twice as fast as a 4870 512MB:

http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/10103-evga-geforce-gtx-260-core-216-216-sp-superclocked-edition-review-16.html

I wonder what it is about Crysis that means the dual-GPU setup doesn't
help much? Or maybe it's just a driver issue. Personally, I strongly
suspect Crysis isn't coded very well, but without detailed performance
metrics of the loading within the GPU (which modern cards are not able
to supply) there's no way of knowing.

I just wish someone would throw a bucket of cold water over the idiot
at NVIDIA who's responsible for choosing their product naming
scheme. The hideous range of cards available, and the corresponding
ever larger range of products from retail makers with yet more names,
is totally crazy, especially the 9xxx series, eg. the 9800GT is barely
any different from (in some cases slightly slower than) an 8800GT:

http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/9791-geforce-9800-gt-roundup-evga-asus-gigabyte-palit-10.html
http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/9791-geforce-9800-gt-roundup-evga-asus-gigabyte-palit-16.html

Core 216 indeed... next we'll have to start asking forum sites if
they can double the max characters limit for typing in sigs with our
system descriptions. 😀


> Have you tried QTP (Qarl's Texture Pack) 3 for Oblivion? I makes all

I did, but to be honest I wasn't impressed; some textures looked a bit
weird, especially ground and rock textures, but maybe it was an issue
with the X1950. Might try the pack again as I still have the archive,
though I ought to check if there's a newer version I suppose.


> comparing standard def to high def TV. Also look for the real sky mod
> and there are some view distance mods as well.

I've been using the Natural Environments Mod which changes quite a few
things.


> That's the thing for me. I buy the high end mobos and am disappointed
> in them every time. I have the Asus Rampage Formula and I have to
> say, its one of the hardest mobo to work with. That and it doesn't OC
> that well either... with my experience with it.

I sometimes think the high-end market for these boards is more about
bragging rights than real technical differences or advantages, ie. the
boards probably don't cost that much more to make than normal boards,
but the people who buy them are willing to pay a high price for
something that's supposedly "better". One thing that's always bemused
me is why such expensive boards don't even have a single PCI-X slot.
Yes, PCIe is the newer standard, but if such boards really are aimed
at 'enthusiasts', aren't they the very people who want to experiment
with things like U320 SCSI, Fibre Channel, etc. for which ordinary
PCI is useless? Trouble with PCIe is, finding option cards is very
difficult, and they're expensive new. I managed to get mine 2nd-hand:
an LSI (Dell) PERC 4e/DC PCIe U320 RAID card and an LSI20320IE PCIe
U320 single-channel card.

So I ended up buying a 'pro' board instead, which was actually quite
a bit cheaper than I was expecting ($190 equivalent):

http://uk.asus.com/products.aspx?modelmenu=2&model=1207&l1=3&l2=82&l3=313&l4=0

And quite surprisingly when compared to many more expensive
enthusiast boards, both PCIe slots operate at 16x when used for SLI,
though not relevant to me as I'm using one of the slots for the
LSI 1-channel SCSI card.

The overclocking features are also rather impressive.


> operations. But due to the use of less code, it left room for more
> programming errors giving them a higher change of having stability
> problems. ...

Nope, that's wrong.


> The real question is was I taught right in high school about this?

Alas no. :)


> Was my Vo-Tech CIS teach correct ...

Nope.

It's a bit like writing a large application. Keep core functions
simple. Do a small number of things very well. Build up larger
functions on top of lesser ones which are reliable. That's kinda
like RISC. A more elegant instruction set makes for a leaner design.
It's ironic that modern x86 CPUs now use a RISC core, converting
the old instructions before final execution. In a way, that's a bit
like a hardware equivalent of higher level language commands (like
C or Fortran) being converted into assembler (weak analogy, but you
get what I mean I hope).
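
A trivial sketch of what I mean by building up from small, reliable pieces
(made-up functions, just to show the shape of the thing):

// Made-up example of the 'RISC-like' layering idea: tiny primitives that do
// one thing, verified once, then composed into higher-level routines.

#include <cctype>
#include <string>

// --- small primitives, each doing exactly one job ---
bool isWordChar(char c) { return std::isalnum(static_cast<unsigned char>(c)) != 0; }
char lowerChar(char c)  { return static_cast<char>(std::tolower(static_cast<unsigned char>(c))); }

// --- mid-level routine built purely from the primitives ---
std::string normaliseWord(const std::string& raw)
{
    std::string out;
    for (char c : raw)
        if (isWordChar(c))
            out.push_back(lowerChar(c));
    return out;
}

// --- higher-level routine built on the one below it ---
bool sameWord(const std::string& a, const std::string& b)
{
    return normaliseWord(a) == normaliseWord(b);
}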

Although I always enjoyed coding in C, C++, Fortran, Ada, COBOL and
other high level languages years ago, I always gained the most
satisfaction when coding in assembler. It was very cool to see how
simple, efficient routines could build up into higher functions that
resulted in code that read like a high level language but was very
fast and efficient. My favourite project was an entire word
processing package written in 68000 assembler (Atari ST platform),
the largest 2nd-year project the Computer Science dept.
(www.macs.hw.ac.uk) had ever received in terms of lines of code:

http://www.sgidepot.co.uk/misc/sorcbase.s

Learned a lot about how a good instruction set enables efficient
applications. Following on from there to MIPS CPUs was a natural
transition when I ended up becoming more interested in CPU/gfx
architectures than programming.


> For me, there is only one console. The Nintendo Wii. It
> revolutionized the way to play games. I due use Xbox hardware (the
> controller) on my PC. It works great for the racing games!

Alas, the Wii just doesn't have the kind of games I like. The whole
interactive family stuff does nothing for me at all. I was given a
GameCube and have always had a soft-spot for Zelda (good for playing
with gf) but would not have bought one of my own accord, though Geist
is a good game. I can certainly understand the attraction of some of
the Wii titles, playing with others (lots of stuff for when you've
just come back from the pub or club with a bunch of pals, not
entirely sober perhaps, stick on Guitar Hero or Mariokart and have a
laugh), but some of the games... well, to me it looks as if Nintendo
isn't aiming at the more mature market at all now. Bit of a shame
given the N64 had cool games like Goldeneye, Body Harvest, Doom 64,
Duke Nukem, Mission Impossible, Perfect Dark, Quake/2, Hexen, Resident
Evil, the Turok titles, and so on. Very little like these for the Wii
or the GameCube.

Ian.

 
Hey, spaztic7! I was sorting through some papers and found the documents
I mentioned with some of the 2K vs. XP tests! 8) Here are the
numbers (sorry if this doesn't format very well; copy and paste it
elsewhere if so, into a plain text web page or something)...

System: Dell Precision 650, dual-XEON 2.66GHz, 2GB DDR/266, X1950Pro
AGP 8X 512MB, U320 SCSI. Tested with Win2K SP4 and XP Pro SP2.


3DMark2003, default settings, 2K vs. XP:

             Win2K     WinXP

Overall:     11671     13969
GT1:         183.1     243.8
GT2:         102.2     117.9
GT3:         76.2      87.7
GT4:         76.7      95.7
CPU:         207       587
CPU1:        28.2      60.6
CPU2:        2.9       11.3
Fill/S:      3876.7    3893.3
Fill/M:      6722.3    6777.6
Vertex:      39.4      56.5
Pixel 2.0:   121.3     177.2
Ragtroll:    50.7      63.0


Notice the raw fill rates are similar, yet the separate tests show
huge differences, especially the CPU results.

Here are the numbers for the system's original Quadro 900XGL card
(didn't run all the tests for reasons that escape me now, though
GT4 wasn't supported of course):

             Win2K     WinXP

Overall:     1577      2004
GT1:         105.0     123.6
GT2:         10.3      13.7
GT3:         9.1       12.7
GT4:         -         -
CPU:         258       584
CPU1:        34.2      55.9
CPU2:        3.7       11.9
Fill/S:      -         -
Fill/M:      -         -
Vertex:      5.7       9.0
Pixel 2.0:   -         -
Ragtroll:    6.0       8.0


The PCMark05 sheets weren't in the same pile, but they showed the same
kind of thing.

Ian.

 
Hey Ian,

Thanks for that update. That is pretty interesting information you have there. You were able to debunk the claims that 2000 is faster than XP, so that's good.

That does make sense, as there were many optimizations gained in XP over 2000. But my only concern is with 2000 being designed to run dual processors. Was 2000 really optimized for that type of processing? Could that OS really handle that many threads?
 
Re threaded apps on 2K: well, yes and no. 2K could run more than one thread, but
from all the info I was able to gather at the time, it was rather poor at managing
the process, especially if HT was active. I found HT would slow things down by
around 12%, in some cases 50%. For example, imagine a dual-XEON with HT: 2K sees
4 cores even though there are really only 2; now run 2 threads; the optimum is to
run one thread on each physical CPU, but in practice 2K throws both threads
at the same CPU. Also, the way cache is handled under 2K can be ruined by the
use of HT. Not always the case however.

Some tests showed threading scaling quite well, others poorly. It was inconsistent.
I think XP improved on how threading was managed, or must have done given
the better results.
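
For what it's worth, the usual workaround was to pin threads to specific
logical processors yourself rather than trust the scheduler. A rough sketch
using the Win32 affinity call; the masks assume a dual-CPU HT box where logical
CPUs 0/1 sit on the first physical chip and 2/3 on the second, so adjust for
the real layout:

// Rough sketch: pin two worker threads to different physical CPUs so the
// scheduler can't dump them both onto the same chip. Mask values assume a
// dual-CPU, HT-enabled box (logical CPUs 0,1 on the first physical CPU,
// 2,3 on the second) -- purely illustrative, adjust for the real topology.

#include <windows.h>
#include <thread>

// stand-in for the real per-thread workload
void heavyWork()
{
    volatile double x = 0.0;
    for (int i = 0; i < 100000000; ++i) x += i * 0.5;
}

void runPinned(DWORD_PTR affinityMask)
{
    // Restrict the calling thread to the logical CPUs named in the mask.
    SetThreadAffinityMask(GetCurrentThread(), affinityMask);
    heavyWork();
}

int main()
{
    std::thread a(runPinned, DWORD_PTR(1) << 0);   // logical CPU 0 (first chip)
    std::thread b(runPinned, DWORD_PTR(1) << 2);   // logical CPU 2 (second chip)
    a.join();
    b.join();
}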

Add in the massive speedup due to faster RAM in my later system and real-world
performance shot up as much as 500% in some cases.

http://www.sgidepot.co.uk/misc/mysystemsummary2.txt

Strangely though, I was told by one experienced programmer (ex-MoD) that MS
removed some important parallel libs from 2K when moving to XP, a change he
was most annoyed about. Don't know the details though.

Cheers! :)

Ian.

 
Hmmm, both forms are non-OOP, and an uber-shader is just an overloaded function. A better solution would be to use an object, where each material is just a method that makes use of the other methods; plastic and metal shaders, for example, often just use reflection mapping. The added bonus is that it's easier to read and you can probably thread it better.

But Microsoft avoids OO abstraction because it would make their APIs' deprecations too obvious; it's why they prefer XML to adopting true object-based interfacing. And coders are generally not adept enough at writing object-oriented code, so they resort to writing impossible-to-read C functions. How can you tell a C programmer from one who is adept in OO? They make one object that's just a library of functions.
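
Something like this is what I mean, roughly sketched with made-up class names
(no real graphics API): the reflection-mapping helper lives in the base class
and plastic/metal are small subclasses, instead of one giant uber-shader
switching on a material flag.

// Rough sketch of the idea (hypothetical names, no real graphics API):
// shared behaviour lives in the base class, each material only adds what
// differs, instead of one big uber-shader branching on a material flag.

#include <memory>
#include <vector>

struct Colour { float r, g, b; };

class Shader {
public:
    virtual ~Shader() = default;
    virtual Colour shade(const Colour& base) const = 0;
protected:
    // common building block reused by several materials
    Colour reflectionMap(const Colour& base, float strength) const {
        return { base.r * (1.0f - strength) + strength,
                 base.g * (1.0f - strength) + strength,
                 base.b * (1.0f - strength) + strength };
    }
};

class PlasticShader : public Shader {
public:
    Colour shade(const Colour& base) const override {
        return reflectionMap(base, 0.2f);   // weak, glossy highlight
    }
};

class MetalShader : public Shader {
public:
    Colour shade(const Colour& base) const override {
        return reflectionMap(base, 0.8f);   // strong, mirror-like reflection
    }
};

int main() {
    std::vector<std::unique_ptr<Shader>> materials;
    materials.emplace_back(std::make_unique<PlasticShader>());
    materials.emplace_back(std::make_unique<MetalShader>());
    for (const auto& m : materials)
        (void)m->shade({0.4f, 0.1f, 0.1f});   // each material shades itself
    return 0;
}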

BTW, Microsoft is notorious for showing up at board meetings in the form of evangelists who attempt to pervert standards; it's likely that those who derailed OpenGL 3 were paid Microsoft evangelists. It's also how they screwed over SGI: SGI was in bed with Microsoft trying to make a graphics card for the PC, and Microsoft used the collaboration to delay SGI while it developed DirectX. It was just to buy time. Note that Netscape released the Netscape sources to prevent Microsoft from perverting the HTML standard. That's probably the reason OpenGL keeps getting developed: since it is an open standard, it won't die. Linux will never die for the same reason, and Firefox outperforms IE. Microsoft benefits from this anyhow, since they don't usually innovate; they just rip off others' innovation. Which is why open source exists: to spread good design, because otherwise Microsoft wouldn't have a clue and nothing would progress.
 