nForce4 Intel Edition is good!

Mephistopheles

Quite a few percentage points above the traditionally high-performing i925XE mobos. Not bad.

Add to that the fact that Intel will probably have the most accessible dual-core products on the market, and they've become a little more interesting than they have been over the past several months...

Link: THG on nForce4 IE: http://www.tomshardware.com/motherboard/20050406/index.html
 
I started a thread in the Mobo forum (http://forumz.tomshardware.com/hardware/modules.php?name=Forums&file=viewtopic&p=573348#573348). What do you think of what I said there?

 

P4Man

>Quite a few percentage points above the traditionally
>high-performing i925XE mobos.

Quite a few... wow... do the math: comparing apples to apples (so both using DDR2-533), it's barely 1% on most benches, and within the margin of error. Still, that is better than being worse :), good show from nVidia, but it's nothing like the boost that AMD got from nForce1.

 

Spitfire_x86

nForce2, not nForce1 ;)

 

P4Man

Tech Report (http://www.tech-report.com/reviews/2005q2/nforce4-sli-intel/index.x?pg=2) paints a slightly different picture. Let me quote their conclusion:

The nForce4 SLI Intel Edition's performance is *nearly* on par with Intel's 925XE, and that's no minor accomplishment for a first effort at a Pentium-compatible chipset. Generally, the nForce4's ability to run DDR2 memory at 667MHz offers little performance benefit with an 800MHz front-side bus, which is by far the most common bus speed for Pentium 4 processors these days. At that bus speed, the nForce4 is slightly slower than the Intel 925XE overall. However, with a 1066MHz bus and a P4 Extreme Edition processor, the nForce4 SLI Intel Edition is able to stake a claim as the fastest Pentium 4 chipset—at least until the Intel 945/955 chipsets arrive. (my emphasis)

Furthermore, it's interesting to see the P4 gaining quite a bit less from SLI, in some cases even losing performance. Funny if you think about how some gamers will soon pay a small fortune for a dual-core EE processor, with 2 high-end videocards, a 64-bit CPU and OS, and a RAID 0 stripe set, when each and every one of these features may well *reduce* their performance... and even a P4E with a single videocard, running 32-bit Windows and a single hard disk, may end up considerably faster.


 

slvr_phoenix

>it's interesting to see the P4 gaining quite a bit less from SLI
I assume you mean compared to an A64? That's really no surprise. Despite the popular misconception, DX and OGL require an awful lot of work from the CPU to set up data, and require an awful lot from the memory architecture to load textures. AMD's platform has a CPU that performs better *and* has a much lower latency memory architecture.

>Funny if you think about how some gamers will soon pay a small fortune for a dual-core EE processor, with 2 high-end videocards, a 64-bit CPU and OS, and a RAID 0 stripe set, when each and every one of these features may well *reduce* their performance.
For video games that don't support SLI and don't do a lot of disk access. Yes. It is funny. But then, people dropping that much money into a system aren't looking towards the past. They're looking towards the future, where these features will (eventually) make a significant difference. (It's still funny though.)

 

P4Man

>I assume you mean compared to an A64? That's really no
>surprise. Despite the popular misconception, DX and OGL
>require an awful lot of work from the CPU to set up data,
>and require an awful lot from the memory architecture to
>load textures. AMD's platform has a CPU that performs better
> and has a much lower latency memory architecture.

Nonsense. Yes, AMD has a better architecture for gaming, but in no way would this lead to any expectation that SLI would provide less benefit (or even a penalty) on the P4 platform than for the A64. Would you expect a new single videocard to produce a significantly smaller performance boost on P4 than on A64, perhaps? Would you expect the next ATI/nVidia card to perform a lot faster on AMD but the same or *slower* on Intel for those reasons? SLI is not that different from a faster GPU if you look at it from the CPU side, so saying you expected this because AMD is faster is horseshit.

I suspect the real reason has more to do with the I/O bandwidth/low latency of the PCIe / HT interconnect on AMD's architecture, whereas Intel's FSB could be a bottleneck. Either that, or an implementation problem in nVidia's chipset.

BTW, when loading textures from RAM to the videocard, I don't believe the memory controller is in any way a bottleneck, especially since they are loaded only so rarely, and considering the slow bus speed. For transferring geometry data, I can fully believe bus *latency* is a potential bottleneck, but even there not the bandwidth.
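
Rough arithmetic for that claim (the 100 MB texture set and the ~2 GB/s of effective bus bandwidth are assumptions for illustration, roughly AGP 8x / early PCIe x16 territory, not figures from any review):

# Rough estimate: time to push a level's worth of textures across the bus,
# assuming ~100 MB of texture data and ~2 GB/s effective bandwidth (both assumed).
texture_mb = 100.0
bus_gb_per_s = 2.0
upload_ms = texture_mb / (bus_gb_per_s * 1024.0) * 1000.0
print(upload_ms)  # ~49 ms, paid once at level load, not every frame

A one-time cost of a few tens of milliseconds at level load is invisible next to per-frame work, which is why texture uploads are a poor explanation for a steady-state bottleneck.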

>They're looking towards the future, where these features
>will (eventually) make a significant difference.

Dual core? Not any time soon. SLI? For games or platforms where it doesn't help today, what makes you think it will tomorrow? SLI is pretty much transparent from an application POV. 64-bit? Possibly. RAID 0? If his RAID controller sucks, like most do today, so that RAID 0 gives as much of a performance penalty as it gives a boost, well, that won't change tomorrow either.


 

mozzartusm

>Quite a few... wow... do the math: comparing apples to apples (so both using DDR2-533), it's barely 1% on most benches
I don't know what the technical differences are between DDR and DDR2. How do you compare the two?

 

P4Man

>I don't know what the technical differences are between DDR and DDR2. How do you compare the two?

All P4 machines tested used DDR2, so there is no real performance comparison between DDR and DDR2 here. Only the nForce4 officially supports DDR2-667, but there is not much gain (at least not with an 800 MHz FSB).

As to how DDR1 compares with DDR2: simply put, DDR2 supports higher clock rates, and therefore higher memory bandwidth, but at the expense of higher latencies. The end result is more or less that DDR-400 performance is quite similar to DDR2-533, though this may vary a bit per application and even per platform.
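
To put rough numbers on that trade-off, a quick back-of-the-envelope sketch; the CL2/CL4 timings are assumed typical values for the era, not figures from any of the reviews linked above:

# Back-of-the-envelope DDR-400 vs DDR2-533 comparison (one 64-bit channel).
# The CL2 / CL4 timings are assumed typical values, not measured ones.

def peak_bandwidth_gbs(mega_transfers_per_sec, bus_width_bytes=8):
    # theoretical peak bandwidth of one 64-bit channel, in GB/s
    return mega_transfers_per_sec * 1e6 * bus_width_bytes / 1e9

def cas_latency_ns(cas_cycles, bus_clock_mhz):
    # absolute CAS latency in nanoseconds (cycles divided by the memory bus clock)
    return cas_cycles / bus_clock_mhz * 1e3

print(peak_bandwidth_gbs(400), cas_latency_ns(2, 200))  # DDR-400, CL2:  ~3.2 GB/s, 10 ns
print(peak_bandwidth_gbs(533), cas_latency_ns(4, 266))  # DDR2-533, CL4: ~4.3 GB/s, ~15 ns

More raw bandwidth, but noticeably worse absolute latency, which is roughly why the two end up trading blows in practice.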

 

Mephistopheles

What is truly puzzling is why the hell they don't upgrade their traditional FSB system architecture. Heck, with DDR2-667 they will have, to counter the added latencies, a grand total of 10.6GB/s, which is a lot! And that will go completely to waste.
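
Just to spell out the mismatch (rough arithmetic only; the 64-bit widths for the memory channels and the FSB are the standard ones):

# Dual-channel DDR2-667 versus what the front-side bus can actually move.
def bandwidth_gbs(mega_transfers_per_sec, bus_width_bytes=8):
    return mega_transfers_per_sec * 1e6 * bus_width_bytes / 1e9

print(2 * bandwidth_gbs(667))   # dual-channel DDR2-667: ~10.7 GB/s from the DIMMs
print(bandwidth_gbs(800))       # 800 MHz (MT/s) FSB:    ~6.4 GB/s to the CPU
print(bandwidth_gbs(1066))      # 1066 MHz FSB:          ~8.5 GB/s to the CPU

Even a 1066 MHz bus leaves a couple of GB/s on the table, which is the "completely to waste" part.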

If AMD were to implement an ODMC for DDR2-667 right now, it would be a much better beast, because the bandwidth would actually serve a purpose!

Why didn't they get all 6xx series processors to have a 1066 MHz FSB? That beats the hell out of me. It's been a long while since the P4Cs, and we're still using an 800 MHz FSB...

Why didn't they get Smithfield to use a 1333 MHz FSB? That beats the hell out of me too. I mean, the platform - i945/955 - will use DDR2-667. What happened to running FSB and memory synchronously?

Heck, why don't they actually implement a serial bus for CPU-chipset communication altogether? Are they truly going to wait all the way until 2007, when the three magical letters "CSI" will replace the traditional "FSB"? I mean, replacement is great, but... 200*7*??? WTF?
 

Mephistopheles

One more bit of info:
Corsair Leapfrogs DDR2 SDRAM to 800MHz: http://www.xbitlabs.com/news/memory/display/20050408011945.html

Now we've got:
DDR2-800 @ 5-5-5-15
DDR2-667 @ *3-2-2-8*

Rather good timings for DDR2-667, and DDR2-800 is good to go. But where the hell is an opportunity worthy of that memory? Heck, a single channel of DDR2-800 is already enough to keep Intel's 800 MHz FSB completely fed!

Why doesn't Intel spend a little more time tweaking Smithfield for a *late* 2Q05 introduction, but with a higher FSB or something, instead of rushing it out the door with no teeth? Just to try and save face?

 

P4Man

>What is truly puzzling is why the hell they don't upgrade
>their traditional FSB system architecture

They can do two things, and neither of them is easy:
1) Up the frequency. But contrary to popular belief, just because many P4s and motherboards can be overclocked 'stably' to a 1066 MHz FSB doesn't mean it's easy to simply roll it out. In fact, it seems Prescott has a rather hard time running at 1066 MHz, which seems to be the reason 1066 was canned for anything but the EE. It's also pretty hard to design affordable motherboards that can run 110% reliably at such a high frequency with a traditional bus.

2) Do an AMD-style ODMC and HTT. Again, not quite something you can implement overnight. AMD has been working on both since, what, 1998 or so? Intel had decided not to go this route for several reasons, like the Timna failure and the problems this approach poses for high-end servers (memory capacity versus speed), for which they invented FB-DIMM instead. They have more or less recently changed their mind though, and made it public with the CSI announcement, but 2007 to me proves they have been working on it for some time; otherwise you wouldn't likely see it this decade.

IMHO, if you go back ~5 years, Intel bet on IA64 to have mostly replaced (high-end) x86 by this time. It wouldn't make much sense to invest huge sums in a dying architecture. That's the same reason I believe the P4 doesn't have a real successor, only multiple "recycled" cores on a die, with an aging bus infrastructure.

The moment they decided to implement EM64T is the moment they realized IA64 was not going to succeed (as an x86 replacement), and around that time they probably shifted x86 development into overdrive. Results of that (like CSI) will only come in a couple of years; it will be interesting to see how it pans out. But for the next year or two, I wouldn't expect huge leaps, only relatively simple ad hoc improvements (not to say damage control) like multicore and bigger caches...

 

Mephistopheles

What's most frustrating is that they poured billions into IA64, and it will all be for nothing. I mean, had they realized their missteps earlier, they could have invested that pathetic amount of resources into making x86 a better architecture as a whole. They have a lot of resources. It's just sad to see them waste so much of them.

Good thing, then, that they're making course corrections right now. They desperately needed them.
 

mozzartusm

I'm not sure about the EE CPUs, but my 550 runs very stable at 4.2 GHz. The best performance seems to be 3.9-4.0 GHz. That puts my FSB well over 1100.

I keep reading all the negative things that are said about DDR2, but isn't the end result, performance, what really counts? I'm not picking a fight, I'm just trying to understand. I know that the timings are higher on DDR2, but the performance that I get out of mine is far ahead of DDR. I paid $196.00 for 2x256 Crucial Ballistix PC2 5300 RAM. I run it at 4-4-4-6-4 all the way up to DDR2 780 and it blows away the best DDR scores that I have seen. I see the difference in real applications as well as benchmarks. The difference in my system's Folding is huge.

 

mozzartusm

No, it's quiet compared to most setups. It has a front and a rear 120mm fan, plus the two fans in the PSU. The PSU is very quiet. I have both of the 120mm fans bringing air into the case. The PSU and the honeycomb side panel act as the ventilation. I had to go overboard and set up a 2-pump liquid cooling system because the Northbridge gets as hot as the CPU heatsink on my 3.0E P4. Believe it or not, the system stays very cool: high 20s to low 30s C for the CPU and Northbridge. When I push the X800XL hard I drop it down to around 12-15C. Here in Mississippi, when I hit 16C during a hard benchmark the card will reset.

The waterblock on my CPU is one that I made myself. It's not pretty, but it's very efficient. All copper, and it weighs a ton :lol: The waterblocks for the Northbridge and the video card are Koolance waterblocks. The video card waterblock is one of their newest; it folds over both sides so the memory gets cooled also. I fell asleep last night during a benchmark and somehow managed to knock over my reservoir for the video card, so it ran all night without liquid going to the video card. I couldn't believe it, but the Koolance waterblock without any liquid kept the card cool. Maybe not cool, but it wasn't hot either.


 

P4Man

>and people have been running P4s at over 4 GHz for years. what the hell are they waiting for?

Maybe if you had a clue about the complexities involved in the production and validation of such products, you'd understand.

Just as a small hint: if you had purchased a P3 1.13 GHz some years back, I'm most certain you would have claimed it ran completely stable. In fact, you would probably have overclocked it to 1.3 GHz and said it was 100% stable. So why did Intel not release the 1.3, and even recall the 1.13? They must be retarded, I guess...

 

slvr_phoenix

>Nonsense. Yes, AMD has a better architecture for gaming, but in no way would this lead to any expectation that SLI would provide less benefit (or even a penalty) on the P4 platform than for the A64.
Actually, you're wrong. You're clearly not a 3D programmer. If you double the computation and data throughput (AKA SLI) then you double the performance hit from the CPU/memory controller's involvement.

>Would you expect a new single videocard to produce a significantly smaller performance boost on P4 than on A64, perhaps?
Yes, I would, and *especially* for a DX game.

>Would you expect the next ATI/nVidia card to perform a lot faster on AMD but the same or *slower* on Intel for those reasons?
A lot? No. *Some*? Definitely.

>SLI is not that different from a faster GPU if you look at it from the CPU side, so saying you expected this because AMD is faster is horseshit.
You're more or less right about the faster GPU comparison. However, you're wrong about it being 'horseshit'. It's the truth. Any good DX programmer should know this quite well. While ideally the GPU should be doing everything, the reality is that the CPU has to pre-process a lot of data to feed it to the DX/OGL layer efficiently. The more efficient the CPU, the less of a performance hit this incurs.

>I suspect the real reason has more to do with the I/O bandwidth/low latency of the PCIe / HT interconnect on AMD's architecture, whereas Intel's FSB could be a bottleneck. Either that, or an implementation problem in nVidia's chipset.
Suspect all you like, but it doesn't change the simple fact that the CPU is responsible for a lot more of the graphics engine than most people realize.

>Dual core? Not any time soon. SLI? For games or platforms where it doesn't help today, what makes you think it will tomorrow?
What part of "towards the future" don't you understand? You really need to work on your reading context problem. "Towards the future" is farther ahead than "anytime soon". "Towards the future" has nothing to do with applications from "today", nor from yesterday. Get with the program.

 

P4Man

>Actually, you're wrong. You're clearly not a 3D programmer.
>If you double the computation and data throughput (AKA SLI)
>then you double the performance hit from the CPU/memory
>controller's involvement.

What a load of BS. If you are a "3D programmer" you must be the most clueless one I've seen. If you double the processing speed of one part (the GPU), it's obviously true that any other bottlenecks (CPU, memory, bus, ...) will ensure you get diminishing returns from your GPU speed doubling. We see this every day; it's called sublinear scaling. But you would not see a slowdown compared to the half-speed GPU UNLESS you add another bottleneck!

Since you don't change a damn thing about the memory controllers, it doesn't matter just how much of a bottleneck they are; they are *not* the reason you are seeing a slowdown. If the graphics apps were 100% utterly bottlenecked by the MC, doubling the GPU speed would simply result in identical performance, not worse performance. Furthermore, this clearly is not the case anyhow, as faster CPU clock speeds and faster GPUs clearly increase 3D performance (with the very same memory controllers), only, as almost always, sublinearly. But never a negative increase. Are you trolling, or just stupid?
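
To make that argument concrete, here is a toy model with made-up numbers; it assumes CPU setup and GPU rendering are simply serialized per frame, which is a simplification:

# Toy model: frame time = CPU setup time + GPU render time (serialized, a simplification).
# All millisecond figures are invented purely for illustration.

def fps(cpu_ms, gpu_ms):
    return 1000.0 / (cpu_ms + gpu_ms)

cpu_ms, gpu_ms = 8.0, 12.0
print(fps(cpu_ms, gpu_ms))            # single GPU: 50 fps
print(fps(cpu_ms, gpu_ms / 2))        # "double the GPU": ~71 fps, not 100 -> sublinear scaling
print(fps(cpu_ms + 8.0, gpu_ms / 2))  # only an *added* cost bigger than the time saved
                                      # (e.g. driver/chipset overhead) makes it slower: ~45 fps

Halving the GPU term can never push the frame rate below the single-card case on its own; only a new cost somewhere else can.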

>> Would you expect the next ATI/nVidia card to perform a lot
>>faster on AMD but the same or *slower* on intel for those
>>reasons ?

>A lot? No. Some? Definitely.

More nonsense. Can you give just *one* example where, of two otherwise identical videocards, the one with higher-clocked memory or GPU performs *worse* on a given CPU/platform? Because, basically, that is what we are seeing here with SLI on the P4, and what you were magically "expecting". Truth is, the only sensible explanation is that a new bottleneck was added. Could be a driver or implementation issue, but clearly it doesn't have a damn thing to do with memory controllers, and it was not to be expected unless you have a crystal ball or inside info from nVidia. Period.

>> Dual core ? not any time soon. SLI ? For games or
>>platforms where it doesn't help today, what makes you think
>> it will tomorrow ?

>What part of "towards the future" don't you understand? You
>really need to work on your reading context problem.
>"Towards the future" is farther ahead than "anytime soon".
>"Towards the future" has nothing to do with applications
>from "today", nor from yesterday. Get with the program.

You can extend this future as far as you like, but as "any good DX programmer should know", you really don't need any special kind of code to benefit from SLI. So if there is a performance hit today, "future" apps will not magically solve it. Future hardware implementations might, but the poor guy who spent big bucks on his nVidia-for-Intel SLI setup will never benefit from that. Future drivers could cure it as well, if that is where the new bottleneck lies, but I'm curious how that will fit with your "expectations" of reduced speed with SLI on the P4?

But hey, keep arguing; I know how much you love to argue over *anything*, even if you have to defend an impossible, silly statement you made earlier. I'm sure at least some people will think you're pretty clever because of it.

 

slvr_phoenix

>Are you trolling, or just stupid?
Neither. *You're* just completely misunderstanding. I'm not talking about a typical bottleneck or linear scaling. I'm talking about the CPU having to perform tasks prior to an app sending those tasks to the GPU. The more data and tasks sent to the GPU (what people will generally do when they have a better GPU is increase the size and/or quality settings), the more the CPU has to pre-process for the GPU. OGL does less CPU legwork than DX, but then DX has more features that more than make up for this.
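
A crude way to picture that claim (all numbers invented; the only assumption is that per-frame CPU work grows with how much you submit to the API):

# Crude illustration: per-frame CPU cost grows with the number of objects/draw calls submitted,
# so the gap between a fast and a slow CPU widens as SLI lets you crank the settings up.
# All numbers are invented for illustration.

def frame_ms(objects, cpu_us_per_object, gpu_ms):
    cpu_ms = objects * cpu_us_per_object / 1000.0
    return cpu_ms + gpu_ms  # serialized CPU + GPU, a simplification

fast_cpu, slow_cpu = 2.0, 3.0  # microseconds of API/setup work per object
for objects in (1000, 4000):   # "low" vs "high" quality settings
    gap = frame_ms(objects, slow_cpu, gpu_ms=10.0) - frame_ms(objects, fast_cpu, gpu_ms=10.0)
    print(objects, "objects -> CPU-speed gap:", gap, "ms per frame")  # 1.0 ms, then 4.0 ms

The absolute gap between the two CPUs grows as the per-frame CPU work goes up; by itself that produces a widening difference, not an outright slowdown.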

Likewise, to a much lesser extent, the more the GPU has to access system memory for things like textures, bump maps, light maps, etc., the more the memory controller is involved in a latency hit. So the faster the memory controller, the better the GPU's performance, because the lower the latency. Usually in games this latency is only hit once, because a level load crams as much as it can into the GPU's memory at once, and games try to optimize the data size of these. Not all 3D apps are games, however, and thus they are not as optimizable.

The fact that you can't comprehend something so simple is really quite sad. One might wonder if one is intentionally misunderstanding just to argue.

>You can extend this future as far as you like, but as "any good DX programmer should know", you really don't need any special kind of code to benefit from SLI.
You're *clearly* not any good DX programmer, because even here you're completely wrong. Completely ignoring the talks of future API extensions to DX to allow more manual control and usage of SLI, there's the current existing limitation that if your driver *software* doesn't specifically tell the SLI setup how to run the app in SLI then the app doesn't benefit from SLI *at all*. So you clearly *do* need special code in the drivers to benefit from SLI.

In *the future* more games will be added to the drivers. Maybe some kind soul will even add in *old* games to those drivers so that SLI owners can even see benefits on something as ancient as Quake 1. Not that this is probable, but it is certainly *possible* that some time *in the future* this could happen.

 

P4Man

>Likewise, to a much lesser extent, the more the GPU has to
>access system memory for things like textures, bump maps,
>light maps, etc., the more the memory controller is involved
>in a latency hit. So the faster the memory controller, the
>better the GPU's performance because the lower the latency.

Yes, and one and one makes two, but you still have not explained just how you magically expected a performance *decrease* on the P4 using SLI, let alone why the slightly worse memory performance of the P4 versus the A64 would somehow explain this (or have anything to do with it, for that matter). All your arguments simply explain sublinear scaling, but none explain performance degradation.

To degrade performance when increasing the speed of one component (GPU throughput in this case), there is only ONE possible explanation: a new bottleneck has been added, period. Too hard for you?

Now one can guess what that bottleneck is (my guess is driver or chipset implementation), but clearly it cannot be something that has NOT changed (the memory controller) between one and two videocards, let alone that someone would have "expected" this behaviour because of this invalid reasoning.

>You're clearly not any good DX programmer, because even here
> you're completely wrong. Completely ignoring the talks of
>future API extensions to DX to allow more manual control and
> usage of SLI, there's the current existing limitation that
>if your driver software doesn't specifically tell the SLI
>setup how to run the app in SLI then the app doesn't benefit
> from SLI at all. So you clearly do need special code in the
> drivers to benefit from SLI.

More horseshit. Do you consider yourself a good DX programmer? Allow me to laugh when you do.

This is what I claimed: "SLI is pretty much transparent from an application POV." This is what nVidia's programmer's guide states: "In SLI Multi-GPU mode the driver configures both boards as a single device: *the two GPUs look like a single logical device to all graphics applications.*"

How much more transparent can it get? You don't *need* to do *anything* to benefit from SLI (though you can tweak to get maximum performance).

What you are vaguely referring to is the three modes the *driver* supports: compatibility, SFR and AFR. It's not the app that decides, it's the driver, and you can manually override it in the drivers for any existing app, pretty much like you can set AA settings in the app or overrule them in the driver. And just like for AA, you don't need to change a line of code in your game to make use of it, though it can help if you do. So there's more BS from our 3D programming expert. Why don't you read the optimization guides instead of continually posting invalid statements and then hopelessly trying to defend them with even more BS and 10-page-long pointless posts?

 

P4Man

Edit: I misread your second paragraph on SLI; you seem to be roughly saying what I said, only that this again in no way explains what we are seeing here, a performance DEGRADATION which you "expected". If the driver doesn't set the appropriate SLI mode, you can set it manually, and if you don't, it ought to work just as fast as a single card, NOT (considerably) SLOWER, and most certainly not slower on platform A and faster on platform B.

 

slvr_phoenix

>Yes, and one and one makes two, but you still have not explained just how you magically expected a performance *decrease* on the P4 using SLI
Are you really this dense? AMD's CPUs have better FP. AMD's CPUs are running the DX code faster than Intel's. To say the same inversely, Intel CPUs have a performance decrease when compared to AMD's.

Using SLI allows for better quality settings and larger size settings. Higher settings result in higher DX CPU usage. This higher DX CPU usage, combined with AMD's better performance in DX, results in AMD platforms running SLI better than Intel's. It's <expletive never entered> simple.

>let alone why the slightly worse memory performance of the P4 versus the A64 would somehow explain this (or have anything to do with it, for that matter).
It's just as <expletive never entered> simple here as above. AMD's memory system has lower latency and identical bandwidth. So CPU usage of memory runs faster on AMD, so DX runs faster on AMD, and likewise GPU usage of *system* memory runs faster on AMD.

Or do you just think that AMD's better performance in so many games comes from magic fairy dust and prune juice?

>Do you consider yourself a good DX programmer? Allow me to laugh when you do.
Sorry to disappoint, but actually I'm a Python/Qt programmer who does some 3D rendering in OpenGL for multiplatform compatibility. While I'd prefer to use DX, Linux ain't got none. I try to keep up with the technology at home all the same, but I don't think I'd call myself a DX programmer yet, since I haven't made any money (or made anyone happy) doing it. I'm an OGL guy.

>Edit: I misread your second paragraph on SLI; you seem to be roughly saying what I said, only that this again in no way explains what we are seeing here, a performance DEGRADATION which you "expected".
Hello? Who ever said that the performance degradation I expected isn't a simple factor of sublinear scaling between two differing platforms, where one performs a DX task better than the other? DX performs worse on Intel than on AMD. So pushing DX further using SLI results in an *even worse* difference between AMD's and Intel's performance in DX. It's <expletive never entered> simple.

>and if you don't, it ought to work just as fast as a single card, NOT (considerably) SLOWER
You have a real problem understanding what you read, don't you? I never even so much as suggested that without driver support you'd have anything worse than single-card performance from your SLI system.

You seem to be all hopped up on some notion that any performance difference I express is some gigantic thing, when the truth is that I never even so much as implied that. The expected Intel vs. AMD performance hit on SLI is small. It'll only get worse as things advance, but it's still small. Not that the performance race isn't won by small differences. And the performance hit of a driver not supporting your game is dropping to single-card. I never said otherwise. Why do you keep trying to make a mountain out of a molehill?

(And as a side note, for someone who complains so much about dualcores running single-threaded apps like crap, I don't know how you can simultaneously take a stance that using only one card in an SLI system isn't considerably slower. Can't you even be consistent?)
