nForce4 Intel Edition is good!


P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
>You have a real problem understanding what you read, don't
>you? I never even so much as suggested that without driver
>support you'd have anything worse than single-card
>performance from your SLI system.

FFS, did you even read the article I quoted? The Intel SLI platform has some issues where a SINGLE videocard setup outperforms an otherwise identical DUAL SLI videocard setup on the SAME platform, SAME chipset, SAME CPU, SAME benchmark, and by a non-trivial amount. Christ, that's what I've been arguing all along: there has to be some *other* bottleneck.

What you say could indeed explain performance scaling differences between A64 and P4, but not the SLI negative scaling seen in that review (and frankly, not to that extent either). Hence, my comment: there is an issue with P4/nForce SLI performance somewhere (chipset, driver, whatever), and that is not something anyone could have "predicted".

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
>And as a side note, for someone who complains so much about
>dualcores running single-threaded apps like crap, I don't
>know how you can simultaneously take a stance that using only
> one card in an SLI system isn't considerably slower. Can't
>you even be consistent?

Huh? To take advantage of dual core, you need new software, and not even just a simple recompile. Single-threaded (CPU-limited) apps need to be redone from scratch, which will take years IF they get redone at all, and IF they can ever benefit from TLP. And until we get there, we also get reduced single-threaded performance from DC chips (per $, per W, per mm², and in absolute terms).

With SLI you (should) get an instant performance boost with any GPU-limited 3D app, even a 10-year-old one. I'm not saying it's worth the money or anything, but there is a huge difference there.
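The restructuring P4Man is talking about can be sketched in a few lines (a toy illustration, not anything from the thread): a CPU-bound loop written the ordinary way can only ever occupy one core, and splitting the same work across worker processes is exactly the kind of rewrite a second core needs before it helps at all.

```python
import multiprocessing as mp

def count_primes_range(lo, hi):
    """CPU-bound work: naive prime count over [lo, hi)."""
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def count_primes_serial(limit):
    """How a single-threaded app runs today: one core does
    everything, a second core on the package just sits idle."""
    return count_primes_range(0, limit)

def count_primes_dual(limit):
    """The same job rewritten for two workers -- the kind of
    restructuring needed before a second core pays off."""
    halves = [(0, limit // 2), (limit // 2, limit)]
    with mp.Pool(2) as pool:
        return sum(pool.starmap(count_primes_range, halves))

if __name__ == "__main__":
    # Same answer either way; only the second version can use core #2.
    print(count_primes_serial(50_000) == count_primes_dual(50_000))
```

The serial version is not one recompile away from the dual version: the work had to be partitioned and the results merged, which is the cost the post is pointing at.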

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
>Can't you even be consistent?

Huh? To take advantage of dual core, you need new software, and not even just a simple recompile.
...
With SLI you (should) get an instant performance boost with any GPU-limited 3D app, even a 10-year-old one. I'm not saying it's worth the money or anything, but there is a huge difference there.
You have real context issues, you know that, don't you? No one was arguing the short- or long-term merits, or the difference between a dualie CPU and a dualie GPU setup.

I was merely pointing out that in one case you're saying that running only one of two processors is slow, but in another case you're saying that running only one of two processors isn't considerably slower. Neither rewriting code nor cost was ever involved in the observation.

<pre> :eek: <font color=purple>I express to you a hex value 84 with my ten binary 'digits'. :eek: </font color=purple></pre><p>@ 185K -> 200,000 miles or bust!
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
FFS, did you even read the article I quoted?
No. Would there be a reason to?

The Intel SLI platform has some issues where a SINGLE videocard setup outperforms an otherwise identical DUAL SLI videocard setup on the SAME platform, SAME chipset, SAME CPU, SAME benchmark, and by a non-trivial amount.
And you think that's new? A lot of reviews are finding the same thing happening, both on Intel and AMD platforms. The general consensus is that in some cases SLI just goofs up. It's as simple as that. There's a bug in the system. Go figure, it being cutting-edge brand-new tech and all. :\

and that is not something anyone could have "predicted".
Boy oh boy. Now you're telling me that bugs can't be predicted to be in brand new first-version hardware! What next? Bugs can't be predicted to be in new software either? **ROFL** Even if I <i>had</i> been talking about that, which I hadn't, it's still incredibly predictable that something bad could happen on untested software when you toy with the hardware like that.

Come on. With the way some 3D engines do weird things to optimize, it's no wonder that on occasion this causes a clash that SLI has a problem figuring out how to handle. It's probably from inter-card communication, as it has to copy a lot from one memory system to the other or something. That's why non-supported games default to running with just one card. It's the safest way to run.

<pre> :eek: <font color=purple>I express to you a hex value 84 with my ten binary 'digits'. :eek: </font color=purple></pre><p>@ 185K -> 200,000 miles or bust!
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
>I was merely pointing out that in one case you're saying that
> running only one of two processors is slow,

No. <i>Thermal limitations</i> ensure these processors have to be clocked lower than otherwise equivalent single-core variants. I'm sure you have noticed Smithfield being capped at 3.2 GHz? (3.4 for the ultra-expensive one.) 3.2 GHz is "slow" and sucks compared to 3.8 GHz for less money and less heat, again for ST code like games. Clear enough now?
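The clock gap being cited translates directly into a single-threaded shortfall; a back-of-the-envelope sketch, assuming (simplistically) that ST performance scales linearly with clock on the same core:

```python
def st_deficit(dc_clock_ghz, sc_clock_ghz):
    """Single-threaded shortfall of a lower-clocked dual core
    vs. a single core of the same microarchitecture, assuming
    performance scales linearly with clock speed."""
    return (sc_clock_ghz - dc_clock_ghz) / sc_clock_ghz

# Smithfield's 3.2 GHz cap vs. a 3.8 GHz single-core Prescott:
print(f"{st_deficit(3.2, 3.8):.1%}")  # ~16% less ST performance
```

A linear-with-clock model flatters neither side (memory latency doesn't scale with clock), but it puts the argument's "15-20%" range in the right ballpark.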

As for SLI, an SLI setup is quite a bit more expensive than a single card, but at least it should not show slowdowns at all compared to a single GPU, and it should provide a benefit for nearly *every* GPU-limited app. F*cking big difference.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
>And you think that's new?

No, I was just pointing out that this clearly indicates a problem other than Intel's memory controller, which you threw in, and that this was not simply "to be expected". That's like saying it's to be expected that upcoming Xeons with CSI will sometimes run considerably slower when you add a second CPU, because it's "new tech" and all that. Do you expect that, perhaps?

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
3.2 GHz is "slow" and sucks compared to 3.8 GHz
Geeze. I was trying to be kind. Now you're saying that 3.2GHz is slow compared to 3.8GHz, but one proc of a dual system (CPU or GPU) isn't slow. **ROFL** Nice.

And yet again you evince a complete disregard of context as neither money nor heat were relevant to the observation. Something isn't slow because it costs more. It's slow because of performance, period. Cost is a completely different topic.

As for SLI, an SLI setup is quite a bit more expensive than single, but at least should not show slowdowns at all compared to single GPU, and should provide a benefit for nearly *every* GPU limited app. F*cking big difference.
Two procs through dualcore (or even a dualie mobo) will also be quite a bit more expensive than a single, will likely show the same amount of slowdowns compared to a single CPU (in both cases there <i>will</i> be rare instances of this), and will provide a benefit for nearly every user by, at the absolute minimum, moving all other tasks onto a different CPU than the one running the main thread.

To someone who isn't blinded by their own opinion there is no real difference.

<pre> :eek: <font color=purple>I express to you a hex value 84 with my ten binary 'digits'. :eek: </font color=purple></pre><p>@ 185K -> 200,000 miles or bust!
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
You still don't get it, do you? If I add a second GPU, the first one won't become any slower. If Intel or AMD add a second core to the package, the entire chip, both cores, has to be clocked slower to stay within workable power limits.

<b>That</b> is what makes dual cores "suck" at ST code, like games. Not the fact that one core is sitting there idling, I couldn't care less, so many parts of a CPU are idling all the time, but the fact that 2x the number of cores results in *slower* performance overall on ST code. The end result matters, and for DC as it is, that means *lower* performance for ST code like games.

Only when either Intel or AMD manage power consumption to the point where both SC and DC can clock just as high will I stop bitching about this, and will it become a non-issue, because you will no longer have to choose between best possible ST performance and best possible MT performance: you will have both.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
and that this was not simply "to be expected". That's like saying it's to be expected that upcoming Xeons with CSI will sometimes run considerably slower when you add a second CPU, because it's "new tech" and all that. Do you expect that, perhaps?
Actually, yes, I do expect that there is a possibility for that and/or other bugs. I expect such out of <i>every</i> new tech. Usually there <i>are</i> bugs. It's just a question of how impactful those bugs actually are. But I almost never buy the first version of any software or hardware for that reason. I generally wait for at least one revision so that the initial bugs are solved. Humans are fallible. As such, so are their works.

<pre> :eek: <font color=purple>I express to you a hex value 84 with my ten binary 'digits'. :eek: </font color=purple></pre><p>@ 185K -> 200,000 miles or bust!
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
Ah yes, but then anything is possible, so everything is to be expected, and as such I will not be surprised by a thing. If Intel goes bankrupt next month, you will not be surprised, will you? Nah, in fact you saw it coming. You're a true genius, SLVR, now if only you learned some basic math...

To the point though, it's funny how you changed your claims: first you said you were not surprised because of the difference in architecture and memory controller, and like every good DX programmer knows, blah blah blah; when that clearly did not explain the results, you changed the claim to say you're not surprised because it's a bug, and bugs are bound to happen...

Guess what, Einstein, <b>that was my point</b>: it's a bug/issue with nVidia's nForce SLI for Intel solution, not something logically resulting from different CPU architectures or ODMCs, as you claimed, and therefore "to be expected to happen on Intel's nForce".

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
You still don't get it do you ?
No, I still don't agree with your opinion. And I probably never will. That's the fun of individuality. We don't have to agree on everything.

If I add a second GPU, the first one won't become any slower.
That's entirely debatable in and of itself. In theory it won't be, so long as you force the graphics to run as a single-card system, but there's already a halving of the bandwidth (which luckily isn't fully used at this point, so it's not a serious concern), and there already <i>have</i> been observed instances where, running full SLI without manual control of the operation, <i>sometimes</i> the performance <i>is</i> lower.

If Intel or AMD add a second core to the package, the entire chip, both cores have to be clocked slower to stay within workable power limits.
This is also debatable. They are doing this because it's easy and will make them more money by giving them a lot more room to advance their speed afterwards. They don't necessarily <i>have</i> to do this. Intel, in all of their insanity, could yet again up the power and cooling specs for DC systems to release it at full speed <i>if</i> they wanted to. <i>And</i> I bet that serious OCers will prove theoretical headroom for both SC and DC on the same process that exceeds the highest SC stock clockspeed.

That is what makes dual cores "suck" at ST code
In your opinion. My opinion however looks at more facts to get a 'bigger picture' feeling and makes yours look, well, silly.

1) The <i>undiluted</i> performance loss from a slightly lower clock speed is still only a few percent at absolute worst. Oh darn. <sarcasm><i>Three less FPS. God, that's </i>so<i> crap.</i></sarcasm>

2) For someone who says "The end result matters" you don't really look at the whole picture of the end result very well. Theoretically, for <i>most</i> people (since <i>most</i> people <i>don't</i> tune their system well), the performance gain from freeing up an entire proc to run the main thread, away from all of the other running tasks (like software firewalls and antivirus products), will actually counteract the performance drop from running at a slightly lower clockspeed.

And this is even more true of 'hardcore' gamers with all of the little added extra bells and whistles like voice chat during gameplay, camera chat during gameplay, camera-based panning, etc. Why, these fine dedicated folks will see a <i>significant</i> improvement with the lower-clocked DC over the higher-clocked SC.

3) It's still not even set in stone how expensive a dualcore CPU will be compared to the highest-speed singlecore CPU. Likely after six months to a year of desktop dualcore this price argument will have completely gone away anyway. Maybe even in less time than that.

And, again, price is a completely different subject <i>and</i> IMHO moot. Anyone looking at the fastest SC or the fastest DC <i>isn't</i> concerned about price, period. So even if it wasn't a different subject, it's still a meaningless argument for this situation.

So in conclusion, tell me again, how is it <i>not</i> contradictory to be anti-DC and pro-SLI?

<pre> :eek: <font color=purple>I express to you a hex value 84 with my ten binary 'digits'. :eek: </font color=purple></pre><p>@ 185K -> 200,000 miles or bust!
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
Are you one of those people who claimed a 1.3 GHz Willamette + SDRAM didn't suck, because price is a moot point, its overall performance was still respectable, you shouldn't compare it to a Pentium 3 because no one said you should, prices would soon drop, and the clock speed of the P4 would increase?

For gaming, Smithfield is to Prescott (and A64) what a 1.3 GHz SDRAM Willamette was to a 1 GHz P3: ~10-20% slower and ~50-100% more expensive... therefore: crap.

>> If I add a second GPU, the first one won't become any
>> slower.

>That's entirely debatable in and of itself.

Not really.. as you pointed out yourself:

>In theory it won't be so long as you force the graphics to
>run as a single card system, but there's already a halving
>of the bandwidth

there is no halving if you don't use the second card. Bandwidth halving per core (or per GPU) just results in sublinear scaling, but any scaling is good as long as it's positive scaling, not negative scaling like for the Pentium D.
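The distinction between sublinear and negative scaling can be made concrete with a toy Amdahl-style model; the fractions and overheads below are illustrative assumptions, not measurements from any review:

```python
def sli_speedup(gpu_fraction, overhead):
    """Toy model of two-card scaling: the GPU-limited share of
    frame time splits across two cards, everything else stays
    serial, and inter-card overhead is added on top."""
    return 1.0 / ((1.0 - gpu_fraction) + gpu_fraction / 2.0 + overhead)

# Halved per-card bandwidth costs some efficiency -> sublinear, still a win:
print(round(sli_speedup(0.9, 0.05), 2))  # 1.67x
# Pathological overhead (a bug) -> slower than one card, the review's anomaly:
print(round(sli_speedup(0.9, 0.60), 2))  # 0.87x
```

The point of the model is only the sign of the result: resource sharing alone can't push the speedup below 1.0x, so negative scaling implies added overhead somewhere in the chain.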

>> If Intel or AMD add a second core to the package, the
>>entire chip, both cores have to be clocked slower to stay
>>within workable power limits.

>This is also debatable.

Indeed this one is. And that is the core issue. I'm not bitching about the theory of DC, but about the reality. For Intel, for now, the reality is still that dual core is significantly slower than single core on single-threaded code. If/when they work out the thermal problems to solve that, I'll stop complaining. Just like people stopped complaining when the P4 reached 2 GHz, got DDR, and moved to 130nm.

>1) The undiluted performance loss from a slightly lower
>clock speed is still only a few percentage at absolute
>worst. Oh darn. <sarcasm>Three less FPS. God, that's so
>crap.</sarcasm>

Another way of looking at it is that a 50% cheaper and cooler CPU will give you *better* performance. That makes the expensive one crap in my book (for ST code), regardless of whether you are comparing SC with DC or A64 with P4. As it is, the P4 is also a crap choice for gaming. Not because its absolute performance is so terrible, but because the same performance is achieved with a significantly cheaper (and cooler) A64. 10 FPS may not be much, but a couple hundred dollars is significant when choosing one CPU over the other.

>2) For someone who says "The end result matters" you don't
>really look at the whole picture of the end result very
>well. Theoretically for most people (since most people don't
> tune their system well) performance gain from freeing up an
> entire proc for running the main thread from all of the
>other running tasks (like software firewalls and antivirus
>products) will actually counteract the performance drop from
> running at a slightly lower clockspeed.

Horseshit. Back it up! Show me that running an idle AV, firewall, IM, etc. comes anywhere *near* a 15-20% performance drop while playing a game, and that this performance drop would be "counteracted" by a dual-core CPU. Any drop is much more likely due to running out of RAM or hard disk access, and in either case a second core won't do squat. I have 10+ background apps running, including firewall, AV, several IMs and whatnot. I just ran perfmon for a while, and all combined they average around 1.2% CPU, most of which I assume was perfmon itself redrawing the charts.
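The perfmon observation is easy to reproduce in miniature with nothing but the standard library: a process that is blocked most of the time (which is what an idle AV or IM client amounts to) accrues almost no CPU time relative to wall-clock time.

```python
import time

def cpu_share(interval_s=0.5):
    """Fraction of wall-clock time this process actually spent
    on a CPU over the interval -- near zero while blocked."""
    wall0, cpu0 = time.monotonic(), time.process_time()
    time.sleep(interval_s)  # stand-in for an idle background app
    wall1, cpu1 = time.monotonic(), time.process_time()
    return (cpu1 - cpu0) / (wall1 - wall0)

print(f"{cpu_share():.2%}")  # typically a small fraction of a percent
```

This only measures the current process; system-wide figures like perfmon's need OS counters, but the principle, blocked processes cost essentially nothing, is the same.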

>3) It's still not even set in stone how expensive a dualcore
> CPU will be compared to the highest speed singlecore CPU.
>Likely after six months to a year of desktop dualcore this
>price argument will have completely gone anyway. Maybe even
>in less time than that.

Sure it is, Intel Pentium D prices have been published:

Smithfield 820 (2.8GHz) - $240
Smithfield 830 (3.0GHz) - $314
Smithfield 840 (3.2GHz) - $528

Compared to (pricewatch prices):

Prescott 2.8 GHz - $149 (61% increase)
Prescott 3.0 GHz - $162 (94% increase)
Prescott 3.2 GHz - $197 (168% increase)

(note: Pricewatch prices for Smithfield will likely be lower than official prices, but P4E prices will also drop upon release of Smithfield).
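As a quick sanity check on those premiums, using exactly the prices listed above:

```python
# (Smithfield launch price, Prescott Pricewatch price) per clock, in USD
prices = {"2.8 GHz": (240, 149), "3.0 GHz": (314, 162), "3.2 GHz": (528, 197)}

for clock, (dual, single) in prices.items():
    premium = (dual - single) / single
    print(f"{clock}: {premium:.0%} premium for the dual core")
```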

If single-threaded performance is all you care about (like gamers), paying roughly twice as much for the same performance is a crap choice in my book.

That said, absolute prices for Smithfield are quite reasonable, and for a lot of people this chip *will* make a lot of sense, I never said anything else, but <b>not for someone only concerned about framerates</b>. It's either slower or more expensive or both. You seem to miss that I'm only saying it's crap for ST code. It simply is. Just like A64 is a crap choice if all you care about is 3D rendering or video encoding speed.

> And, again, price is a completely different subject and
> IMHO
> moot. Anyone looking at the fastest SC or the fastest DC
> isn't concerned about price

Fine, ignore it. The fact still is that anyone looking for the fastest ST/gaming platform would be nuts to get a Smithfield, even if it were given away for free. Compared to a single-core P4EE (let alone any A64/FX) its gaming performance is absolutely crap. In fact, it won't be better than his 2-year-old NW. How's that for progress? Smithfield will be great for a lot of things for the money, but gaming just isn't one of them, and isn't going to be for at least 2 years. Capiche now?

= The views stated herein are my personal views, and not necessarily the views of my wife. =<P ID="edit"><FONT SIZE=-1><EM>Edited by P4man on 04/14/05 06:53 AM.</EM></FONT></P>
 

tedlogun

Distinguished
Mar 6, 2005
39
0
18,530
what is it you two are arguing about?

yes, a DC P4 will be worse (no crap) at single-threaded apps
yes, a DC 2.8 GHz Pentium will (most likely) destroy an equivalently priced Athlon 64 in any multi-threaded app

of course the price is going to increase due to the 2x die size

and there are going to be P4E price drops <A HREF="http://theinq.net/?article=22498" target="_new">http://theinq.net/?article=22498</A>

----------------------------------------------
great minds think alike, fools seldom differ...
 
