Lynnfield benchmarks up



I agree with that statement.

Keep in mind that I was once labeled an "AMD fanboy" when I accused the top-end Intel Prescott models of being "factory overclocked". I don't want to seem like we're beating up on AMD, because when Intel had an inferior arch they did the same thing: they ramped up clock speeds until the plastic started to melt.

At work we still have quite a few Dell machines with Prescotts. Because of all the heat, Dell deviated greatly from the stock cooler design and put a massive aluminum tower with copper heat pipes and an extra-large fan exhausting directly out of the case, with its own special CPU air channel made of green plastic.
 


Agreed and agreed.

Good night from Warren (Detroit), Michigan!
 


OK, Intel designed a product around a design envelope that happens to include Turbo mode, depending on number of apps, number of threads, etc. And that envelope MUST reside within the TDP allocated for the chip. With me so far? But you, in order to be 'fair' to AMD, want testers to DISABLE this feature? Jennyh, this is NOT a handicap race. This is NOT a formula car race. This is a 'run-what-you-brung' race, and currently AMD just doesn't have the horsepower to compete across the board.

The i5 is also not the end-all-be-all that the Intel fanbois are touting right now, either. The i7850 comes closer to that mark. But then again, my core business / hobby revolves around graphics apps and CAD apps. I think I fired up a game a time or two this year, so my definition of what a system should be able to do may differ remarkably from many on this website.
 
Also, I find that water cooling is now less popular than it was 5 years ago. It's not needed as much because these CPUs run so much cooler. I absolutely love the temps on the PII 710 machine I built. With the stock cooler it would idle at 2-3 degrees C over room temp. My buddy has a 720 in an HTPC with the same results.


Water IMHO is a waste of money at this time for most rigs.
 


Immaterial, since LN2 will never be feasible in 'daily cooling', unless you move to one of the gas giants like Jupiter, Saturn, Neptune or (my favorite) Uranus 😀.

I used to belong to a local telescope observer's club, and we investigated buying an LN2 generator for use with our CCD cameras for long-term deep-sky imaging. As you may know, CCDs have a dark current noise effect depending on their temperature, so the cooler they are, the longer you can gather photons on them without the image getting washed out by thermal emissions.

IIRC a 13-liter-per-day LN2 generator, using bottled N2 gas as a source, required a 220V 20A outlet for the multistage compressor and chiller, not to mention a big Dewar to hold the LN2. And the cost, with the club 'discount' on club letterhead, was $27K just for the compressor. Since we had about 20 members, of which 10 could be counted on to contribute, it was out of the question.

I haven't run the numbers but I would bet a dollar that a 300+ watt heat source like an oc'd CPU will use a LOT more than 13 liters of LN2 per day of use. That's why those suicide runs at 7GHz you fondly refer to, on really leaky and commercially unavailable CPUs, require 500+ liters just to complete 😀.
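For the curious, the bet is easy to check. Here's a rough boil-off estimate using approximate textbook values for LN2's heat of vaporization and density (my numbers, not from the post):

```python
# Physical constants for liquid nitrogen (approximate textbook values):
LATENT_HEAT_KJ_PER_KG = 199   # heat of vaporization
DENSITY_KG_PER_L = 0.807

def ln2_liters_per_day(heat_watts):
    """Liters of LN2 boiled off per day by a constant heat source,
    assuming all the heat goes into vaporizing the nitrogen."""
    joules_per_day = heat_watts * 86_400                 # W x seconds/day
    joules_per_liter = LATENT_HEAT_KJ_PER_KG * DENSITY_KG_PER_L * 1_000
    return joules_per_day / joules_per_liter

print(round(ln2_liters_per_day(300)))  # ~161 L/day for a 300 W source
```

So a 300 W heat source would indeed boil off roughly 160 liters a day, an order of magnitude more than the 13 L/day generator could supply.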
 


You win! LOL!

Methinks maybe jennyh, whom I bet lives somewhere in Great Britain, is up past his or her bedtime 😀.
 


Absolutely.

During testing, Intel does measure the highest temp a CPU could conceivably get to; there's a test sequence known colloquially as the "power virus" (because it keeps generating more power, which slows the chip down, hence generating more power to run the test... etc.). We used to quote this as max power or something equally sensical, but there were two problems with this:

1) the power virus was never in any way indicative of real world conditions. (And believe me, any attempt by anyone internal to suggest that "it's something that the customer will rarely, if ever see" was shot down faster than you can say "Pentium debacle"). It was a theoretical maximum, but it was actually impossible to duplicate in a real system for various reasons.

2) since no application ever came remotely near this power (and everyone knew it), some motherboard and heatsink/fan makers tried to save cash by designing around what their best guess of the power dissipation would really be. Sometimes they guessed wrong, and since Intel's name was on the CPU we caught the blame (most consumers don't even know what a motherboard is). Something needed to be done to make the design power numbers rational, and TDP was the result.

There's actually considerable effort put forward in each processor generation to characterize (and re-characterize) what the appropriate mix of conditions is in order to closely measure the TDP. It's in Intel's interest to make the TDP as accurate as possible because without it the part's going to throttle a lot and look like crap in the field (and, of course, in benchmarks).

It's a marketable term, but it's not a marketing term. If it were subject to significant tinkering from folks who just want to make a sale, you'd see a bunch of "35W TDP" parts and motherboards and coolers drastically undersized for the power delivery and dissipation required.
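To illustrate why an understated TDP makes a part "look like crap in the field", here's a deliberately simplified toy model (my own sketch, not Intel's actual throttling algorithm):

```python
# Toy model: a chip whose cooling/power delivery is sized to TDP must cut
# clocks whenever sustained power exceeds that budget. Power is assumed
# roughly linear in frequency here, which is a simplification.
def settled_freq(workload_watts, tdp_watts, base_mhz=2660, steps=10):
    """Final clock after the throttling loop settles."""
    freq = base_mhz
    for _ in range(steps):
        power = workload_watts * (freq / base_mhz)
        if power > tdp_watts:
            freq -= 133  # drop one speed bin
    return freq

print(settled_freq(90, 95))   # accurate TDP: stays at full 2660 MHz
print(settled_freq(120, 95))  # understated TDP: settles well below base clock
```

If real workloads routinely exceed the published TDP, the part spends its life throttled; that's the incentive for Intel to characterize it accurately.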
 
Why, when the i5-750 maxes out at 3.2GHz for a single core for a 95W TDP, would 4GHz on all cores be 95W? I'm clearly missing something and I want to understand where you're coming from on this.
 
Hey TC, don't feed the troll. 😉

I don't see what this fuss about Lynnfield all over the forum is. There shouldn't be much discussion about its performance and features at all, because we knew all of that months ago, before the NDA was lifted.
(Actually, the BCLK/PCIe ratio tie-in was the only thing I didn't expect.)

The thing we should be discussing about Lynnfield at this stage is how the BCLK/PCIe ratio works. Anand said the divider ticks over every 33MHz. *nostalgic thoughts about the 440BX* 132MHz PCIe shouldn't be much of a problem on most gfx cards.
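One way to read the tie-in (an assumption on my part, since the exact divider behavior isn't spelled out here): if the PCIe clock stays locked to BCLK at the stock 100:133 ratio until the divider ticks over, a healthy BCLK overclock lands right at that 132MHz figure:

```python
# Hypothetical sketch: PCIe clock tied to BCLK at the stock ratio.
STOCK_BCLK = 133.0  # MHz
STOCK_PCIE = 100.0  # MHz

def pcie_clock(bclk_mhz):
    """PCIe clock if the BCLK:PCIe ratio is left at its stock value."""
    return bclk_mhz * (STOCK_PCIE / STOCK_BCLK)

print(round(pcie_clock(176)))  # ~132 MHz at a 176 MHz BCLK overclock
```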

The thing we didn't know about till now was price. Now that we do, expect AMD to react to it.
 


So you're not really giving the maximum TDP of the chip are you? You mention that you 'test the maximum heat', giving examples of what happens during that, then you give the impression that the end TDP is 'accurate as possible', when really that's not the case is it?

Marketing. And you work in Intel's marketing dept, for sure.
 


The poster explained as best as he is allowed to, and I for one found it a very lucid explanation. It also tracks well with what I know from some current and former Intel engineers. But you seem to let your veil of AMDism cloud your vision and your judgement. How dare you denigrate a fellow poster, simply because they tell you the truth? Because it doesn't fit in with the 'AMD is better' philosophy that you adopt? You now know far more about how Intel calculates its TDP than AMD will ever divulge about its ACP. Look up the stats on spec.org's power charts. Or is SPEC now in the pocket of Intel as well?
 


A Phenom II bottlenecks SLI or CrossFire setups with high-end cards. So the niche you are talking about will never spend less money on a PII processor only to have the system bottlenecked by the CPU. That same niche will be angry at the x16 lanes on the i5 the same way they would be angry at a PII, because they want a system with maximum speed; they don't care about money, only about the speed of the system. They will never adopt a CPU that bottlenecks the GPUs. So for these people the alternative is not the Phenom BUT the Core i7. A Phenom II is a good alternative for budget dual-GPU users right now, but those are a real minority indeed.

Get your facts straight; from what I've seen so far, you haven't said anything concrete.

As for the part about Turbo, that's the same old tale as PhysX in the Nvidia vs. ATI debate. I'm sure if AMD had thought of Turbo first you would be saying different things, wouldn't you? Processors must be compared out-of-the-box; that is the only fair comparison you can make. The rest is speculation by fanboys, of one side or the other.
 



Wow, you continue to amaze with these dumb questions.

There is no answer to this. Each CPU will be different, depending on the voltage needed to reach 4GHz. IT WILL DRAW OVER 95 watts. Why can't you understand this? And you can't measure the power draw for each core.

If you run a PII X4 @ 4GHz it will draw more power than it's rated for.

You clearly have no idea how overclocking works at all. And I have pointed this out to you already in this thread. It just doesn't get through your thick skull. Yet you can talk about 7GHz CPUs, water cooling, and LN2 cooling, and still be CLUELESS about power draw and overclocking.

Epic fail.
 
 


Where were you when I bought my EVGA 680i board some 2.75 years ago?? :)

I agree - It'll be a cold day in hell before I buy another nVidia chipset board. The little NB cooler on mine sounds like a blender chewing on marbles...
 


Depends on voltage. A good way to calculate the max TDP of an OC'd chip is:
Code:
New TDP = (Original TDP)*(New Clock/Stock Clock)*(New Volt/Stock Volt)^2

so for my PII X4 810 @ 1.2v (1.325v stock):
Code:
77.9w ≈ (95w)*(2600MHz/2600MHz)*(1.2v/1.325v)^2

so let's say the i5 750 at 4GHz at 1.475v, with say a 1.25v stock VID:
Code:
198.5w ≈ (95w)*(4000MHz/2666MHz)*(1.475v/1.25v)^2
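The rule of thumb above can be wrapped in a quick script. Note this is the common P ∝ f·V² approximation, not a vendor-published formula, and the 1.25v i5-750 VID is the poster's assumption:

```python
# Rule-of-thumb overclocked power estimate: P ~ f * V^2
def scaled_tdp(stock_tdp_w, stock_mhz, new_mhz, stock_v, new_v):
    """Estimated power at new clock/voltage, scaled from the stock TDP."""
    return stock_tdp_w * (new_mhz / stock_mhz) * (new_v / stock_v) ** 2

# PII X4 810 undervolted to 1.2v (1.325v stock), clock left at 2600 MHz:
print(round(scaled_tdp(95, 2600, 2600, 1.325, 1.2), 1))   # ~77.9 W

# i5-750 at 4 GHz / 1.475v, assuming a 1.25v stock VID:
print(round(scaled_tdp(95, 2666, 4000, 1.25, 1.475), 1))  # ~198.5 W
```

Note the undervolted example actually works out to about 77.9 W with these inputs, and the 4 GHz case lands at roughly double the 95 W rating, which is the whole point being argued above.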
 
(whips out poking stick)

Let's say I take a Phenom 940BE.

I leave the vcore at stock and simply increase the CPU multi to 17 and the IMC/NB multi to 12 ---- so that is 3.4GHz and 2400MHz NB.

I'm operating within thermal design, little impact on temps and the AMD base clock remains at 200MHz.

Have I really overclocked?

Am I running 4 cores at Turbo speed? (Well, since it's AMD I'll say I've gone SuperSonic mode :lol: or maybe Seismic - for 'earth shattering performance').

Inquiring minds want to know ....
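For anyone following along, the clocks in that scenario are just base clock times multiplier:

```python
# AMD clock math: effective clock = base (reference) clock x multiplier.
BASE_MHZ = 200  # stock AMD base clock, left untouched in the scenario above

print(BASE_MHZ * 17)  # 3400 MHz core clock with the x17 CPU multi
print(BASE_MHZ * 12)  # 2400 MHz IMC/NB clock with the x12 NB multi
```

Since only the multipliers moved and the base clock stayed at 200 MHz, nothing else (HT link, memory) was dragged along for the ride, which is what makes the "have I really overclocked?" question a fair poke.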