THG (P)reviews "Core 2 Quadro" - aka Kentsfield!

So is it worth upgrading to Core 2 Duo now? (I'm dreaming of E6600 performance.)

Or wait until the Core 2 Quadro shows up, while the Core 2 Duo price tag drops?

Or wait for whatever comes after the Core 2 Quadro with even better performance per watt?

😛
 
I must admit, I am greatly surprised by these results. Everything I know about computer architecture and software optimization would tell me this quad core would be a killer in name only: perhaps slightly more performance, but not enough to justify a large price tag. Boy, was I wrong!

I'm still having difficulty with these results. Sandra's in particular are nothing short of off the charts. I honestly don't want to jump on the wrong bandwagon here, but given the wide gap in performance, too wide a gap, I'm wondering whether it's just that the benchmark programs like this CPU while other apps don't.

I say this having only read most of the first page of comments, not all six pages, so I apologize if I missed a big discussion on this. But independent of our opinions, something does appear fishy about the miraculous benchmark results. I'm not screaming conspiracy, nor am I saying this because I'm a blind, ignorant "follow-AMD-to-my-death" type; it's just that it looks way too good to be true.
 
So is it worth upgrading to Core 2 Duo now? (I'm dreaming of E6600 performance.)

Or wait until the Core 2 Quadro shows up, while the Core 2 Duo price tag drops?

Or wait for whatever comes after the Core 2 Quadro with even better performance per watt?

😛

Depends on what you are using the computer for. If it's a lot of video encoding or rendering, you could consider waiting for Core 2 Quadro.

For the rest (office, games, development), the difference with current software will be small.
 
With all these cores I would like to see them have the ability to power off when they are not needed... no sense, really, in having four cores sitting about doing nothing at idle.
K8L looks to have the ability to run the clock for each core independently, although it looks as if they will all share the same VCore value. To quote from this article at HardOCP:
‘Each core will carry its own PLL or "clock-gen." This will allow the Quad-Core Opteron and Athlon 64 FX processors to scale down their power consumption very intelligently from being fully loaded.’
This is one area that Intel needs to address at some point; hopefully as soon as the first 45nm quads.

I never had a problem with Prescott's heat level; my argument was that between Prescott and AMD you could get the same performance with a cooler chip. Personally, I say just buy a solid aftermarket heatsink, slap it on, and BAM, your temps should drop into the 50s under load, which is warm but nothing ridiculous.
For people who like to cool their CPU quietly and/or contribute a bit less to global warming, this was an issue with Prescott and will continue to be an issue with quad core.

It's consuming so much power; how can this be good? It will produce so much heat that it will be like a furnace in the case.
I think it might be a good idea to wait for 4x4 and then compare the two. If they are both power hungry, then I would just get a Conroe.
It’s likely that they will both be power hungry, but it is easier to cool two 65W CPUs than one 130W CPU. If Kentsfield is ~125W at 2.67 GHz, then if you overclock it to 3.33 GHz it’s going to be north of 150W. I’d rather be cooling two 80W Woodcrests than one 150W Kentsfield and the same may be true for 4x4 CPUs once they are at 65nm.

The power consumption figures could have been explained more clearly:

“The test results show that the test platform at full load eats up 260 W worth of energy. By comparison, the Core 2 Extreme X6800 system consumes 165 W at full load.”

Since the Kentsfield system consumes 95W more at the wall under load, once you account for the inefficiency of the power supply and the VRM you are left with a difference of roughly 60W at the CPU. Add that to the 80W of the X6800 and you get ~140W.
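
A rough sketch of that arithmetic, assuming around 80% efficiency for both the PSU and the VRM (my own ballpark figures, not numbers from the article):

```cpp
// Back-of-the-envelope estimate of Kentsfield CPU power from the wall figures.
// The two 0.80 efficiency values are assumptions for illustration only.
#include <iostream>

int main() {
    const double kentsfield_wall = 260.0; // W at full load (from the article)
    const double x6800_wall      = 165.0; // W at full load (from the article)
    const double psu_efficiency  = 0.80;  // assumed AC-to-DC efficiency
    const double vrm_efficiency  = 0.80;  // assumed VRM efficiency

    const double wall_delta = kentsfield_wall - x6800_wall;                 // 95 W at the wall
    const double cpu_delta  = wall_delta * psu_efficiency * vrm_efficiency; // ~61 W at the CPU
    const double x6800_cpu  = 80.0;                                         // W, X6800 under load

    std::cout << "Extra CPU power: ~" << cpu_delta << " W\n"
              << "Estimated Kentsfield CPU power: ~" << x6800_cpu + cpu_delta << " W\n";
}
```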

As a comparison Anandtech ran a test of a Dual Woodcrest 2.66 Mac Pro which showed it consuming 225W whilst running CineBench. That puts the Kentsfield system in the shade.

I wonder whether the temperature readings are from the motherboard CPU sensor or from Intel’s Digital Thermal Sensor (DTS) which is on the newer CPUs and offers a more useful temperature reading.
 
I never had a problem with Prescott's heat level; my argument was that between Prescott and AMD you could get the same performance with a cooler chip. Personally, I say just buy a solid aftermarket heatsink, slap it on, and BAM, your temps should drop into the 50s under load, which is warm but nothing ridiculous.
For people who like to cool their CPU quietly and/or contribute a bit less to global warming, this was an issue with Prescott and will continue to be an issue with quad core.

Well, my Big Typhoon runs at 1300 RPM and is pretty damn quiet; the only real noise is from the 120mm fans that are there for OC'ing purposes. If you stuck a Big Typhoon on a Prescott/Kentsfield I have no doubt it would handle the heat without a hiccup. Hell, just talking drowns out the Big Typhoon, so it's not like I have a Vornado attached to cool it. I realise they produce a lot of heat, but it's definitely manageable; it just wasn't worth it performance-wise for Prescott, whereas it seems to be worth it for Kentsfield. Different people have different needs: if they want to be the fastest, they should expect to need more elaborate cooling. If they want a silent PC, then they shouldn't expect to have the best proc.
 
I never had a problem with Prescott's heat level; my argument was that between Prescott and AMD you could get the same performance with a cooler chip.

Just FYI: the only Prescotts affected were the initial D0-stepping revisions in 2.8, 3, 3.2, and 3.4 GHz flavors. The reason? Buggy firmware, thermal sensors that were kinda screwy on top of that, and anything over 3.2 GHz required a better heatsink that was actually sufficient to cool the thing. 95% of the complaints about it, however, were due to a defect in the heat pipes of Intel's initial Prescott heatsinks.

Unfortunately, I can't find a URL to back this up anymore, likely because of the hush-hush nature of the issue.
 
SSE was roughly equivalent to 3dNow! in performance.
3DNow! is not equivalent to SSE. It is much closer to MMX than it is to SSE.
Sorry, but you're wrong.
MMX is a SIMD instruction set for processing vector integer code; 3DNow!, by contrast, was aimed at FP vector code.
SSE was Intel's response to 3DNow!. Back then AMD wasn't really able to push its technology into becoming an industry standard, unlike what happened more recently with AMD64.
Back in those days, I wrote a technical article (for an Italian tech website) comparing 3DNow!, SSE and AltiVec (the streaming SIMD set of the PowerPC), which was later referenced by Jon "Hannibal" Stokes of Ars Technica.
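
For anyone who hasn't touched these instruction sets, here's a minimal sketch (using the standard <xmmintrin.h> intrinsics) of what "FP vector code" means in practice. SSE, like 3DNow!, works on packed floats, four at a time in SSE's case, whereas MMX only packs integers:

```cpp
// Scalar vs. packed single-precision addition. SSE processes four floats per
// instruction; a plain loop handles one at a time.
#include <xmmintrin.h>

void add_scalar(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = a[i] + b[i];                       // one float per iteration
}

void add_sse(const float* a, const float* b, float* out, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {                    // four floats per iteration
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
    for (; i < n; ++i)                              // scalar tail
        out[i] = a[i] + b[i];
}
```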
 
Interesting. I did not know that, but I barely even knew what THG was when Prescott was released :?. This is why we have forums: to clear this crap up.

Prescott was crap, though. It performed like crap (due to the added stages in its pipeline) and it brought no new and exciting technologies to the table.

At first, when I bought an AMD Athlon 64, I was disappointed. It lacked multitasking performance, something I had grown accustomed to with the Northwood C processors. It felt like a downgrade, but my games ran faster.

It wasn't until the X2 came out that I was fully into the K8. I loved the efficient design of the K8 as well as the added Multitasking performance the X2 brought to the table.

I'm looking forward to Quad Core.
 
Haha, you made the same move I did, P4C -> A64. However, I did notice an increase in performance, mainly due to OC + RAM. On the P4C I had some really, really bad value RAM and the P4C800 Deluxe mobo, so OC'ing was very limited without droop-modding my mobo (I suck at soldering, lol).

I thought about making the move to the X2, but something wasn't right about it... a feeling that something better was around the bend, and that's when C2D started popping up and really piquing my interest.

Now C2Q is popping up right around the time I might be able to afford a C2D, so now it's crunch time, time to choose. C2Q looks really impressive and I honestly don't mind the TDP at all; I'm going Peltier on it anyway. I really think Intel has done a good job redesigning their product line and basing their consumer-level procs on "the Core". Core Duo made a splash in the laptop segment, posted really nice numbers, and boasted some new features that prior versions lacked, and now with Merom out, I am content to say Intel has an A on my paper (it's written in pencil, don't worry :wink:).

*Raises a glass to the preponderance of cores*
 
Haha, you made the same move I did, P4C -> A64. However, I did notice an increase in performance, mainly due to OC + RAM. On the P4C I had some really, really bad value RAM and the P4C800 Deluxe mobo, so OC'ing was very limited without droop-modding my mobo (I suck at soldering, lol).

I thought about making the move to the X2, but something wasn't right about it... a feeling that something better was around the bend, and that's when C2D started popping up and really piquing my interest.

Now C2Q is popping up right around the time I might be able to afford a C2D, so now it's crunch time, time to choose. C2Q looks really impressive and I honestly don't mind the TDP at all; I'm going Peltier on it anyway. I really think Intel has done a good job redesigning their product line and basing their consumer-level procs on "the Core". Core Duo made a splash in the laptop segment, posted really nice numbers, and boasted some new features that prior versions lacked, and now with Merom out, I am content to say Intel has an A on my paper (it's written in pencil, don't worry :wink:).

*Raises a glass to the preponderance of cores*

I was lucky enough to own an Abit IC7 MAX3 that worked as it should. The front-side bus I was able to achieve with this rig was amazing. At first I had a P4C 3.0 GHz clocked at 4 GHz, and afterwards I upped to a P4C EE 3.2 GHz and had it clocked at 4.16 GHz. Both ran flawlessly and I never succumbed to the Northwood Sudden Death Syndrome.

I then had an AMD Athlon 64 3800+, which I OC'd to 2.8 GHz. It wasn't a bad OC, and I DID notice a performance improvement in my games... but multitasking lagged. That's when I went Athlon 64 X2 4800+ and was happy to hit 3.2 GHz.

I've been supercooling CPUs for a while now. The worst overclocker I ever had was an Asus A8R-MVP motherboard. Total and utter crap... despite whatever the reviews may tell you.

For AMD I stick to DFI now and for Intel I'm keen on Asus (though I've had great luck with Abit, but they've pretty much exited the high end overclocking market).

PS: let me be the first to say there is a noticeable performance difference between a Core 2 and an X2 system. Even with my Core 2 Extreme clocked at 3.6 GHz I was still noticing performance differences under Battlefield 2. Levels loaded MUCH quicker. No more freezing on the geometry after changing video settings. Also, when I'd exit the game the system was instantly responsive, unlike the short while it would lag on the X2 as it dumped the RAM and caches.

It's noticeable... and these are things benchmarks won't show you.
 
I was lucky enough to own an Abit IC7 MAX3 that worked as it should.

Lucky's not the word.... my MAX3 still gives me problems :evil:

Nice article, and good follow-up posts.

I know some peeps had problems.. I never did.. and I tortured my board like you wouldn't believe. lol
 
For all of you guys wondering whether programmers (especially game programmers) are lazy, stupid, ignorant, etc., because they don't seem to keep up with the core explosion out there, here are some thoughts:

Nowadays, to make a profit on a multimillion-dollar triple-A game you have to sell at least 100,000 to 300,000 copies, depending on the budget. You also have to have the latest in graphics, gameplay and physics, or at least as much as possible! The problem is that to make the game sell you don't need to optimize it for the high end, you need to optimize it for the low end. Look at WoW with its six million players, and believe me, the programmers busted their asses making that game run well on Semprons and cheap P4s/Ds, not on high-end FXes or Conroes.

As a developer, what's the point of struggling to make a game take advantage of 4 cores, fighting your way without proper support from compilers, tools and APIs (DX10 is going to be the first Windows graphics API that is multithreaded at its core), and gaining ZERO market share from it? On the other hand, optimizing a game so thoroughly that instead of needing a 3400+ for 30 fps it achieves that easily on, let's say, a Sempron 2200+ (very possible with extensive optimisations) more than doubles your potential market share!!!

Developers do struggle for performance, and when they can optimize something for 5% of the hardware out there they will gladly do it, as 5% of the income can justify the cost of the optimisations. But let's be serious: why on earth would you optimize for a proc that will sell in the hundreds or maybe thousands?

So you freaks who faint when you see your frame rates drop below 120 and rush out to waste a couple of thousand dollars just to get 10 more fps: just chill.

Nowadays the hardware isn't advancing away from the software, it's just taking a 90-degree turn.

The multithreading thing is like a double-edged sword: making an application inherently multithreaded makes it run slower on single-core (old-fashioned) processors, which to this date are still the vast majority.
 
I think I might have you trumped. I used to have an Abit IC7 (not the MAX, I think) and that was my first delve into water cooling. I did a pretty good job setting it up: I bled it, tested it for 4 hours with nothing on, then turned it on and it worked really well. A week or so later one of the metal rings that held the tubing on the pump came off and water went gushing through my system. I managed to find the PSU kill switch quickly, but in the meantime it fried the mobo yet somehow missed the CPU. Don't ask how, because I don't know.

How's that for torturing a mobo? lol.

In any case, I am glad decent people have stuck to this thread, no real flaming going on. Keep up the good work :)
 
I was lucky enough to own an Abit IC7 MAX3 that worked as it should.

Lucky's not the word.... my MAX3 still gives me problems :evil:

Nice article, and good follow-up posts.

I know some peeps had problems.. I never did.. and I tortured my board like you wouldn't believe. lol

Every month or so I have to move my DIMMs to another slot and flash the BIOS or the computer won't boot (no beeps, either :!: :?: ). Just recently it decided to start turning itself off randomly (I know it's not Windows blue-screening: sometimes it does so during POST, and Dr. Watson is of no help). I think it's going senile...

Overclocking it just pisses it off more. Its favorite thing to do is "no POST, no beep." I spend the next 30 minutes unhooking random peripherals and turning the computer on until it POSTs. Once I get it to POST, I plug everything back in and it's fine...

The onboard LAN doesn't work, either. The drivers were horrible and the support was worse. I RMA'd the thing three times before I got what I have now. After three RMAs I figured it wasn't getting any better, so I just kept the board, nurtured it, and cursed behind its back.

When I get my new C2D I'm going to take my MAX3 out back and (happily) put a couple of slugs through it 😀
 
K8L had better be bloody fast at a good price, and they'd better hurry up; things ain't lookin' good.

Dell just bought 2 million+ Athlons, converting a third of its line to AMD.

Intel is cutting jobs by the tens of thousands...

Tell me, what kind of availability do you expect Kentsfield to have?

Meanwhile, Dell's been buying Intel for how long now?
Job cuts? What company doesn't make them from time to time? They're reforming.

I AIN'T ANTI-AMD OR AN INTEL FANBOY, BUT THIS IS BS.
 
For all of you guys wondering whether programmers (especially game programmers) are lazy, stupid, ignorant, etc., because they don't seem to keep up with the core explosion out there, here are some thoughts:

Also add to the mix that many applications simply cannot be multithreaded, and that multithreaded code itself has some associated costs.

E.g. while reference counts, an important programming technique in C++, can use fast, simple code for increments/decrements in single-threaded apps, in multithreaded code such an increment/decrement takes 10-20x longer because it must be atomic.

Another serious issue is that the heap has to be locked.
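
To make that concrete, here's a minimal sketch of the two flavours of reference count (std::atomic is the modern C++ spelling; back then you'd have reached for InterlockedIncrement or the GCC __sync builtins, but the point is the same):

```cpp
#include <atomic>

// Single-threaded refcount: a plain increment/decrement, one cheap ALU op.
struct RefCountFast {
    int count = 1;
    void add_ref() { ++count; }
    bool release() { return --count == 0; }  // true when the object can be freed
};

// Thread-safe refcount: every update is an atomic read-modify-write, which on
// x86 means a locked instruction that is many times slower than a plain add.
struct RefCountAtomic {
    std::atomic<int> count{1};
    void add_ref() { count.fetch_add(1, std::memory_order_relaxed); }
    bool release() { return count.fetch_sub(1, std::memory_order_acq_rel) == 1; }
};
```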
 
For all of you guys wondering whether programmers (especially game programmers) are lazy, stupid, ignorant, etc., because they don't seem to keep up with the core explosion out there, here are some thoughts:

Also add to the mix that many applications simply cannot be multithreaded, and that multithreaded code itself has some associated costs.

E.g. while reference counts, an important programming technique in C++, can use fast, simple code for increments/decrements in single-threaded apps, in multithreaded code such an increment/decrement takes 10-20x longer because it must be atomic.

Another serious issue is that the heap has to be locked.

You're right on the money there. A program needs to be developed as multithreaded from the beginning; it's not something you can add in a patch, as I'm sure you know.

I don't think the average gamer looks at this from a programming perspective. I'm sure multiple threads are useful for physics and maybe the background and ambient music/sound, but games are inherently linear. You can't have thread 1 stall while waiting for results from thread 2, which is in turn waiting for a response from the user. One can mash the fire button as fast as he can, but to the computer it is an eternity between button presses.


ADA.Net anyone? 8O
 
First off, I don't see anything wrong with the C2Q if they get speedstep to work, etc., but the one thing that I didn't see in any of these thread responses is that 4x4 is more than just a dual-socket platform. From what I have read, AMD was looking to 4x4 to do more than just add a socket for a second x2 CPU. They were planning on opening up access to the socket to other vendors. So, in theory, in a few years, you should be able to drop a quad-core AMD cpu into one socket and a monster GPU into the other.

This isn't to say that AMD is hot on the heels of Intel, but it seems that Intel is playing the game and playing it well. They're driving up the core count of a CPU by cramming two C2D's into a single package. It allows them to get to market with a "quad core" CPU faster than AMD, effectively making AMD look like they're left in the dust. It's a good move with AMD just starting 65nm production. AMD will have scaling issues until they catch up with Intel's manufacturing process to be sure. Keep this in mind, though:

Say, for example, that Intel can produce 100 C2D cores on a single wafer. That means that each wafer would be able to produce 50 C2Q CPUs. If AMD can produce maybe 80 X2 cores on a single wafer and they then migrate to a true quad-core, they'll probably drop to ~60 X4 cores on the same wafer. At the end of the day, you've got AMD making 60 marketable chips to Intel's 50...all on the same amount of silicon. That'll have an effect on each company's bottom line...especially if price cuts come again.

Of course, we'll all have to wait to see where this goes, but I don't lose any respect for AMD when I hear that Intel is resorting to packing two existing cores into a package to stay "firmly" ahead and that AMD is looking to a highly innovative solution of drop-in socket components. I am starting to think that AMD is letting Intel try to stay ahead without significant innovation while they're off in the wood-shed looking for something completely different to re-define the marketplace.

Sometimes being the market leader is a bad thing...you get stuck improving on existing profitable designs instead of looking for truly innovative solutions to next-generation issues. It just happened to AMD (or did it...maybe Intel's production maturity WRT 65nm litho is the biggest reason for our current round of leap-frog). The leader gets complacent with scalability and stops driving innovation. Now AMD knows that they're behind the 8-ball scalability-wise until they migrate fully to 65nm and beyond so they're looking for a true next-generation solution. Once they catch up on manufacturing processes, they could easily take the crown with their innovation improvements.

Of course, I'm sure I'll get flamed for this, but I'm truly shocked that folks reamed Intel for gluing two P4's together for the Pentium D series and then they laud them for effectively doing the same thing with C2D to get to C2Q. If a glue-gun was an effective substitute for craftsmanship, I'd have built my own house from the ground up by now.
 
First off, I don't see anything wrong with the C2Q if they get speedstep to work, etc., but the one thing that I didn't see in any of these thread responses is that 4x4 is more than just a dual-socket platform. From what I have read, AMD was looking to 4x4 to do more than just add a socket for a second x2 CPU. They were planning on opening up access to the socket to other vendors. So, in theory, in a few years, you should be able to drop a quad-core AMD cpu into one socket and a monster GPU into the other.
I don't think we are there yet technologically. A "monster" GPU would require faster RAM than the motherboard is going to support. As for other chips, such as a physics chip, 4x4 isn't intended to have enough market share (AMD is marketing 4x4 as an enthusiast platform, not a mainstream one) for a third party to develop a custom chip for it. Intel is planning its own HyperTransport-like technology for 2008. Maybe within a couple of years a common standard can be reached; then we'll see some great offerings from third parties. Until then, I think everything is going to remain on cards so it will work on every platform.

This isn't to say that AMD is hot on the heels of Intel, but it seems that Intel is playing the game and playing it well. They're driving up the core count of a CPU by cramming two C2D's into a single package. It allows them to get to market with a "quad core" CPU faster than AMD, effectively making AMD look like they're left in the dust. It's a good move with AMD just starting 65nm production. AMD will have scaling issues until they catch up with Intel's manufacturing process to be sure. Keep this in mind, though:

Say, for example, that Intel can produce 100 C2D cores on a single wafer. That means that each wafer would be able to produce 50 C2Q CPUs. If AMD can produce maybe 80 X2 cores on a single wafer and they then migrate to a true quad-core, they'll probably drop to ~60 X4 cores on the same wafer. At the end of the day, you've got AMD making 60 marketable chips to Intel's 50...all on the same amount of silicon. That'll have an effect on each company's bottom line...especially if price cuts come again.

Of course, we'll all have to wait to see where this goes, but I don't lose any respect for AMD when I hear that Intel is resorting to packing two existing cores into a package to stay "firmly" ahead and that AMD is looking to a highly innovative solution of drop-in socket components.
As far as "gluing" two dual-core chips together goes, it is the more cost-effective solution. Yields are nowhere near 100%, so Intel can join two known-good dice to make a quad-core. With AMD's "true quad-core", if a single core on the die doesn't pass quality standards they'll have to turn off another core and market it as a dual-core chip (unless they come out with a tri-core line).
The big issue with Intel's solution was whether it was going to bottleneck on the FSB. The benchmarks say that isn't the case, so I tend to think Intel's quad-core may be the more economical solution.
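
A toy yield model makes the economics clearer. The die counts and per-die yield below are invented purely for illustration, and the "whole die must be defect-free" rule is a simplification, but it shows why pairing two known-good dual-core dice beats one large monolithic die while yields are imperfect:

```cpp
#include <iostream>

int main() {
    const double dual_sites = 100.0; // hypothetical dual-core dice per wafer
    const double dual_yield = 0.70;  // hypothetical chance a dual-core die is fully good

    // MCM quad (Kentsfield-style): test the dual-core dice first, then pair
    // any two good ones, so almost every good die ends up in a sellable quad.
    const double mcm_quads = (dual_sites * dual_yield) / 2.0;

    // Monolithic quad: half as many die sites (each die is twice the area),
    // and the whole die must be defect-free, which is roughly yield squared
    // if defects land independently.
    const double mono_quads = (dual_sites / 2.0) * dual_yield * dual_yield;

    std::cout << "MCM quads per wafer:        " << mcm_quads  << "\n"   // ~35
              << "Monolithic quads per wafer: " << mono_quads << "\n";  // ~24.5
}
```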


I am starting to think that AMD is letting Intel try to stay ahead without significant innovation while they're off in the wood-shed looking for something completely different to re-define the marketplace.

Sometimes being the market leader is a bad thing...you get stuck improving on existing profitable designs instead of looking for truly innovative solutions to next-generation issues. It just happened to AMD (or did it...maybe Intel's production maturity WRT 65nm litho is the biggest reason for our current round of leap-frog). The leader gets complacent with scalability and stops driving innovation. Now AMD knows that they're behind the 8-ball scalability-wise until they migrate fully to 65nm and beyond so they're looking for a true next-generation solution. Once they catch up on manufacturing processes, they could easily take the crown with their innovation improvements.
We'll see how AMD comes back when K8L is released. From what I've seen (granted, just some PowerPoint slides), K8L looks competitive with Clovertown. By the time it's released, Clovertown might have been scaled up to surpass K8L, but AMD might beat it on price. We have to play the wait-and-see game here, since there is nothing tangible to base firm opinions on, but AMD makes good products and I'm sure their next platform will be a worthy one.

Of course, I'm sure I'll get flamed for this, but I'm truly shocked that folks reamed Intel for gluing two P4's together for the Pentium D series and then they laud them for effectively doing the same thing with C2D to get to C2Q. If a glue-gun was an effective substitute for craftsmanship, I'd have built my own house from the ground up by now.

The basic problem everyone had with Intel gluing two P4s together was that it was nowhere near AMD's solution in price, performance or power use. People blamed this on the "gluing" of the chips and the shared bus, but the problem turned out to be that NetBurst sucked, not the "gluing". I think C2D and Kentsfield prove this. Granted, the lack of an IMC will be a problem in future platforms, but Intel is incorporating one in their next platform upgrade, around 2008.
 
Of course, I'm sure I'll get flamed for this, but I'm truly shocked that folks reamed Intel for gluing two P4's together for the Pentium D series and then they laud them for effectively doing the same thing with C2D to get to C2Q.
It’s the difference between gluing two turds together and gluing two chocolate muffins together. The first you put in the bin, the second in your mouth.
 
Of course, I'm sure I'll get flamed for this, but I'm truly shocked that folks reamed Intel for gluing two P4's together for the Pentium D series and then they laud them for effectively doing the same thing with C2D to get to C2Q.
It’s the difference between gluing two turds together and gluing two chocolate muffins together. The first you put in the bin, the second in your mouth.

I'm never coming to your house for dinner.