AMD Quadfather 4x4 vs. Intel Kentsfield

I'd go further and say that one can guarantee that AMD will retake the CPU crown. Not with 4x4, however, and perhaps with K8L. But in due time they will retake the crown, only to lose it once more. That's how this industry works. Intel really slipped with Prescott/Pentium D, the same way NVIDIA slipped with the GeForce FX.

The question is: will Intel make a NetBurst-like mistake again in the future?
 
The question is: will Intel make a NetBurst-like mistake again in the future?

One of the two companies is bound to make this kind of mistake in the future. Intel's problem with NetBurst was that they couldn't achieve the clock speeds needed with a decent power/heat ratio. Intel planned to scale the NetBurst line to 10 GHz, but then found out that it wasn't going to happen with the architecture they had in place.

When releasing a platform line, a company has to assume that it will scale, and that the obstacles one faces now can be overcome. When working with designs at these tiny scales, you don't realize which limitations are going to be insurmountable until it's too late. Having to redesign an architecture is a process that takes years, not weeks, which is why Intel had to stick with Prescott for as long as it did.
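The power wall behind that 10 GHz retreat is easy to put rough numbers on: dynamic CMOS power scales roughly as P = C × V² × f, and higher clocks usually also demand higher voltage. A quick sketch with made-up constants (illustrative only, not real Prescott figures):

```python
# Rough sketch of why frequency scaling hits a power wall.
# Dynamic CMOS power is roughly P = C * V^2 * f; higher clocks also
# tend to require higher voltage, so power grows faster than linearly.
# All constants here are illustrative, not real Prescott figures.

def dynamic_power(freq_ghz, base_volts=1.2, volt_per_ghz=0.05, cap=20.0):
    """Relative dynamic power; voltage is assumed to rise with clock."""
    v = base_volts + volt_per_ghz * freq_ghz
    return cap * v**2 * freq_ghz

p3 = dynamic_power(3.0)    # a ~3 GHz shipping part
p10 = dynamic_power(10.0)  # the planned 10 GHz target
print(f"10 GHz draws {p10 / p3:.1f}x the power of 3 GHz")
```

Even with these gentle assumptions, more than tripling the clock multiplies the heat output several times over, which is the ratio problem described above.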
 
I'd go further and say that one can guarantee that AMD will retake the CPU crown. Not with 4x4, however, and perhaps with K8L. But in due time they will retake the crown, only to lose it once more. That's how this industry works. Intel really slipped with Prescott/Pentium D, the same way NVIDIA slipped with the GeForce FX.

The question is: will Intel make a NetBurst-like mistake again in the future?

They seem to have gotten the message, and their new CEO seems adamant about moving Intel in a different direction. In many ways Intel is looking more and more like AMD. Their strategies seem to be in line.

Of course, Intel does have the engineering expertise, the funds, and the manufacturing prowess, as well as CEO Otellini; if all of these come together successfully, they may be able to truly revolutionize personal computing within the next decade. If not, then evolutionary products will keep popping up for the foreseeable future, with Intel and AMD exchanging blows.

One thing's for sure: Otellini seems determined to change Intel's prior business practices.
 
Honestly, I completely expect K8L to rock the world...

What features of K8L are producing the most anticipation?

There are really two things I see in K8L that are going to drive overall improvements for the industry.

The first (in no particular order of priority) is the extra decoders, which will help improve IPC and keep the pipeline full; this is sorely needed against C2D, which has really excelled here.

The second is the extra nodes in the HT links; this will really open up the architecture to numerous possibilities.

There is a list of 14 or so major technologies, or bullet points, that AMD touts. A few are follow-ons to Intel's C2D (not that AMD is copying; most likely they were working on the same concepts at the same time). Most seem to be about bandwidth, which puzzles me slightly because bandwidth is not the issue with the current K8 architecture. Some are IPC-related, and this is where the work needs to get done.

Jack

It remains to be seen whether AMD will be able to add all the features they are planning to include in K8L while keeping the thermal output down. What they intend to do is a pretty big overhaul of the K8 microarchitecture. I personally hope they will deliver, but sadly I do not have faith in AMD's ability to deliver on time or on target (features, thermal output, electrical draw, etc.), so I guess we'll have to wait and see.
 
When releasing a platform line, a company has to assume that it will scale, and that the obstacles one faces now can be overcome. When working with designs at these tiny scales, you don't realize which limitations are going to be insurmountable until it's too late. Having to redesign an architecture is a process that takes years, not weeks, which is why Intel had to stick with Prescott for as long as it did.

Intel had a potentially killer desktop platform during all those dumb Prescott years. All they needed to do was rebrand the Pentium M for the desktop.

I believe that behind Intel's failure lies stupid marketing rather than engineering. They touted the GHz myth, and they wanted Itanium, so they had to keep Prescott.
 
Honestly, I completely expect K8L to rock the world...

What features of K8L are producing the most anticipation?

There are really two things I see in K8L that are going to drive overall improvements for the industry.

The first (in no particular order of priority) is the extra decoders, which will help improve IPC and keep the pipeline full; this is sorely needed against C2D, which has really excelled here.

The second is the extra nodes in the HT links; this will really open up the architecture to numerous possibilities.



Is this enough to rock the world? Seriously, from what I have seen so far, it seems unlikely to me that K8L will surpass Core 2 in performance.
 
I think K8L is already in trouble. There isn't an answer to C2D, and now it looks like 4x4 is destroyed by Kentsfield. Plus, Kentsfield is coming out before 4x4 and kills it. K8L is not even going to be used, IMO.

Opening the architecture to numerous possibilities is not an advantage unless you can actually make use of it. I don't see buying a graphics card company for $5 billion as the most effective use of this advantage. How many years will it take to make use of it?

Seriously, as soon as my stockbroker offers options trading, I'll consider buying a lot of AMD put options expiring in 2008...
 
I think K8L is already in trouble. There isn't an answer to C2D, and now it looks like 4x4 is destroyed by Kentsfield. Plus, Kentsfield is coming out before 4x4 and kills it. K8L is not even going to be used, IMO.

Opening the architecture to numerous possibilities is not an advantage unless you can actually make use of it. I don't see buying a graphics card company for $5 billion as the most effective use of this advantage. How many years will it take to make use of it?

Seriously, as soon as my stockbroker offers options trading, I'll consider buying a lot of AMD put options expiring in 2008...


The stock market is funny because of terrorists, gas prices, and uninformed investors. I recommend diversifying your stocks.

Well, I guess terrorists and rising gas prices just make put options more attractive... :)
 
I think K8L is already in trouble. There isn't an answer to C2D, and now it looks like 4x4 is destroyed by Kentsfield. Plus, Kentsfield is coming out before 4x4 and kills it. K8L is not even going to be used, IMO.

Opening the architecture to numerous possibilities is not an advantage unless you can actually make use of it. I don't see buying a graphics card company for $5 billion as the most effective use of this advantage. How many years will it take to make use of it?

Seriously, as soon as my stockbroker offers options trading, I'll consider buying a lot of AMD put options expiring in 2008...


The stock market is funny because of terrorists, gas prices, and uninformed investors. I recommend diversifying your stocks.

Well, I guess terrorists and rising gas prices just make put options more attractive... :)

You are correct, but you need to find the stocks that don't crash because of terrorists and gas prices... So far Intel has been crashing and hovering around 18 dollars "for those reasons," as far as I can tell.

Well, I am new to the stock market, but I believe that to maximize the effect of *put* options, you should actually *prefer* stocks that crash as much as possible and exercise the options at the time of the crash.

(BTW, Intel is at 19.75 now :)
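The put mechanics being debated here reduce to simple payoff arithmetic: a put pays max(strike − spot, 0) per share at exercise, minus the premium paid up front. The strike and prices below are made-up illustrations, not a trade recommendation:

```python
# Profit of a bought put at exercise: intrinsic value max(strike - spot, 0)
# per share, minus the premium paid. All numbers are illustrative only.

def put_profit(strike, spot_at_exercise, premium):
    """Profit per share from buying one put and exercising it."""
    intrinsic = max(strike - spot_at_exercise, 0.0)
    return intrinsic - premium

# The deeper the stock crashes below the strike, the larger the profit,
# which is why a put buyer prefers the stocks that fall the most.
print(put_profit(20.0, 15.0, 1.5))  # stock crashed to 15: profit 3.5/share
print(put_profit(20.0, 22.0, 1.5))  # stock rose: lose only the 1.5 premium
```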
 
When releasing a platform line, a company has to assume that it will scale, and that the obstacles one faces now can be overcome. When working with designs at these tiny scales, you don't realize which limitations are going to be insurmountable until it's too late. Having to redesign an architecture is a process that takes years, not weeks, which is why Intel had to stick with Prescott for as long as it did.

Intel had a potentially killer desktop platform during all those dumb Prescott years. All they needed to do was rebrand the Pentium M for the desktop.

I believe that behind Intel's failure lies stupid marketing rather than engineering. They touted the GHz myth, and they wanted Itanium, so they had to keep Prescott.

Again with the Itanium-to-desktop crap. Explain to me how on earth Intel was going to move the entire x86 market over to IA-64, how on earth we as consumers were going to afford the processor, and how on earth Intel was going to get every programmer out there to use the compiler and code better.
 
I believe that behind Intel's failure lies stupid marketing rather than engineering. They touted the GHz myth, and they wanted Itanium, so they had to keep Prescott.

Again with the Itanium-to-desktop crap. Explain to me how on earth Intel was going to move the entire x86 market over to IA-64, how on earth we as consumers were going to afford the processor, and how on earth Intel was going to get every programmer out there to use the compiler and code better.

Not sure whether this question is addressed to me... Personally, I consider Itanium a major failure (both engineering and marketing) and one of the most important causes of Intel's troubles (maybe even the single biggest one).

Anyway, back then, Intel's strategy was to occupy the high end with Itanium. They simply were not interested in developing an x86 CPU that would compete with Itanium.

So they developed an architecture targeted at multimedia processing (which NetBurst really was good at) but not so good at server tasks.

The plan failed, however, because in-order architectures, explicitly parallel or not, cannot really compete with out-of-order ones, so Itanium was not as good a server CPU as they wished.
 
Itanium was never intended for the desktop. It was intended for back-end servers, where it can serve secure financial transactions or science apps. It has very strong encryption hardware built in and a superior FPU. If every processor needed to be brought to the desktop, then ask yourself why they don't bring SPARC and PA-RISC to the desktop.
 
Intel might have underestimated AMD because of the K6's mishaps in design and production. Intel was sleeping until they got the wake-up call from AMD two years ago. Guess what: they delivered the best CPU to compete with AMD.
 
Itanium was never intended for the desktop.

Itanium was meant to be a very high-performance chip. Releasing a desktop CPU that beat Itanium on performance would have made the "very high performance" claim hard to defend.

(BTW, this is the comparison I would like to see: a 2-core Itanium 2 vs. Woodcrest in back-end server tasks. I believe Woodcrest would win despite being laid out on 1/3 of the die size :).

It was intended for back-end servers, where it can serve secure financial transactions or science apps. It has very strong encryption hardware built in and a superior FPU.

You can add a superior FPU and strong encryption hardware to any architecture...
 
Itanium is also scalable. For example, they built a supercomputer with 7,000 Itanium 2 processors for the Livermore lab within six months. Bringing Itanium to the desktop would not be a viable option in a market where x86 apps dominate. Itanium's purpose is to compete with other back-end servers: PowerPC, PA-RISC, and Sun.
 
Not sure whether this question is addressed to me... Personally, I consider Itanium a major failure (both engineering and marketing) and one of the most important causes of Intel's troubles (maybe even the single biggest one).

How is it a failure? Because it didn't get into the SOHO market, like everyone and their dog thinks Intel wanted? Or maybe because its performance comes from compilers? Or because of the sheer size of the processor and the cost to manufacture it?

Seriously, look at the whole picture and not with the tunnel vision that tends to go with our type of thing. The processor is very much a work in progress; its performance numbers prove that it can deliver, but it requires very heavy compiler usage and smart programming to get that performance.

Those are the drawbacks of the current form of the Itanium, and yes, that does suck in the grand scheme of things. But as I said, it's a work in progress; the fact that it hasn't gotten into the SOHO market makes it look like a failure, but that doesn't mean it is one.

Regardless of any current thoughts on the matter, Mitosis appears to be the technology's near-term goal. Whether or not it will live up to the performance Intel hopes it will attain is up in the air, or, more realistically, in an Intel development lab getting worked on.

Anyway, back then, Intel's strategy was to occupy the high end with Itanium. They simply were not interested in developing an x86 CPU that would compete with Itanium.

x86 doesn't hold a candle to IA-64 in performance, so there would never be a situation where IA-64 would be competing with x86 for performance or market share.

The plan failed, however, because in-order architectures, explicitly parallel or not, cannot really compete with out-of-order ones, so Itanium was not as good a server CPU as they wished.

Implementation aside, OOO processors vs. EPIC isn't a meaningful comparison because, as I stated, each implementation is vastly different.

As for the "not as good a server CPU" comment: last I checked, when any processor runs specifically compiled code, it generally runs exceptionally well. This does not change for the Itanium, which in the current 9000 series holds its own against Power, SPARC, Xeon, or Opteron for that matter. It's just unfortunate for Intel that IA-64 requires heavy compiler usage to produce those jaw-dropping numbers.
 
Itanium was never intended for the desktop. It was intended for back-end servers, where it can serve secure financial transactions or science apps. It has very strong encryption hardware built in and a superior FPU. If every processor needed to be brought to the desktop, then ask yourself why they don't bring SPARC and PA-RISC to the desktop.

I am pretty sure that was the point I was trying to make.
 
(BTW, this is the comparison I would like to see: a 2-core Itanium 2 vs. Woodcrest in back-end server tasks. I believe Woodcrest would win despite being laid out on 1/3 of the die size.)

Itanium 2 9000 series (transistor counts):
core logic — 57M, or 28.5M per core
core caches — 106.5M
24 MB L3 cache — 1550M
bus logic & I/O — 6.7M

Xeon 5100 series (transistor counts):
core logic — 43M, or 21.5M per core
core caches — 248M
L3 cache — N/A
bus logic & I/O — unknown

Die size is irrelevant; they add more cache to the Itanium because of its implementation. And no: if the I2 and the Xeon 5100 were both given optimized binaries, the I2 would win in server and data-set tasks. Since no games, productivity, or content-creation software has been ported over, it's unknown how well the I2 would fare in that type of software environment, but I would bet quite a bit that it would fare very well.

You can add a superior FPU and strong encryption hardware to any architecture...

No, you implement those features within the specifications of the ISA; tied to a good design, you will then get superior FPU and encryption performance.
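Taking the transistor counts quoted earlier in this post at face value, a quick tally shows how heavily the Itanium 2's budget is weighted toward cache (the Xeon bus/I/O figure was listed as unknown, so it is left out):

```python
# Total up the transistor budgets quoted in the post (in millions).
# The Itanium 2 9000's budget is dominated by its 24 MB L3 cache.
itanium2 = {"core logic": 57, "core caches": 106.5, "L3 cache": 1550, "bus/IO": 6.7}
xeon5100 = {"core logic": 43, "core caches": 248}  # bus/I/O count unknown

i2_total = sum(itanium2.values())
cache_share = (itanium2["core caches"] + itanium2["L3 cache"]) / i2_total
print(f"Itanium 2 total: {i2_total}M transistors, {cache_share:.0%} in caches")
```

By these figures roughly 96% of the Itanium 2's transistors sit in caches, which is the context for the "die size is irrelevant" argument above.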
 
Itanium is also scalable. For example, they built a supercomputer with 7,000 Itanium 2 processors

That is a cluster. You can "scale" any CPU that way.

To make that point a bit clearer: the Itanium scales linearly as you add additional CPUs, unlike x86 and most RISC processors. There are exceptions, such as the K8, which scales very well but is currently stuck at 8-way setups (correct me if I am wrong).
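"Scales linearly" is the ideal case; in practice, Amdahl's law caps multiprocessor speedup by whatever fraction of the work is serial. A minimal sketch of that limit (generic, not tied to any particular CPU):

```python
# Amdahl's law: speedup with n processors when a fraction s of the
# work is inherently serial. s = 0 is the perfectly linear ideal.
def speedup(n, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

print(speedup(8, 0.0))   # ideal 8-way scaling: 8.0
print(speedup(8, 0.05))  # just 5% serial work already drags it down
```

Even a small serial fraction pulls an 8-way box well below 8x, which is why "scales linearly" claims deserve scrutiny regardless of the architecture.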
 
NetBurst and Core are IA-32 (the x86 family); Itanium is IA-64 (EPIC). Itanium adoption was slow in early 2004 and before; since then, Itanium has started gaining ground. HP's CEO, Mark Hurd, committed $3B for Intel to develop the next-generation Itanium. Failure or success depends on where you look from. As long as Itanium volume keeps growing from one quarter to the next, I consider it a success.
 
Itanium is also scalable. For example, they built a supercomputer with 7,000 Itanium 2 processors for the Livermore lab within six months. Bringing Itanium to the desktop would not be a viable option in a market where x86 apps dominate. Itanium's purpose is to compete with other back-end servers: PowerPC, PA-RISC, and Sun.

Realistically speaking, today's Itanium processors are the world's fastest, and not by a small margin either. Just one of the Montecito cores, running at 1.6 GHz, can sustain about 8 GFLOPS on average, peaking much higher.

Now, these processors are a dual-core Madison design (that's what Montecito is), so 16 GFLOPS per processor in a 4-way system means 64 GFLOPS (though it usually averages about 50 GFLOPS). To compare, a 4-way IBM POWER5 configuration clocked at 1.9 GHz barely reaches 34 GFLOPS.

So realistically speaking, Itanium has no competitors; it's pretty much the lone standing uber-high-end processor.
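The per-system throughput claimed above follows from straightforward multiplication of the poster's own figures:

```python
# Sanity-check the per-system FLOPS arithmetic quoted above.
gflops_per_core = 8   # sustained per-core figure from the post
cores_per_cpu = 2     # Montecito is dual-core
cpus_per_system = 4   # 4-way configuration

per_cpu = gflops_per_core * cores_per_cpu
per_system = per_cpu * cpus_per_system
print(per_cpu, per_system)  # 16 64
```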
 
(BTW, this is the comparison I would like to see: a 2-core Itanium 2 vs. Woodcrest in back-end server tasks. I believe Woodcrest would win despite being laid out on 1/3 of the die size.)

Itanium 2 9000 series (transistor counts):
core logic — 57M, or 28.5M per core
core caches — 106.5M
24 MB L3 cache — 1550M
bus logic & I/O — 6.7M

Xeon 5100 series (transistor counts):
core logic — 43M, or 21.5M per core
core caches — 248M
L3 cache — N/A
bus logic & I/O — unknown

Die size is irrelevant; they add more cache to the Itanium because of its implementation.

An in-order design requires those large caches because any miss causes a complete pipeline stall. An OOO design is capable of hiding some miss latencies, so cache size is less critical.
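That stall argument can be illustrated with a toy cycle count: an in-order core eats the full miss latency every time, while an out-of-order core overlaps part of it with independent work. All numbers here are illustrative, not measurements of any real CPU:

```python
# Toy comparison of total cycles for an instruction stream with
# occasional cache misses. In-order: every miss stalls the pipeline
# for the full latency. OOO: independent work hides part of each miss.
# All numbers are illustrative, not measurements of any real CPU.

def run_time(instructions, misses, miss_latency, hidden_fraction):
    """Cycles to finish, assuming 1 instruction/cycle when not stalled."""
    stall = misses * miss_latency * (1.0 - hidden_fraction)
    return instructions + stall

in_order = run_time(1000, 20, 200, hidden_fraction=0.0)  # stalls fully
ooo      = run_time(1000, 20, 200, hidden_fraction=0.6)  # hides 60% of latency
print(in_order, ooo)  # 5000.0 2600.0
```

Larger caches attack the same problem from the other side, by reducing the miss count itself, which is why the in-order Itanium leans so heavily on its huge L3.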