Tolliman X3 gets a name

Intact means a die with four working cores. The clockspeed is hampered by AMD's 65nm process.

We don't know that. The process could be just fine. Remember Intel's 90 nm process? Everybody said it sucked because the Pentium 4 topped out at 3.80 GHz and the Pentium D 800 at 3.20 GHz. But the process wasn't the problem - it was Prescott's architecture. The Pentium M Dothans were made on the exact same process and were very good CPUs.

I assume that you're saying the clockspeed is hampered because the fastest chips AMD sells today are on 90 nm rather than 65 nm. There are several theories as to why this is true, and none of them has to do with the process itself. The most commonly stated reason (and the one that is probably the biggest factor) is that the Opterons are on 90 nm. AMD thought they could get Barcelona out the door quicker than they did and thus didn't go through the trouble of making and *certifying* a larger-L2 65 nm Opteron die. Thus, the Opterons were stuck on 90 nm and AMD wanted to squeeze as much performance out of them as possible. That meant they had to keep the 90 nm line open, so why not run more Athlon 64 X2 wafers through it and get about the same mix of chips as the Opterons, which means high-clocked, 2x1 MB L2 chips. There are two 65 nm lines, one for K8s and the other for Barcelonas and Phenoms. The K8 line is likely AMD's "price war" and mobile line, tuned for maximum yields, low power, and maximum cost efficiency instead of maximum speed (the 90 nm chips handle that). Who knows exactly how the 10h line has been tuned, but I *highly* doubt the process is incapable of spitting out chips faster than what the 90 nm lines produce if you gave them similar masks.


I know that Chartered is not receiving orders from AMD as of now. I'm not too familiar with the fabrication process. I'll dig a little more to answer this.

Perhaps Chartered isn't able to produce the chips, or can't produce them economically enough for AMD to want them to help out. The Barcelona is a large chip at nearly 300 mm^2, and a relatively large (compared to Intel) portion of that is critical logic and not just easy-to-disable cache. AMD supposedly is second only to Intel in fab technology, and if AMD is having teething pains with getting the chip going, then I am sure Chartered won't do any better. It would be interesting to see why AMD might not be outsourcing Brisbane/Tyler production to Chartered, freeing up more of their own 65 nm capacity for that usage.
 
The problem with using a foundry is that you have to place orders in advance, whether the chips sell or not. And usually those orders are not small, but pretty large. Once the order is placed, it cannot be canceled easily.
Once AMD does have its K10 line running at a level acceptable to fulfill orders, I'm sure they will hit up Chartered to run more of their other products.
 
I don't know if they are guilty of overpricing them at launch; I think it's more of a supply/demand/R&D issue, which is why the prices tend to drop afterwards. Supply is normally low, demand is high in comparison, and they are also trying to recoup R&D costs. Early adopters of ALL technology normally pay for that.

Edit: the high-end parts are also normally harder to bin at first launch. It takes more time and money to find the parts that clock higher than average. Feel free to correct me if I am wrong here.
 
this thread has gotten way off topic.

MY PROCESSOR IS BETTER THEN YOURS!!!11 AND YOU'RE WRONG AND GOING TO HELL FOR NOT PICKING MY PROCESSOR!!!


seriously, thank you baron for giving out the info about the new AMD CPU names (I hate the fact they keep changing the names like this; it's just like when Intel changed their CPU numbering schemes and I just couldn't get the hang of it)

Anyway, you're all wrong for arguing this because: CYRIX > Intel and AMD. :kaola:

Later.
 
DELETED. Who freaking cares if Intel introduced the C2Ds lower than the Pentium D? Only YOU. Everyone else liked the lower price but you. AMD didn't have to lower its prices, but they did - BEFORE C2D was released, and with no new products being released. WHY?!? Why didn't they keep their prices up? I didn't see you complaining when the 6400+ was released at such a low cost. Why not? Why shouldn't it have been priced at $700, and not $350?

Here's a suggestion - Intel makes their C2D CPUs for $100. They price them according to demand, and to help regain market share. Well, that's freaking amazing. They don't need to price what they consider their mainstream CPU at over $1k just because you think they should. Is Intel losing money? Are they hurting themselves by pricing their products at the prices they believe are good for them?

Quit crying about the same thing over and over. Whose fault is it that your 4400+ isn't worth $250? Intel's? Please. Next time, get a new thing to cry about.


AMD didn't drop prices until Intel released its pricing on C2D. They did it to erect a barrier to entry. PERIOD!!! I hate that the CPU market is now undervalued. PERIOD!!
 
AMD didn't drop prices until Intel released its pricing on C2D. They did it to erect a barrier to entry. PERIOD!!! I hate that the CPU market is now undervalued. PERIOD!!

Undervalued? To who? You?

Did you want the days of mid-range $500 CPUs again? Then go ahead and pay those prices. You are the only one.

Intel is raking in billions at the prices they have for their products. So, how exactly is that bad for them? Erect a barrier for whom?

AMD did drop prices before Core 2, not after -
Ars Technica article

DELETED
AMD dropped theirs before the Core 2 launch.
 
You make it sound like no one could afford a PC when the 3800+ was $300.

.....I'm sorry, but I guess you're the only person who can afford a PC when processor prices are sky high, and still be very very happy about it.

Now go ahead then, buy a pair of FX-74s, and be happy about it. Isn't that what you've always wanted - extremely expensive processors?


EDIT:
Baron, I know it's been a long day, and you're tired. Just let it go and get some rest. You're not even making any sense at the moment, and you're embarrassing yourself.
 
.....I'm sorry, but I guess you're the only person who can afford a PC when processor prices are sky high.

I wonder what the price/performance ratio was when the 3800+ was $300... it wouldn't be pretty at all.

And I wonder what AMD's prices at some point for some processor have to do with this thread whatsoever.
All companies sell the first batch of technological products for insane money, the best example being the 8800 Ultra; it's as logical as it is predictable, so why did you bring it up again?

It's good that AMD chose better names for the CPUs; now let's hope the next revisions can squeeze a bit more performance out of Barcelona, just like with Core 2.
 
The baron has made some good points.

Thanks for the info MU ... I found that pretty interesting.

We can cut back a bit on the venom guys eh ??

The X3 is essentially a good idea ... I am sure we will get some benefit out of it.

Why get so upset because AMD's process gives us 2, 3, or 4 cores from the yield ??

I'll be upgrading my gaming rigs so I hope the new processors are value for money ... I am sure Intel will carefully place their dual and quad cores in pricing against the triple cores.

Some marketing people on both sides will be earning their money ... heh heh ... trying to position products.

Check and compare Intel's best 90nm dual core product against AMD's 6400+ 90nm chip... AMD wins on power and thermals ... sounds to me like AMD produces a better product than Intel on a given process at the end of a ramp :)

I think AMD is having much more strife with the 65nm process.

Hope they get that right ... 2.6GHz so far isn't great - obviously the litho work on the Brisbane core has been put aside for Barcelona / Phenom now.

That extra core on the triple could be handy ...
 
As for games, we won't see the need to switch to x64 for a while. Most current games are built without 64bit support (BF2142 for instance). Assuming that games do need to take advantage of x64 in 2009, AMD64 is early by.... 6 years, for desktop users. As for server applications, AMD64 indeed helped them boost performance without resorting to RISC architectures. In a sense, AMD64 did help transform the industry.

Therefore, I was wrong to say that AMD64 was completely ahead of its time. I apologize for it.

I still do not see the need to implement AMD64 in desktop applications though.

... Well I can say something about this. OK, take Supreme Commander, add Vista 32bit plus a large map and lots of units, and watch the fireworks. Oh wait .... CRASH... lol. In order to run Supreme Commander on Vista you either need the patch (currently not being delivered by Microsoft's Automatic Updates) or you use Vista 64bit, as Supreme Commander by itself uses 2GB+ :) So much for 32bit OSes, huh. We have already reached the barrier, and I don't think people are going to be happy with 32bit in the coming year(s) when they want 4GB+ of RAM and find that 4GB minus video card memory is all they get in Vista, or 2GB in XP.

So from my point of view Vista should have been 64bit only. 64bit is now not later.
 
Your point?

I do not know why Intel priced their Pentium EE 840 at 1030USD, but simply claiming that Intel is using aggressive pricing based on the fact that Core 2 Duo costs less than Pentium D is moot. Core 2 Duo has fewer transistors than Pentium D, so why should it cost more? (Conroe: 291mil, Presler: 376mil)

The Pentium D 840EE is not a Presler, it's a Smithfield. It's on 90 nm, not 65 nm like the Presler and Conroe are. The Smithfield has 230 million transistors and 206 mm^2 of silicon. One thing that you may not know is that it is a monolithic die. The Smithfield consists of two Pentium 4 Prescott 1M dies that were next to each other on the wafer and are cut out of the wafer still connected. So it's a monolithic piece of silicon with two electrically-discontinuous dies on it. That was the absolute worst-case option, as Intel had the disadvantages of the monolithic die (lower yields) and the disadvantages of the MCM (FSB) and none of the advantages. It's not surprising that Intel charged $1030 for the top bin, as the thermal limits made binning at that speed hard, and the die was nearly as large as the 2x1 MB L2 Toledo X2 die, which was selling for even more than $1030 at the top bin (wasn't the X2 4800+ something like $1200 at intro?)

The Core 2 Duo costing less than the Presler Pentium D 900 is a function of marketing, not engineering. Conroe is a single 144 mm^2 die, Presler is an MCM of two Pentium 4 6x1 81 mm^2 dies. Thus, Presler should be easier to get good yields on, just as Kentsfield should be easier to get good yields on than Barcelona as the former is a two-die MCM and the latter is a monolithic die.
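To put some rough numbers on why a large die is harder to yield, here's a quick back-of-the-envelope sketch using the classic Poisson defect-density yield model. The die sizes are the figures tossed around in this thread; the defect density is a made-up illustrative number, not anything AMD, Intel, or Chartered has published.

```python
# Rough illustration of why one big monolithic die yields worse than the
# small dies an MCM is built from. Uses the simple Poisson model
# Y = exp(-A * D0). D0 below is purely illustrative.
import math

def poisson_yield(area_mm2, defects_per_mm2):
    """Fraction of dies with zero defects under a Poisson defect model."""
    return math.exp(-area_mm2 * defects_per_mm2)

D0 = 0.005              # assumed defects per mm^2 (illustrative only)
barcelona = 283.0       # monolithic quad-core die, mm^2 (figure from this thread)
conroe = 144.0          # one Conroe-class die, mm^2; Kentsfield packages two of them

print(f"{barcelona:.0f} mm^2 monolithic die: {poisson_yield(barcelona, D0):.1%} yield")
print(f"{conroe:.0f} mm^2 single die:       {poisson_yield(conroe, D0):.1%} yield")
# With an MCM the two small dies are tested and binned *before* packaging,
# so a defect only throws away ~144 mm^2 of silicon instead of ~283 mm^2.
```

With that made-up D0 it works out to roughly 24% vs. 49% per die, which is the kind of gap being argued about above.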


AMD is a year late in releasing quad core. Guess what: while AMD was busy trying to get Barcelona to work, Intel had already shipped 1 million quad cores by June. If we divide that by market share, AMD would've sold at least 250,000 quad cores and saved their bleeding company.

AMD is a year later than Intel in releasing quad-core CPUs. Yes, I am sure that AMD would have sold quite a few if they'd had them ready, but it takes more time to prepare a new CPU than it does to bolt two existing ones together. Remember that Barcelona is not simply a quad-core K8 but a whole new architecture. That no doubt takes time to work out as well. Perhaps AMD bit off a bit more than they should have tried to chew in one bite with the new arch and a new core design. Intel at least had a "dry run" with the Yonah Core Duo putting two Pentium M cores in a new monolithic dual-core design before they did the P6/NetBurst -> Core architecture change.

SIA didn't establish the fabrication process schedule, but as AMD's main competitor moves to 45nm, AMD would be at a financial disadvantage if it didn't implement 45nm. With a die size of 283mm^2, how many more dies can AMD produce if they move to 45nm? Considering the current yield of Barcelona, wouldn't it make sense to move to 45nm?

Uh, twice as many per wafer: going from 65 nm to 45 nm scales linear dimensions by approximately 1/sqrt(2), giving an area of (1/sqrt(2))^2, which is 1/2. That is why the figures of 90 nm, 65 nm, and 45 nm were chosen - each shrink yields a 50% reduction in the area required for a given number of transistors.
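If it helps, the same arithmetic in a few lines of Python (the 45 nm die-size figure at the end is just the ideal scaling applied to the ~283 mm^2 number quoted above, ignoring pad rings, yield, and everything else):

```python
# Each full node shrink scales linear dimensions by ~1/sqrt(2), so area
# (and roughly dies per wafer) changes by a factor of ~1/2.
nodes_nm = [90, 65, 45]
for old, new in zip(nodes_nm, nodes_nm[1:]):
    linear = new / old
    print(f"{old} nm -> {new} nm: linear x{linear:.2f}, area x{linear**2:.2f}")

barcelona_65nm_mm2 = 283.0   # die size quoted earlier in the thread
print(f"Ideal 45 nm shrink: ~{barcelona_65nm_mm2 * (45 / 65) ** 2:.0f} mm^2")
```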

Doug Freedman has already expressed his opinion on this, calling Barcelona a "mismatch with AMD's current 65nm".
An analyst said that AMD's quad-core "Barcelona" Opteron processor is an "architectural mismatch" with the 65-nanometer process it is being built on. "We believe the company's late Barcelona introduction and disappointing early performance are an early indication of a bad marriage of process technology and design that will be hard to fix before a move to 45-nm is required," said analyst Doug Freedman of American Technology Research, in a report cited by several news outlets Friday.
http://www.x86watch.com/articles/amd_barcelona_process_mismatch_120.html

I'm sure he won't be the last one expressing that opinion.

Barcelona will be big, but it will apparently yield well enough to be salable. Most importantly, TDPs are reasonable. If you want to talk about a terrible architectural mismatch, it is the Pentium D 800s. Intel never should have released any of them until the Presler came out on 65 nm and had non-laughable clock speeds on TDPs that didn't leave CPU sockets glowing.

Let's see... AMD and Intel both developed their x86-64, but AMD got it working first. Intel then sued AMD for not sharing the technology under their license agreement.

You must be confusing x86_64 and x86. Intel developed the x86 ISA for the 8086 in 1978. AMD at the time was contracted by Intel to second-source their processors. The lawsuit you refer to was when Intel contracted with AMD to produce the 80386 such that IBM would consider the 80386 over the Motorola MC68000 series CPUs. At the time, CPU makers were small enough that no large vendor would buy from only one- they required a second source to ensure enough supply to meet demand. Intel didn't want to give AMD the new 32-bit i386 ISA, so they tried to back out of the contract. AMD sued for breach of contract and won a sum of money as well as the famous x86 cross-licensing agreement.

x86_64 was developed by AMD with no input from Intel. Intel's implementation was reverse-engineered from AMD's specifications due to market pressure. Intel had a stance at the time x86_64 was introduced that nobody needed 64 bits on the desktop and that they should be using 64-bit Itaniums for workloads that needed 64-bit addressing. However, Intel did build the Prescott core with x86_64 capability as the Prescott core was identical to the "Nocona" Xeon, which did have x86_64 mode enabled. Intel eventually enabled x86_64 in the Prescott for the P4 5x1 series and most subsequent P4s, as well as most every chip made since then except the Core Solo and Duo.

Your point? I never discredited AMD64 for being the first instruction set to support both x86 and x64, but it came too early. Please tell me how many computer users, outside of servers, are using x64 programs? How many users actually see an improvement in a 64bit environment?

"x64" is a Microsoft-ism for "x86_64" or "AMD64." I think they would have just called it "64-bit" but they already made a "64-bit" version for the Itanium (IA64). I think it's a dumb "ism" as x64 could easily refer to the Alpha 21x64 CPUs as x86 refers to the 80x86. Also, if x64 is supposed to refer to 64-bit, how many people would then think x86 would then refer to an 86-bit CPU and not a 32-bit one? Should they call the x86 version "x32" to make a usable distinction for most people? Dumb, dumb, dumb.

And as far as who uses x86_64 programs? Millions of people. 4 GB RAM is quickly becoming a limit that people are bumping up against and you need a 64-bit OS to get around it. Just look at the number of "4 GB RAM installed, how do I see more than 3.x GB?" threads out in the Forumz and on other sites and you get the picture. Perhaps a small minority of Windows users run x86_64 and no OS X users do (although some use PPC64), but many users of other OSes can and do easily use the x86_64 version. Even if you don't have 4 GB RAM and need the extra addressing, there is a general performance increase in having the x86_64 version as opposed to the x86 version. Plus, there are some programs that simply are x86_64-only, such as Folding@Home's *nix SMP client.
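For anyone wondering where the missing 0.5-1 GB goes in those threads, here's the back-of-the-envelope version. The MMIO figure is just an assumed example; the exact amount depends on the chipset and how much video memory has to be mapped.

```python
# Why a 32-bit (non-PAE) OS shows only ~3.0-3.5 GB of a 4 GB install:
# devices (video aperture, PCI, firmware) are mapped into the same 4 GB
# physical address space, displacing RAM. The reservation below is an
# assumed example, not a fixed number.
GiB = 1024 ** 3
MiB = 1024 ** 2

addressable = 4 * GiB          # 2**32 bytes of physical address space
installed   = 4 * GiB
mmio        = 768 * MiB        # e.g. 512 MB graphics aperture + other devices

visible = min(installed, addressable - mmio)
print(f"RAM visible to the OS: {visible / GiB:.2f} GiB")
```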

I personally have a laptop and desktop and both run 64-bit OSes. The desktop does need the 64-bit OS as it has 4 GB RAM, but the laptop doesn't have 4 GB RAM but still benefits from the ~5-10% speedup in running 64-bit applications. I'm not going to sneeze at that as it's "free" performance without having to make the CPU run faster and hotter.
 


Hey, how about a new name for the direct connect architecture that links, not 2 nor 4, but 3 cores:

The Odd Couple


 

You're right. I'm sorry for being confused between Presler and Smithfield. I guess I was a little carried away with the cost and yield arguments.

I've been advocating for AMD to have a better top-range product, because that way they get to set the price ladder, as opposed to responding to it. Back in the old days, an X2 3800+ was selling at 300USD, and it's not hard to imagine the margin AMD gained from that. People were shelling out money to purchase X2s, despite the significantly higher cost, while Pentium Ds were forced down the price ladder in order to be competitive.

However, now the roles have changed. AMD is now being pushed to the low / value end, where margins are virtually nonexistent. With a monolithic quad core like Barcelona, the cost is significantly higher than Intel's MCM part, with potentially lower yield and lower performance.

The Core 2 Duo costing less than the Presler Pentium D 900 is a function of marketing, not engineering. Conroe is a single 144 mm^2 die, Presler is an MCM of two Pentium 4 6x1 81 mm^2 dies. Thus, Presler should be easier to get good yields on, just as Kentsfield should be easier to get good yields on than Barcelona as the former is a two-die MCM and the latter is a monolithic die.
I agree. I really wonder if a monolithic die is a better approach than MCM.

AMD is a year later than Intel in releasing quad-core CPUs. Yes, I am sure that AMD would have sold quite a few if they'd had them ready, but it takes more time to prepare a new CPU than it does to bolt two existing ones together. Remember that Barcelona is not simply a quad-core K8 but a whole new architecture. That no doubt takes time to work out as well. Perhaps AMD bit off a bit more than they should have tried to chew in one bite with the new arch and a new core design. Intel at least had a "dry run" with the Yonah Core Duo putting two Pentium M cores in a new monolithic dual-core design before they did the P6/NetBurst -> Core architecture change.
It definitely takes a lot more time to come up with a monolithic CPU than MCM. However, as you said, it takes little to no effort to glue two cores together, and brand them as quad cores. My argument was that AMD should have done something about Intel's quad core. The lack of response from the green team led me to believe that they pretty much put the fate of the company on Barcelona. When Barcelona flopped (as of now, it might get better later), AMD is left with nothing but continuing price cuts. This is not helping them. IMO, AMD's 65nm is not mature enough to tackle a monolithic approach. If AMD really yields quad cores at 30%~40% and can't get their clockspeed up, it's pretty much unacceptable.


Uh, twice as many per wafer: going from 65 nm to 45 nm scales linear dimensions by approximately 1/sqrt(2), giving an area of (1/sqrt(2))^2, which is 1/2. That is why the figures of 90 nm, 65 nm, and 45 nm were chosen - each shrink yields a 50% reduction in the area required for a given number of transistors.
Alright. Thank you for the insight.

Barcelona will be big, but it will apparently yield well enough to be salable. Most importantly, TDPs are reasonable. If you want to talk about a terrible architectural mismatch, it is the Pentium D 800s. Intel never should have released any of them until the Presler came out on 65 nm and had non-laughable clock speeds on TDPs that didn't leave CPU sockets glowing.
I would certainly hope AMD has good yields on Barcelona, but it doesn't seem so at the moment. The reason why Barcelona can maintain its thermal envelope despite being a monolithic die is that AMD has thickened the gates to counter gate leakage. Doing so limits the gate leakage, but it certainly hampers the clockspeed ramp. AMD needs to develop extra stressing techniques to counter this.

As for the Pentium D, I definitely agree with you. In fact, NetBurst should not have lived this long, and Intel should have recognized the inherent material limits when building them. Intel ignored them, and paid the price.

You must be confusing x86_64 and x86. Intel developed the x86 ISA for the 8086 in 1978. AMD at the time was contracted by Intel to second-source their processors. The lawsuit you refer to was when Intel contracted with AMD to produce the 80386 such that IBM would consider the 80386 over the Motorola MC68000 series CPUs. At the time, CPU makers were small enough that no large vendor would buy from only one- they required a second source to ensure enough supply to meet demand. Intel didn't want to give AMD the new 32-bit i386 ISA, so they tried to back out of the contract. AMD sued for breach of contract and won a sum of money as well as the famous x86 cross-licensing agreement.

x86_64 was developed by AMD with no input from Intel. Intel's implementation was reverse-engineered from AMD's specifications due to market pressure. Intel had a stance at the time x86_64 was introduced that nobody needed 64 bits on the desktop and that they should be using 64-bit Itaniums for workloads that needed 64-bit addressing. However, Intel did build the Prescott core with x86_64 capability as the Prescott core was identical to the "Nocona" Xeon, which did have x86_64 mode enabled. Intel eventually enabled x86_64 in the Prescott for the P4 5x1 series and most subsequent P4s, as well as most every chip made since then except the Core Solo and Duo.
Thanks for the information. I'm curious, though: did Intel have a parallel research effort when they knew AMD was working on x86_64, before they dumped the project and sued AMD?

"x64" is a Microsoft-ism for "x86_64" or "AMD64." I think they would have just called it "64-bit" but they already made a "64-bit" version for the Itanium (IA64). I think it's a dumb "ism" as x64 could easily refer to the Alpha 21x64 CPUs as x86 refers to the 80x86. Also, if x64 is supposed to refer to 64-bit, how many people would then think x86 would then refer to an 86-bit CPU and not a 32-bit one? Should they call the x86 version "x32" to make a usable distinction for most people? Dumb, dumb, dumb.
I think you're referring to the naming scheme M$ and Intel developed. Personally I don't think the average Joe will be confused, since he only knows 64bit and 32bit. However, I see your point.

And as far as who uses x86_64 programs? Millions of people. 4 GB RAM is quickly becoming a limit that people are bumping up against and you need a 64-bit OS to get around it. Just look at the number of "4 GB RAM installed, how do I see more than 3.x GB?" threads out in the Forumz and on other sites and you get the picture. Perhaps a small minority of Windows users run x86_64 and no OS X users do (although some use PPC64), but many users of other OSes can and do easily use the x86_64 version. Even if you don't have 4 GB RAM and need the extra addressing, there is a general performance increase in having the x86_64 version as opposed to the x86 version. Plus, there are some programs that simply are x86_64-only, such as Folding@Home's *nix SMP client.
I would respectfully disagree with you on this one. People install 4GB of RAM, but how many of them actually utilize that much? I do realize that it has increasingly become a concern, but for most people, 32bit is sufficient. Only people that run memory-intensive applications, such as video editing, need that much RAM. Even for gamers, I believe 4GB is a little overkill. As for AMD64, I believe it's a little ahead of its time. It was released back in 2003, and we have yet to see a massive x64 user base.

Someone mentioned that in SC he/she will crash to desktop if 2GB of RAM is installed. I encountered the same problem in Vista, but everything ran fine in XP. I'm leaning more towards inefficient coding in Vista as the cause of this problem. For most, 2GB is sufficient.


It really depends on the applications a person uses. Average people don't need 64bit to surf online and type email; I don't think they'll see a speedup by switching to 64bit. But for enthusiasts like us, we might see a gain, and in some cases a substantial one.
 









See, with that heat even scotch tape would not have worked. Here is what they most likely used.
http://www.jcwhitney.com/autoparts/Product/tf-Browse/s-10101/Pr-p_Product.CATENTRY_ID:2000575/p-2000575/N-111+10201+600016947/c-10101
 
Yes, stainless steel tape has proven to be the best option for slapping two duals together in different sockets. Stainless steel tape is also the only option to hold a heatsink hostage on an AMD 6400+ Black Edition processor.

The AMD 6400+ is a Prescott core, right?
 
You're right. I'm sorry for being confused between Presler and Smithfield. I guess I was a little carried away with the cost and yield arguments.

I've been advocating for AMD to have a better top-range product, because that way they get to set the price ladder, as opposed to responding to it. Back in the old days, an X2 3800+ was selling at 300USD, and it's not hard to imagine the margin AMD gained from that. People were shelling out money to purchase X2s, despite the significantly higher cost, while Pentium Ds were forced down the price ladder in order to be competitive.

Even with their lower prices, the Pentium Ds were not better performance-for-the-dollar chips vs. the X2s, with the possible exception of the 805, and that was just by virtue of it being less than 40% as expensive as the slowest X2. I bought my computer during this time period and paid $360 for an X2 4200+, while the Pentium D 930 was a scant $30 less expensive but far slower. The X2 3800+ was about $300, but it competed most closely with the Pentium D 940, and the D 940 was over $400. So I wouldn't say that AMD specifically was overcharging people for the performance their chips gave with the X2 3800+, as *everybody* was charging that much.

The current Pentium D generation was never price/performance competitive with the X2s when it was being sold. The closest the Pentium Ds came to being price/performance competitive was when the old 820 and the newly-crippled 820 called the 805 were brought out at low prices when the 900 series was selling for high prices. These chips were not even competitive with the lowest of the low end of the X2 line but their low price got them a lot of business.

However, now the roles have changed. AMD is now being pushed to the low / value end, where margins are virtually nonexistent. With a monolithic quad core like Barcelona, the cost is significantly higher than Intel's MCM part, with potentially lower yield and lower performance.

I'll agree with the higher cost and lower yield, but the performance bit is up for grabs and yet to be determined on the desktop. The server Barcelonas haven't been benched a whole lot outside of SPEC benches and the results are all over the map. Sometimes they aren't quite clock-for-clock with the Xeons (int) and sometimes they take it all (rate_fp.) We'll get a better idea when the parts ship.


I agree. I really wonder if a monolithic die is a better approach than MCM.

There are advantages and disadvantages. The monolithic core allows for easier, faster core-to-core communication than an MCM does, especially as the number of dies goes up. A monolithic core also makes a typical integrated memory controller easier to implement than if there are two dies. Lastly, a monolithic core allows for finer-grained power control. MCMs' advantages are pretty much limited to the fact that they are simply two existing dies placed on one substrate. They are less expensive due to higher yields and can be rolled out quicker and easier than drawing up a new die mask. So basically a monolithic multi-core chip is technically superior, but the MCM one has the cost advantage.

It definitely takes a lot more time to come up with a monolithic CPU than MCM. However, as you said, it takes little to no effort to glue two cores together, and brand them as quad cores. My argument was that AMD should have done something about Intel's quad core.

I agree. AMD had technical reasons why they could not reasonably do this. They had a few approaches available to them to make an MCM:

1. Put two Opteron 22xx dies in a package and slave Die 1 off Die 0's IMC via HT 2.x.
2. Put two Opteron 22xx dies in a package and wire one RAM channel to each chip's IMC and then use HT 2.x for inter-die communication.
3. Redesign the Opteron 22xx die to use HT 3.0 and slave Die 1 off Die 0's IMC.
4. Redesign the Opteron 22xx die to use HT 3.0 and wire one RAM channel to each die's IMC and use HT 3.0 for inter-die communication.
5. Make a new socket to handle four DDR2 RAM channels and put two Opteron 22xx dies in it, wiring a full two channels of RAM to each IMC and using HT 2.x for inter-die communication.

Option 1 would be absolutely terrible as cores 2 and 3 on die 1 would be terribly starved for RAM bandwidth. Gigabyte had a dual-socket Opteron setup like this in the past and it was awful.

Option 2 would work okay, but now you are using NUMA, which Windows doesn't like much (look at the QuadFX vs. X2 6000+ benches.) Much of why Windows hates it is because remote RAM accesses over HT 2.x are not all that stellar. It would also pretty much require memory interleaving, so installing RAM in pairs is mandatory.

Option 3 would be better than Option 1 as HT 3.0 is much faster than HT 2.x and you are using a UMA setup like a monolithic quad or an Intel FSB setup uses. I don't know how it would compare to the other options, though. However, this would require a redesign of the core, something that AMD wants to do while also putting a new architecture in place. That takes extra time.

Option 4 is pretty much what's planned for AMD's future 8-core MCMs. It will need to use NUMA, so Windows performance won't be stellar, but HT 3.0 should mitigate that some. Not a very good current option as it requires a core redesign. It should also pretty much require memory bank interleaving.

Option 5 would be basically a full dual-Opteron-in-a-single-socket setup. Bandwidth ought to be good, but four RAM channels is 960 data pins just for RAM, so you're looking at 1500 pins or so for the socket. This isn't a good option as it requires a new socket (I can hear the groans) and uses NUMA. Also, you'd want to install RAM in sets of four, which can be a pain in the butt.
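To make the bandwidth argument concrete, here's a rough sketch of what those layouts give you per core. The numbers are assumptions for illustration only: dual-channel DDR2-800 treated as ~12.8 GB/s per IMC (6.4 GB/s per channel) and an HT 2.x link as ~4 GB/s per direction.

```python
# Back-of-the-envelope DRAM bandwidth per core for the hypothetical
# K8 MCM options above. All figures are illustrative assumptions.
CHANNEL = 6.4   # GB/s per DDR2-800 channel (assumed)
HT2_DIR = 4.0   # GB/s per direction over an HT 2.x link (assumed)
CORES = 4

# name -> (total DRAM channels in package, bandwidth ceiling for die 1's pair of cores)
options = {
    "Option 1: all RAM behind die 0's dual-channel IMC": (2, min(2 * CHANNEL, HT2_DIR)),
    "Option 2: one channel per die, NUMA over HT 2.x":   (2, CHANNEL),
    "Option 5: two channels per die, NUMA, new socket":  (4, 2 * CHANNEL),
}

for name, (channels, die1_cap) in options.items():
    avg_per_core = channels * CHANNEL / CORES
    print(f"{name}: ~{avg_per_core:.1f} GB/s average per core, "
          f"die 1's cores limited to ~{die1_cap:.1f} GB/s of local/forwarded bandwidth")
```

Which is basically why Option 1 starves cores 2 and 3: everything they touch has to squeeze through the HT link.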

The lack of response from the green team led me to believe that they pretty much put the fate of the company on Barcelona.

I don't see AMD's fate resting any more or less on 10h than Intel's is resting on Core. AMD sells chipsets for Intel processors as well as graphics cards for a bunch of different makers' units. So if the 10h tanks, yeah, AMD will take it on the chin, but they will still get revenue from somewhere. Intel pretty much just sells Core-based CPUs, chipsets, and motherboards that only work with Core-based CPUs. They make a few NICs and disk controllers that will work on other platforms, as well as a few external storage devices. AMD has far less of their revenue depending on the success of their CPUs than Intel does.

When Barcelona flopped (as of now, it might get better later), AMD is left with nothing but continuing price cuts. This is not helping them. IMO, AMD's 65nm is not mature enough to tackle a monolithic approach. If AMD really yields quad cores at 30%~40% and can't get their clockspeed up, it's pretty much unacceptable.

We haven't seen much of Barcelona, so we can't say if it's a flop, how up to the task the 65 nm architecture is, or what their yields are. Time will tell that one.

Thanks for the information. I'm curious, though: did Intel have a parallel research effort when they knew AMD was working on x86_64, before they dumped the project and sued AMD?

I couldn't really find anything about that. AFAIK the only 64-bit architecture Intel was working on before AMD's x86_64 came out was the Itanium's IA64.

I would respectfully disagree with you on this one. People install 4GB of RAM, but how many of them actually utilize that much? I do realize that it has increasingly become a concern, but for most people, 32bit is sufficient.

Most people don't do much that's intensive, and a five-year-old machine, properly cleaned up, will do just fine. I know people who just upgraded from Pentium MMX 200s and PII-350s, largely because they had hardware break on the old machines and replacements are hard to get today. It does not take a QX8650 and 8 GB RAM just to type a simple document and use a Web browser. We had graphical word processors and Web browsers pretty darn near identical to what's new today running a dozen years ago on machines with 32 MB RAM and 486s and Pentiums. There is no reason that we should need processors that are 100 times faster and 100 times more RAM to do essentially the same tasks. <old-school joke>What, did the Emacs guys write all of the software out there today or something?</old-school joke>

Only people that run memory-intensive applications, such as video editing, need that much RAM. Even for gamers, I believe 4GB is a little overkill. As for AMD64, I believe it's a little ahead of its time. It was released back in 2003, and we have yet to see a massive x64 user base.

Eh, some games are getting close to exceeding the 2 GB/process limit that 32-bit OSes commonly have. You'll need a 64-bit OS to get around that and have things be stable- yes, I know you can adjust the kernel split to 3 GB/1 GB, but that sometimes isn't stable.
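For reference, the address-space arithmetic behind that, a sketch of the well-known Windows numbers rather than anything exotic:

```python
# Per-process virtual address space for a 32-bit Windows app:
#  - default: 4 GiB of virtual space split 2 GiB user / 2 GiB kernel
#  - /3GB boot switch + a large-address-aware (LAA) executable: 3 GiB user
#  - the same LAA 32-bit app under a 64-bit OS (WOW64): 4 GiB user
GiB = 1024 ** 3

splits = {
    "default 32-bit OS":                    2 * GiB,
    "32-bit OS with /3GB + LAA executable": 3 * GiB,
    "LAA 32-bit app on a 64-bit OS":        4 * GiB,
}

for label, user_space in splits.items():
    print(f"{label}: {user_space / GiB:.0f} GiB of user address space")
```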

Someone mentioned that in SC he/she will crash to desktop if 2GB of RAM is installed. I encountered the same problem in Vista, but everything ran fine in XP. I'm leaning more towards inefficient coding in Vista as the cause of this problem. For most, 2GB is sufficient.

2 GB is sufficient for general usage. If you have a new game that needs 2 GB of its own RAM, you'll want at least 3 GB or more. Every day more people are hitting the 32-bit memory limit and need to get around it, which means a 64-bit OS. You're correct in stating that most haven't hit it yet, but a combination of ever-more-detailed games, HD video, higher-resolution photographs, and most of all ever-increasing software bloat (look at Vista) pushes people over it every day at an increasing rate. I'll bet most people will be on a 64-bit OS by the time the next Windows ships.
 

I agree. I guess this is one of the ways for Intel to make up the revenue lost to K8, as Intel priced them a lot higher than they should have been. Just take a look at how Core 2 is priced now, and yet Intel is still making loads of money.

I'll agree with the higher cost and lower yield, but the performance bit is up for grabs and yet to be determined on the desktop. The server Barcelonas haven't been benched a whole lot outside of SPEC benches and the results are all over the map. Sometimes they aren't quite clock-for-clock with the Xeons (int) and sometimes they take it all (rate_fp.) We'll get a better idea when the parts ship.

INT has always favored Intel, while FP favors AMD. I guess it has something to do with AMD's architectural design, which was based off RISC. Intel, on the other hand, although it has a RISC-like internal structure, is still mainly CISC.

As for rate_fp, the rate tests are a system test, as opposed to the non-rate tests. In the non-rate tests, the benchmark program generates one thread of code to be run by one processor. It's a good indication of single-threaded performance.

In the rate benchmarks though, the benchmark program generates multiple copies of the code to be run by multiple cores at one time. So SPEC_rate is a more "real" benchmark for server applications. But for desktops, most programs are still single-threaded and don't really take advantage of multi-core.
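A toy way to see the difference between the two styles of measurement (this is not the SPEC harness, just an illustration of "one copy timed" versus "one copy per core, throughput measured"):

```python
# "Speed"-style: time one copy of a workload.
# "Rate"-style: run one copy per core and measure aggregate throughput.
import time
from multiprocessing import Pool, cpu_count

def workload(_):
    # Small CPU-bound kernel standing in for a benchmark component.
    return sum(i * i for i in range(2_000_000))

if __name__ == "__main__":
    n = cpu_count()

    t0 = time.perf_counter()
    workload(0)
    speed_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool(n) as pool:
        pool.map(workload, range(n))
    rate_time = time.perf_counter() - t0

    print(f"speed-style: 1 copy in {speed_time:.2f} s")
    print(f"rate-style:  {n} copies in {rate_time:.2f} s (~{n / rate_time:.2f} copies/s)")
```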

Barcelona scores higher than Xeon in SPEC_rate benchmarks, but scores lower in non-rate benchmarks. I suspect this is due to Barcelona having very good scaling. But in core-for-core, clock-for-clock performance, it looks like Core 2 still has the upper hand in IPC.

There are advantages and disadvantages. The monolithic core allows for easier, faster core-to-core communication than an MCM does, especially as the number of dies goes up. A monolithic core also makes a typical integrated memory controller easier to implement than if there are two dies. Lastly, a monolithic core allows for finer-grained power control. MCMs' advantages are pretty much limited to the fact that they are simply two existing dies placed on one substrate. They are less expensive due to higher yields and can be rolled out quicker and easier than drawing up a new die mask. So basically a monolithic multi-core chip is technically superior, but the MCM one has the cost advantage.
True, having a monolithic core does vastly improve performance over the MCM approach. However, from a financial and manufacturing point of view, monolithic cores also suffer from potentially lower yields and cost a lot more.

I'm not sure about AMD's 65nm, but I suspect it has yet to reach maturity. Using an immature process to yield very complex circuitry doesn't seem like a good idea to me. In Intel's case, they're yielding a relatively simple die on a more mature process, so they're able to maintain good yields while costing less.

I agree. AMD had technical reasons why they could not reasonably do this. They had a few approaches available to them to make an MCM:

1. Put two Opteron 22xx dies in a package and slave Die 1 off Die 0's IMC via HT 2.x.
2. Put two Opteron 22xx dies in a package and wire one RAM channel to each chip's IMC and then use HT 2.x for inter-die communication.
3. Redesign the Opteron 22xx die to use HT 3.0 and slave Die 1 off Die 0's IMC.
4. Redesign the Opteron 22xx die to use HT 3.0 and wire one RAM channel to each die's IMC and use HT 3.0 for inter-die communication.
5. Make a new socket to handle four DDR2 RAM channels and put two Opteron 22xx dies in it, wiring a full two channels of RAM to each IMC and using HT 2.x for inter-die communication.

Option 1 would be absolutely terrible as cores 2 and 3 on die 1 would be terribly starved for RAM bandwidth. Gigabyte had a dual-socket Opteron setup like this in the past and it was awful.

Option 2 would work okay, but now you are using NUMA, which Windows doesn't like much (look at the QuadFX vs. X2 6000+ benches.) Much of why Windows hates it is because remote RAM accesses over HT 2.x are not all that stellar. It would also pretty much require memory interleaving, so installing RAM in pairs is mandatory.

Option 3 would be better than Option 1 as HT 3.0 is much faster than HT 2.x and you are using a UMA setup like a monolithic quad or an Intel FSB setup uses. I don't know how it would compare to the other options, though. However, this would require a redesign of the core, something that AMD wants to do while also putting a new architecture in place. That takes extra time.

Option 4 is pretty much what's planned for AMD's future 8-core MCMs. It will need to use NUMA, so Windows performance won't be stellar, but HT 3.0 should mitigate that some. Not a very good current option as it requires a core redesign. It should also pretty much require memory bank interleaving.

Option 5 would be basically a full dual-Opteron-in-a-single-socket setup. Bandwidth ought to be good, but four RAM channels is 960 data pins just for RAM, so you're looking at 1500 pins or so for the socket. This isn't a good option as it requires a new socket (I can hear the groans) and uses NUMA. Also, you'd want to install RAM in sets of four, which can be a pain in the butt.
I guess that, due to the IMC, the MCM approach is very difficult for them. I agree that Option 4 might look more promising than the other options, but it is still very hard. It is also not possible to implement two HT links for one socket like Xeon, because you need an extra chipset to sync them, if it's possible in the first place.

I don't see AMD's fate resting any more or less on 10h than Intel's is resting on Core. AMD sells chipsets for Intel processors as well as graphics cards for a bunch of different makers' units. So if the 10h tanks, yeah, AMD will take it on the chin, but they will still get revenue from somewhere. Intel pretty much just sells Core-based CPUs, chipsets, and motherboards that only work with Core-based CPUs. They make a few NICs and disk controllers that will work on other platforms, as well as a few external storage devices. AMD has far less of their revenue depending on the success of their CPUs than Intel does.
I agree, but at the moment it looks like AMD is putting a significant amount of resources into trying to get Barcelona out and performing. It looks like they're also developing Fusion, but it won't be out for a while. With the rate they're bleeding cash, the success of Barcelona is vital to them. If Barcelona doesn't perform, I'm afraid that AMD won't survive until the next big thing... which is... RV670?

As for motherboards, I'm not sure if they are actually recovering lost revenue from there. Most boards sell at relatively cheap prices, and I'm not sure if demand is good. I'll snoop around for it.


We haven't seen much of Barcelona, so we can't say if it's a flop, how up to the task the 65 nm architecture is, or what their yields are. Time will tell that one.
I consider it a flop at the moment due to low availability, lack of clockspeed, and relatively lower performance. Of course, hopefully as time goes on, Barcelona can become more successful.

I couldn't really find anything about that. AFAIK the only 64-bit architecture Intel was working on before AMD's x86_64 came out was the Itanium's IA64.
I think Intel also worked on their own x86_64 in parallel with AMD's AMD64. However, when they saw that their own x86_64 had failed and AMD64 was almost due, they sued AMD for the code to reverse engineer it.

Most people don't do much that's intensive, and a five-year-old machine, properly cleaned up, will do just fine. I know people who just upgraded from Pentium MMX 200s and PII-350s, largely because they had hardware break on the old machines and replacements are hard to get today. It does not take a QX8650 and 8 GB RAM just to type a simple document and use a Web browser. We had graphical word processors and Web browsers pretty darn near identical to what's new today running a dozen years ago on machines with 32 MB RAM and 486s and Pentiums. There is no reason that we should need processors that are 100 times faster and 100 times more RAM to do essentially the same tasks. <old-school joke>What, did the Emacs guys write all of the software out there today or something?</old-school joke>
I agree. Therefore, transferring to 64bit isn't likely to benefit them much.

I still remember when our beloved Gates said we would only need 640K of memory. And about 10 years later, 1GB of RAM is considered low... :kaola:

Eh, some games are getting close to exceeding the 2 GB/process limit that 32-bit OSes commonly have. You'll need a 64-bit OS to get around that and have things be stable- yes, I know you can adjust the kernel split to 3 GB/1 GB, but that sometimes isn't stable.
So far I haven't seen games that need more than 2GB of RAM. Most gamers run on 2GB of RAM alone, and the games perform smoothly. I guess maybe 64bit is needed to play more realistic games, but I don't think it is desperately needed now.

2 GB is sufficient for general usage. If you have a new game that needs 2 GB of its own RAM, you'll want at least 3 GB or more. Every day more people are hitting the 32-bit memory limit and need to get around it, which means a 64-bit OS. You're correct in stating that most haven't hit it yet, but a combination of ever-more-detailed games, HD video, higher-resolution photographs, and most of all ever-increasing software bloat (look at Vista) pushes people over it every day at an increasing rate. I'll bet most people will be on a 64-bit OS by the time the next Windows ships.
If Microsoft comes out with a better OS than Vista, that is. :kaola:
I agree: as more and more people reach the RAM limit, a 64bit OS is needed. However, AMD64 was released back in 2003, and it's still ahead of its time. Four years later, most people are still using 32bit. If almost everyone transfers to 64bit in 2009, AMD64 was early by 6 years.