Fuddo: AMD 45nm K10.5 scheduled to launch in 1H 2009



Indeed.


We must remember AMD designed Barcelona for the server/workstation market. And there, clock for clock, it does rule (they need higher clocks to compete at the high end, of course).


Barcelona is great - it just doesn't port well to the desktop (unlike K8)
 


Hit the nail on the head. Right now, yes, Barcy does whomp Intel's server chips, but that's mainly because servers lean on memory bandwidth more. Once the Nehalem server chips hit, that will change; it will come down to which chip has the better IPC.

But I think Intel will have a bit of an advantage due to DDR3's higher bandwidth and speeds, considering AM3, which should add DDR3 support, isn't slated until 2009.
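Quick back-of-the-envelope on the DDR3 point (the module speeds here are just my example picks, not roadmap numbers): peak theoretical bandwidth is transfer rate x bus width x channels.

# Peak theoretical DRAM bandwidth in GB/s.
# Module speeds below are illustrative picks, not roadmap figures.
def peak_bw_gb_s(mt_per_s, bus_bytes=8, channels=2):
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

print(peak_bw_gb_s(800))   # dual-channel DDR2-800  -> 12.8 GB/s
print(peak_bw_gb_s(1333))  # dual-channel DDR3-1333 -> ~21.3 GB/s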
 


The problem in the desktop world is the lack of innovation in memory technology. Intel gets around this with large shared L2 caches, meaning the processor has to go to main memory far less often. I think the best thing about Nehalem is going to be the modular design.

All the switch to DDR3 will do is give more bandwidth to both parties in the server market. The desktop market needs decent speeds with much lower latencies.
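To put the cache point in numbers, the textbook average-memory-access-time formula shows why a lower miss rate (a bigger shared L2) can beat raw bandwidth on the desktop. All figures below are illustrative guesses, not measurements from either chip:

# AMAT = hit_time + miss_rate * miss_penalty; all numbers illustrative.
def amat(hit_ns, miss_rate, penalty_ns):
    return hit_ns + miss_rate * penalty_ns

# Same DRAM latency, but the bigger cache cuts trips to main memory.
print(amat(hit_ns=3.0, miss_rate=0.05, penalty_ns=60.0))  # 6.0 ns
print(amat(hit_ns=3.0, miss_rate=0.02, penalty_ns=60.0))  # 4.2 ns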

@Reynod AMD's lithography process is actually well ahead of Intel's, which is what allowed them to do a native quad at 65nm; Intel couldn't get it to work and considered it impossible at that size.
 


No way.

Ask anybody in the industry who has the superior litho process and you will get Intel. Across the board. Hands down. They skipped monolithic for *economic* and *profit* reasons, not because they couldn't.

See:
Tukwila -- have you seen the size of that beastie? 22x32mm? You can't get any more than 50 of *those* on a wafer. Over twice the size of Barcelona.

Take a look at the cell sizes and transistor densities for AMD and Intel at the 65nm node. Intel wins. What about defect densities? Again, Intel is widely suspected to win (nobody publishes numbers for that). Based on both of these "facts", it would actually be *easier* for Intel to do monolithic than AMD. Why don't they?
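Rough numbers for the Tukwila point, using the standard dies-per-wafer approximation plus a Poisson yield model (300mm wafer assumed; the defect density is a made-up illustrative value, since as noted nobody publishes those):

import math

# Gross dies per wafer: wafer area / die area, minus an edge-loss term.
def gross_dies(wafer_d_mm, die_w_mm, die_h_mm):
    area = die_w_mm * die_h_mm
    return (math.pi * (wafer_d_mm / 2) ** 2 / area
            - math.pi * wafer_d_mm / math.sqrt(2 * area))

# Poisson yield: Y = exp(-defect_density * die_area).
def poisson_yield(defects_per_cm2, die_w_mm, die_h_mm):
    return math.exp(-defects_per_cm2 * die_w_mm * die_h_mm / 100.0)

dpw = gross_dies(300, 22, 32)      # ~75 gross dies for a 22x32mm die
y = poisson_yield(0.05, 22, 32)    # ~0.70 at an assumed 0.05 def/cm^2
print(round(dpw), round(dpw * y))  # 75 gross, ~53 good -- near that 50 mark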

Intel also uses dry lithography at 45nm -- pushing immersion off to the 32nm node is not only a cost savings, it's a testament to the quality of the engineering that goes into the lithography. While it's not a direct 65nm comparison, it's the same people doing the same work.



The other reason they didn't move to a monolithic quad -- even if their process was capable, the design wasn't. Remember, Intel was rather late in the game to the multi-core stuff (remember that Pentium D?)... it takes *time* to gear up a new design. Then you have to figure out where to spend design resources...
 
Correct me if I am wrong, but I don't believe the Barcelona/K10 Opty is even in the channel. I don't believe it has been since the stop-ship in December.





 


Post something constructive, or don't post.


So does Barcelona not exist anymore?




"Not a fanboy" my arse :sarcastic:
 


AMD can't get 65nm working at the speeds of 90nm, while Intel appears to have mastered 45nm and is pursuing 32nm. I dunno... maybe they'll pull ahead of Intel in the future, but it doesn't look good today.
 



There is a lot more to the processes than just lithography though.
 
There is a lot more to the processes than just lithography though.

Agreed. Intel has the more mature process right now, and in terms of bringing successful products to market, that is what matters. AMD may be ahead in conceptual design but is definitely lagging far behind in design realization.
 


Yep.

Look at the Nehalem arch.


Very similar to K10.5, isn't it? Even Intel admitted K10 was right, just too hard to do on 65nm.
 


Timna had an integrated RDRAM controller. Indeed, RDRAM prices were far too high to be used in the low end systems for which Timna was designed. Intel decided to use a bridge chip that would allow Timna to use SDRAM despite the RDRAM controller. As I understand it, some problems arose with this setup and the chip was eventually canceled. There was a *very limited* amount of working silicon produced, so Timna did exist as a physical product in the lab - it was just never sold to the public.
 


This is exactly what I have been saying. The IMC was a direct link to RDRAM, and when they decided to go to SDRAM for cost reasons they needed the MHT (memory hub translator) for the IMC to work with SDRAM. But my guess is that it caused more slowdown than having the CPU link directly to the northbridge and then through the FSB to the SDRAM.

But either way, Timna had the MC on the CPU along with the GPU. It just never flew.
 



Ok....

Link to a place where you can purchase a Barcelona. Keep in mind, a Barcelona is a K10 Opteron, not a Phenom.

Barcelona = K10 Opteron
Agena = K10 Phenom
 


Timna is bupkis. I doubt that Intel engineers went back to designs from the late '90s. I'd rather think they started from scratch. My argument stands about designs not implemented, or not implemented well. Just as the Phenom doesn't live up to its design in performance, Timna never reached the market. Stop using it to prove that Intel came up with the IMC. AMD came up with a successful, workable IMC. That's what counts.

You can't say it's not needed on the desktop for a couple of years and then say that Nehalem is the time to do it. Nehalem is not a couple of years away. I do think that Intel responds to AMD "threats". Phenom is not a threat, but the X2's were. Intel's no longer complacent because of AMD's brief 2-3 year success story. Competition matters.

You have to be careful not to fall into the trap of fanboyism. Intel is not in some mystical tower deciding when the time is right technologically to implement things for the consumer market. They respond to market forces and challenges from competitors. It's a business.

Me, I'll buy Phenom 9850 and actually stop kvetching about overclocking. I'll give it a try. Unless of course I just decide to get an 8750 on a 780G board and wait for Deneb. Still, I'm not a fanboy saying that Phenom B2 was the best and that Intel will bleed cash or whatever mishegoss Thunderman was spouting. I'll just say that it meets my needs and I like the underdog.

I used to work as a librarian in a law firm where we had mostly business clients; now I work in a data center, but my skills are only A+ level, so I have a stable but not highly exciting job. So, I know the value of the IMC on the server side. I've noted its value on the desktop since I switched from P4 Northwoods to Athlon X2's. I expect to see its value increase once everyone's favorite behemoth switches to an IMC. Just don't say it began with Timna unless you have proof that Intel actually built upon it this time around. Ideas not completed and marketed are just fog that clears with the midday sun.

I'm still not sure I follow you in dissing a hybrid of SOI with HK/MG. The old articles on SOI I read discussed its value in reducing heat and allowing for better performance. One thing that Intel might have done right, if the process had been different, was the Prescott pipeline. If a Phenom's pipeline is 12 stages, like the X2's, then increasing it to, say, 21 might allow for 45nm Deneb clock boosts. If the fundamental design is poor compared to Penryn and Nehalem, then it won't help AMD enough, except in the OEM and budget markets, but a hybrid tech strikes me as a way to successfully transition between fabrication technologies.
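On the pipeline-stages point, the usual first-order model is: cycle time ~ logic delay / stages + latch overhead, so deeper pipelines clock higher but flush more stages on a branch miss. Numbers here are purely illustrative, not real Phenom or Prescott figures:

# First-order pipeline model: f_max = 1 / (T_logic/N + T_latch).
# All delays are illustrative, not measured on any real core.
def f_max_ghz(t_logic_ns, stages, t_latch_ns=0.05):
    return 1.0 / (t_logic_ns / stages + t_latch_ns)

for n in (12, 21):
    print(n, "stages:", round(f_max_ghz(4.0, n), 2), "GHz")
# 12 stages -> ~2.6 GHz; 21 stages -> ~4.2 GHz, before the mispredict tax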

At any rate, I'd put IBM's researchers up against Intel's any day. So, I have confidence that the first Denebs will be innovative. Whether they're innovative enough in the market remains to be seen. AMD seems to be doing okay with OEMs regarding 65nm B2s and upcoming B3 orders, so I can't see Deneb failing, except when dissed by enthusiasts at boards like Tom's.

Yes, enthusiast concerns count more here, but enthusiasts who expect a dirge for AMD to be sung forget that they aren't much of a market. Nvidia needs to realize that too regarding dinosaur monolithic $599 GPUs. The money's to be made in chipsets, OEM PCs, and notebooks (which are all OEM; people don't build their own notebooks). Both AMD and Intel are positioned for the notebook market over the next couple of years. Nvidia is not. So, that's the company I'd sing a dirge over, not AMD.
 


Um, we can't tell what Intel is doing with Nehalem. Where do you think Core 2 came from? It traces its roots all the way back to the Pentium Pro in some of its architecture.

My point is that the IMC does not help us end users. I admit it does on the server side, but my point is that the FSB is just as good as the IMC.

I do feel that Intel should move to the IMC just so there will be an even playing field and it will finally come down to which CPU is all-out better and provides more IPC. I just get sick of people who sit there and brag about the IMC not knowing that Timna was actually produced and released to a few, but was scrapped due to problems with the memory controller and the MHT (memory hub translator) in the 820 chipset. This was due to Intel switching to SDRAM from RDRAM.

Either way, Intel is moving, since in a few more years the FSB will become too slow. I think Nehalem would probably come close to saturating the FSB.
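Quick sanity check on that saturation claim (bus and workload figures are my assumptions, not anything Intel has published):

# One shared 64-bit FSB at, say, 1600 MT/s serves every core on the bus;
# an IMC gives each socket its own DRAM channels instead. Assumed figures.
fsb_gb_s = 1600e6 * 8 / 1e9               # 12.8 GB/s total, shared
cores, per_core_demand = 4, 4.0           # GB/s per core, illustrative
print(fsb_gb_s, cores * per_core_demand)  # 12.8 vs 16.0 -> bus saturates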
 


Not everyone knows the history of CPU design. There are areas I'm weak on, and I've been building PCs since the 386SX days (wanted a new CPU, the motherboard came with :lol: ). Innovation is good. What I don't get is AMD fanboys spouting the IMC in particular. They're right that AMD had the best designs that actually performed a few years ago, but now Intel has the best designs.

The fact that Intel went back to the Pentium Pro shows they were off track. In fact, they were much further off track than AMD is today. I still think that not everyone has to know about Timna (I'd read about it but forgotten the details until you brought them up) to be proud of AMD's IMC innovations. What both sets of fans need to know is that AMD and Intel have technology-sharing agreements related to the x86 license, and no one brings a product to market based only on work done in house. That's why Intel is in a pickle with a major state university (Wisconsin? I forget) about aspects of the C2D design derived from the university's patents.

The days of lone inventors tinkering in their sheds changing the world is long gone. Everyone leans on everyone else. If Timna had not had memory issues, and had been brought to market, then I'd respect your response to fanboys regarding it more. The Pentium Pro was a success and so was the Athlon. Intel and AMD innovated based upon those designs, but leave Timna out of it because I don't think Intel engineers had to go back to their archive file cabinets for the schematics of such an old design before starting work on their new IMC. Too much research happened worldwide in the interim.

Timna is only for "we thought of it first" bragging rights, which don't mean much at all.

I will admit that if I could have handled calculus 30 or so years ago, I would have gone into engineering. My retired father-in-law had a great career at Lockheed, and he won't even talk about his classified work, even though I joked with him that projects he worked on in the Sixties would either not be classified now or would be common knowledge anyway (and it is, on the internet).

I wasn't curious about anything past 1980, as I figured that stuff was still classified. When I was a kid, I wasn't into building model racing cars, but spacecraft, combat aircraft, SF models like the Jupiter 2 from Lost in Space, and also monsters from old Universal to the early Japanese Godzilla era. So, I was fishing for info about Lockheed experimental aircraft during that period (like the A-12 Titanium Goose). To this day, he won't even confirm which planes he was involved with. That's old-school loyalty.

The funniest conversation we had was when I joked that he couldn't talk because of that Lockheed UFO case. He actually said he'd never heard about it, though he once had an interest in UFOs because Star Trek got him thinking about real aliens.

Here's that case; take it with a whole shaker of salt like I do, but it is interesting:

http://www.nicap.org/lockufoinc.htm

Anyway, though I love aviation history, the engineering I would have gone into would have been computer-related. I was hooked on computers ever since I took my first BASIC class, but mainly because I imagined future virtual worlds while playing Advent back in 1978. That's why I lean towards a sort of Judeo-Christian Simulation argument (read Nick Bostrom for a more secular Simulation argument). If this world isn't a simulation, then it will spawn worlds that are simulations, whose inhabitants will be self-aware people. If intelligence thrives for billions of years, God willing, then who knows what worlds our descendants will create?
 


I was at the conference where they made this announcement. I applaud AMD/IBM for their achievements on EUV. This is very much still in the research phase and has (little) bearing on the current state of the art. Also, to note:

1) This was some "typhoon" product. Not Barcelona.
2) This was a Metal 1 mask only. Not a full product.
3) There was no mention of any yield data.

It's a photomask, that's all. The general consensus of the people I talked to (ASML engineers -- the company that made the tool -- among them) was lukewarm at best. I was more impressed than they were. EUV is still 4-5 years out.
 


You are so wrong about the Timna processor. It was based on the Pentium III in Socket 370. It had the same integrated graphics as the 815 but ran at the processor core speed. It had an RDRAM controller on it, and it worked very well. My friend at Intel still has one, and the board it runs in, working. The reason Intel did not come out with the processor and motherboard was the cost of RDRAM at the time. None of the major OEM vendors could see a need for it then. About a year later, they were clamoring for low-cost motherboards that supported Celerons.

It was technology that came out too soon for the marketplace, not a case of Intel being unable to get the IMC to work right.
 



Sorry Jimmy, you have it wrong about the 820 chipset and the MTH (Memory Translator Hub). The 820 chipset had an RDRAM controller built into the MCH. To try to get the 820 into circulation using SDRAM memory, they went to the MTH. The MTH was a pile of crap. The story I heard is that they had new hires do the design, and they left out a lot of the bulk capacitance used between metal layers to improve signal quality. After lots of scrambling, they gave it up as a lost cause and went ahead with the 850, which had two RDRAM controllers, as their high-end chipset. This happened about the time the 845 was coming out.
 
Just FYI: Nehalem will have an IMC, so there will be no more FSB.
http://www.intel.com/pressroom/archive/releases/20070328fact.htm
* Design scalable for optimal price/performance/energy efficiency in each market segment

* New system architecture for next-generation Intel processors and platforms
* Scalable performance: 1 to 16+ threads, 1 to 8+ cores, scalable cache sizes
* Scalable and configurable system interconnects and integrated memory controllers
* High performance integrated graphics engine for client
 

Just the same, litho is probably the only part of the process where AMD is clearly ahead of Intel, or at least in one respect:
Intel is using double exposure on the masks, while AMD is using immersion lithography. Intel will have to go to immersion at 32nm. All the same, though, I find it amazing that Intel is able to pull it off.
Intel has had the best process since day 2. They were still part of Fairchild when Si gate tech was invented.
They were the first to make commercial use of it, and have never looked back.
We all hate the P4s, but the one area where they truly helped was in process. The P4s created a need for ultra-fast transistors, and Intel's process made it happen. I doubt Core 2s would scale as well as they do without that.