AMD "Zembezi


illfindu

Hey, I'm looking toward the future and I'm eyeing the AMD 8-core Bulldozer CPUs coming down the line. I've seen some sources say they're going to use an AM3+ socket, and I'm wondering if that means you'll be able to toss one into a current AM3+ compatible board?
Will my current http://www.newegg.com/Product/Product.aspx?Item=N82E16813130297&nm_mc=OTC-Froogle&cm_mmc=OTC-Froogle-_-Motherboards+-+AMD-_-MSI-_-13130297 MSI 870A Fuzion work? I noticed just now that it's an AM3 board, not AM3+. I'm guessing that means I'll need a new motherboard since the sockets aren't the same?
 
Solution


That "market" is no different from any other market. Not all software makes the best use of 24 cores - some of the tests in that link didn't even make use of 12 cores. How is a lower clocked 24 core server supposed to perform against a higher clocked 12 core server when the workloads are only optimised for 12 cores?



The result of this scaling is that for once, you can notice which CPUs have real cores vs. ones that have virtual (Hyper...
@jimmysmitty - you seem to understand this architecture well. In your opinion, should I grab that 980X deal? It will cost me roughly $400 - $500.

Right now, I'm not sure if BD is worth waiting for, or whether it will outperform the 980X.
 


The problem is that that info will probably be the same as what we have right now: info on the arch, possibly some specs as to stock clock speed and turbo capabilities, maybe some synthetic numbers, but nothing for the real world.

It will either answer a few questions and then confuse us with even more, or spur more "preferers" BS and hype.



In my opinion, grabbing a 980X for $400-500 is a pretty good deal. The LGA1366 platform is still a viable system, even though I think LGA1155 is better for overall CPU performance.

Comparing the two though, the LGA1366 platform has the advantage of two true PCIe 2.0 x16 lanes, the ability to do SLI or CFX no matter which mobo you choose, triple-channel DDR3 up to 1600, the ability to do quad CFX or tri-SLI, and QPI, which is extremely fast compared to DMI.

LGA1155, meanwhile, has only two PCIe lanes at x8, two channels of DDR3 up to 2133, and the ability to do SLI/CFX only if you choose the right mobo (some don't have the SLI chip, most decent Asus/Gigabyte ones do). But CPU-wise, LGA1155 will allow for an upgrade to Ivy Bridge, and Intel's 22nm already looks very promising in terms of clock speed.

Then there is LGA2011, which will be LGA1366's successor (although I still hear in the pipeline that LGA1356 is coming). It will feature quad-channel DDR3 up to 2133, 40 PCIe 3.0 lanes (meaning two x16 lanes and one x4 lane, allowing full dual CFX/SLI, or quad CFX/tri-SLI at x8 each), two QPI links with over 51GB/s of bandwidth (2x that of LGA1366), and it will support Ivy Bridge as well.
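(Rough math on that QPI figure, assuming the link rate stays at the current 6.4 GT/s: one QPI link moves about 25.6 GB/s total across both directions, so two links works out to roughly 51.2 GB/s, which is where the 2x-LGA1366 number comes from.)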

As for BD, I have a hard time saying. The architecture looks very interesting. But the problem I see is that under certain loads (such as high FP loads) it will probably lag behind even a quad-core LGA1366 CPU, since it has to share the FMAC units. But in cases where it won't have to, it will perform very well and, depending on the application, probably be able to keep up with or beat Nehalem (LGA1366). Still, it remains to be seen. Right now Nehalem has at least a 20% clock-for-clock, core-for-core lead over Phenom II, which means that BD needs to bring at least 30% in order to truly beat Nehalem and at least 40% to compete with the 2600K. Of course that's on a core-for-core and clock-for-clock level, but considering the way JF keeps talking, it won't be that way when AMD compares BD.
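(Rough arithmetic behind those percentages, treating them as clock-for-clock, core-for-core estimates rather than measured numbers: if Nehalem is ~20% ahead of Phenom II per core, BD needs a bit more than a 20% per-core gain over Phenom II just to pull ahead, hence ~30%; and if you assume Sandy Bridge adds very roughly another 10-15% on top of Nehalem, 1.20 x 1.15 ≈ 1.38, which is roughly where a ~40% target for the 2600K comes from.)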

I would say wait or go depending on your usage. If you plan on gaming, go for it. It's a nice price and performs very well. If not mainly gaming, I would wait to see what happens. Maybe it will outperform it and lower prices for Nehalem, maybe not. I am a bit wary because AMD keeps a lid on it, but still, that's better than letting it get hyped up too high.
 
Thanks for that feedback, I agree with everything you said.

My uses are rendering, encoding, and highly threaded apps. Gaming is next to last on my list.

I will be upgrading to Ivy Bridge as well, as I don't really care for the 1155 socket all that much. Not that there's anything wrong with it, but I'd rather upgrade to the enthusiast socket, and to a 6-8 core IB processor.

The 2600K is a beast, but the 980X is a monster when it comes to performance with highly threaded apps and the like, which is where I will benefit.

For gaming, the 2500K is plenty for people.

I guess I just wanted to know if BD will perform on par with the 980X, or if it will be on par with, say, a 950 or 2600K. All the info out there is about the architecture; I wish more info about its real-world performance was available.

I don't know if I can pass up that 980X deal. I can now see why my buddy wants the 990X. Has anyone seen this vid of it hitting 7 GHz?
http://www.youtube.com/watch?v=b0X9LSLDM6E&feature=player_detailpage#t=197s

Thanks again Jimmy.
 


I can't see that. If it spanks the 2600K by 10% or so, it will probably be priced accordingly (say at $360 USD).
Now if it were to perform 50% better, on the other hand, it would end up as a $1000 chip, but it's doubtful it will do that.
 


Yeah, my definition of spanking would be a minimum 25% performance increase clock for clock, as that would convincingly beat the 980X as well. I don't see it happening though.

If it matches the 2600K, it'll likely be priced at $300 just to undercut it. We'll see how it pans out.
 


You know, it would be kind of nice for AMD to be on top of the Tom's Hardware CPU chart, or even on equal footing, even if just for a few months for a change. That hasn't happened since early 2006. This will no doubt change again once Intel gets 22nm CPUs out and then moves on to 14nm. Somehow I find it would be difficult for AMD to keep up once the 22nm CPUs come out. If Zambezi performs well (better than Sandy Bridge), I might just decide to build a PC based on that CPU.

 


I have been in CA all week, and head out to Beijing and Taipei on Saturday morning. Won't see an engineer for a while.



I don't think anyone is going to get much out of CeBIT. People are hyping it up but I am not aware that they are going to say anything.
 


Erm, could you explain that in terms of potatoes? Again?

😀
 


Good find - thanks. What I see is that BD's L3 cache will probably be clocked at a lower speed, and that the long pipes mean higher clocks, as in Intel's Netburst days.

So this would mean that BD will probably have higher L3 latency than SB, and that pipeline stalls due to a branch misprediction will be worse than SB's.
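As a rough illustration of why the longer pipes hurt on mispredicts (made-up numbers, just to show the shape of the math): extra cycles per instruction ≈ branch frequency x misprediction rate x flush penalty. With roughly 20% of instructions being branches, a ~5% misprediction rate, and a flush penalty close to the pipeline depth, a ~15-stage design loses about 0.2 x 0.05 x 15 ≈ 0.15 cycles per instruction, while a ~20-stage design loses about 0.20, so each extra stage costs a bit of IPC that the higher clock has to buy back.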

One thing about SB that I haven't seen BD competing with - SB retains the decoded instructions in a micro-op cache, so that if those same instructions are repeated within the cache limits, SB merely has to play them back rather than decoding the x86 code again. This not only speeds things up for looped instructions, but also saves a lot of power as decoding x86 instructions requires a lot of hardware - 1 complex and 3 simple decoders for SB anyway.
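Just to make the idea concrete, here's a toy sketch of the concept in Python (a hypothetical cache keyed by instruction address with a stand-in decode function; not how SB's hardware is actually organized):

# Toy model of a decoded-uop cache: pay the x86 decode cost once,
# then replay the stored uops whenever the same address comes around again.
uop_cache = {}

def decode_x86(raw_instruction):
    # Stand-in for the expensive decoders (1 complex + 3 simple on SB).
    return ["uop:" + raw_instruction]

def fetch_uops(address, raw_instruction):
    if address in uop_cache:                 # hit: the decoders stay idle
        return uop_cache[address]
    uops = decode_x86(raw_instruction)       # miss: decode once and remember
    uop_cache[address] = uops
    return uops

# A tight loop hits the same addresses every iteration, so after the first
# pass everything is served from the uop cache.
for _ in range(3):
    fetch_uops(0x400000, "add eax, ebx")
    fetch_uops(0x400004, "cmp eax, ecx")

In real hardware the win is both front-end bandwidth and power, since the decode logic can sit idle on hits, as the paragraph above describes.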

The most interesting part of Sandy Bridge’s front-end is the new uop cache, which promises to improve performance and power consumption. Sandy Bridge’s uop cache is conceptually a subset of the instruction cache that is cached in decoded form, rather than a trace cache or basic block cache. The uop cache shares many of the goals of the P4’s trace cache. The key aims are to improve the bandwidth from the front-end of the CPU and remove decoding from the critical path. However, a uop cache is much simpler than a trace cache and does not require a dedicated trace BTB or complicated trace building logic. In typical Intel fashion, the idea first came back as an instruction loop buffer in Merom, then a uop loop buffer in Nehalem and finally a full blown uop cache in Sandy Bridge – a consistent trend of refinement.

Source
 
^Interesting concept.

I do not understand why L3 latency has to be so large. Good for Intel for replaying rather than wasting valuable processing time on F&D, and caching the result instead.

Hopefully, with the "module" technology, AMD can shorten things up for the CPU as well.
 
Well here is something:

http://www.tomshardware.com/news/amd-bulldozer-athlon-opteron-global-foundries,12259.html

http://blogs.amd.com/work/2011/02/21/amd-at-isscc-bulldozer-design-solutions/

At the core of Bulldozer processors are "modules", which integrates two "tightly linked" and slim processor cores. According to AMD, the cores integrate their own L1 caches, but share high-bandwidth resources such as a floating point unit, L2 cache as well as fetch, decode and prediction units to enable "chip multi-threading" (CMT).

This is supposed to be straight from AMD, and therefore it is what I think confuses people the most. We get told one thing, then another, now another. This states that BD will share multiple parts among the cores that normally are not shared, and calls it CMT.

That is what I have been saying all along. Maybe I am not saying it right.
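To picture what is private per core versus shared per module, here's my own rough sketch based on that description (the cache sizes are placeholders, not confirmed specs):

# Rough sketch of a Bulldozer-style module as AMD describes it: two slim
# integer cores with their own L1 caches, sharing the front end (fetch,
# decode, prediction), the FP unit and the L2. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class IntegerCore:
    name: str
    private_l1_kb: int = 16          # placeholder size

@dataclass
class Module:
    shared_front_end: str = "fetch/decode/predict"
    shared_fp_unit: str = "shared FPU"
    shared_l2_kb: int = 2048         # placeholder size
    cores: list = field(default_factory=lambda: [IntegerCore("core 0"), IntegerCore("core 1")])

# A hypothetical 4-module part would show up to the OS as 8 integer cores:
chip = [Module() for _ in range(4)]
print(len(chip), "modules,", sum(len(m.cores) for m in chip), "integer cores")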

Also:

AMD said that Bulldozer will run at clock speeds of up to 3.5 GHz. The processors will be manufactured by GlobalFoundries in a 32 nm process.

I think this should stop the rumors of BD starting at 3.5GHz. It looks like AMD will be taking it a bit slow and waiting for GF's 32nm to really ramp up in quality and maturity. We will probably see a range of CPUs like we do with current Phenom IIs, from the low 2GHz to the mid 3GHz range. As for overclocking, it has not been said if it will do any better than current 45nm Phenom IIs; since it is first-gen 32nm, I don't think it will do better than SB due to the lack of maturity.

Overall, it's still interesting. It's different and a change. For the better, who knows. I still say take it with a large grain of salt until we get some benchmarks.
 

They also said that BD would run from 10-100W, right? That's not a bad clock rate for even a dual-module part if they kept the same clocks and lowered the power by 30W or more. If that's for a quad-module part on release, those would be amazing clocks in that thermal envelope.

Is AMD releasing dual-, triple-, and quad-module parts in Q2?
 


10W would probably be idle, much like the ULP CPUs they and Intel make, which have a sub-35W max TDP and idle at a very low TDP, normally close to the max TDP of an Atom. The TDP on a SB 2920XM (the top mobile part) is 55W, and Intel always rates their CPUs at the max. That means the most it will hit is 55W under maximum load, which is pretty impressive considering this is with a HD 3000 GPU, and this CPU keeps up pretty well with some desktop parts too.

As for the module counts, the biggest benefit to this is that it's modular, like Intel's Terascale or Larrabee. It will allow AMD to put any number of modules in there, from one to eight or whatever they can handle on AM3+. Of course I would want to see how 3 or 5 would work, but it would be possible.
 
At the end of the day, when we have AMD benches compared to Intel's, then we can see who wins this round...

Bearing in mind that AMD will have to work hard to be as impressive as the Socket 939 Athlon, we'll just have to see.

Would be nice to see AMD do something good...

 
Modern IGPs are still weaker from what I'm seeing; the Xbox 360's Xenos chip is based on Radeon X1900s,
and should perform about the same.

The Xenos chip was also codenamed R500 and was actually a step up from the X1900. I remember reading about its performance numbers being much higher than the X1900.

Then we got the HD2900 which didn't impress too much.
 