AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post questions relevant to the topic, or information about the topic; anything else will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
The part of Read's conference call where he pointed to a mismatch between Llano production and chipset/mobo production just made me scratch my head - how could they not notice something as fundamental as that? There's more to the story, IMHO, that they're not telling us or the investors.

When you slash that many jobs at once, something is always going to fall through the cracks.

Doesn't Read look a lot like the 30 Rock guy?

http://www.nbc.com/30-rock/images/bios/ken-main.png


 
When you slash that many jobs at once, something is always going to fall through the cracks.

ROFL! However, when you lose your most fundamental business acumen - "Geez, each widget X I sell requires a widget Y to work - I should make an equal number of those, then..." - you've cut more than you can afford. I forget who makes AMD's chipsets - GF or TSMC or maybe some other foundry - so if production is spread out, maybe that led to the miscommunication. But then they should have caught it much more quickly - as in a day or two - instead of the 6 weeks or whatever it was.

Doesn't Read look a lot like the 30 Rock guy?

http://www.nbc.com/30-rock/images/bios/ken-main.png

Actually more like what I imagine Triny looks like - but I'll wait for Bourbon - er, Urban Legend's opinion 😀..

 
No reason to. I can't think of one benchmark that runs faster on the Athlon II than it does on the PII. Complain about L3 all you want; it doesn't slow down the same family of CPUs.

Stands to reason that an equal-clocked PD will be faster than an equal-clocked Trinity, and that much faster than an equal-clocked BD.

http://www.anandtech.com/bench/Product/188?vs=88&i=2.5.3.4.6.25.26.27.28.29.30.31.32.33.34.35.36.37.38.39.40.41.42.43.45.46


This is true, but L3 cache just isn't that important. I would rather have 3 AGUs/ALUs instead of 2 (6 IPC per module vs 4) than 8MB of slow L3 cache.

Not to mention it takes a lot of die space.

Within the same family, L3 cache only buys you 5-10%, but it adds way more than 5-10% to the die size - just look at the Athlon II X4 die size vs the Phenom II X4.

Look at Intel's designs: they give you less L3 cache but perform way better than Bulldozer.
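
For a rough sense of that trade-off, here's a back-of-envelope sketch in C. The die sizes are quoted from memory and only approximate, so treat the output as ballpark:

#include <stdio.h>

int main(void) {
    /* Approximate 45nm die sizes (from memory, not official figures):
       Deneb (Phenom II X4) ~258 mm^2, Propus (Athlon II X4) ~169 mm^2.
       The delta is mostly the 6MB L3 plus its interface logic. */
    double deneb = 258.0, propus = 169.0;          /* mm^2 */
    double l3_region = deneb - propus;             /* ~89 mm^2 */
    printf("L3 region: ~%.0f mm^2, ~%.0f%% of the Deneb die\n",
           l3_region, 100.0 * l3_region / deneb);  /* ~34% */
    printf("...bought for a typical 5-10%% performance gain\n");
    return 0;
}

Roughly a third of the die for single-digit average gains - that's the trade-off being argued about here.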
 
No reason to. I can't think of one benchmark that runs faster on the Athlon II than it does on the PII. Complain about L3 all you want; it doesn't slow down the same family of CPUs.

Stands to reason that an equal-clocked PD will be faster than an equal-clocked Trinity, and that much faster than an equal-clocked BD.

http://www.anandtech.com/bench/Product/188?vs=88&i=2.5.3.4.6.25.26.27.28.29.30.31.32.33.34.35.36.37.38.39.40.41.42.43.45.46

Yes, the L3 is the ONLY difference between an Athlon and a Phenom II. RIGHT...

The L3, frankly, doesn't offer a huge performance boost one way or the other. We're talking about saving maybe 20 CPU clock cycles if the data is in the L3 instead of RAM. It's an incredibly minute performance benefit at the cost of a lot of heat, die space, and cost. If the L3 had a 50 clock cycle access time, then maybe it would be worth it, but I view it as simply taking up useful space on the die.

This applies equally to Intel or AMD. I don't see the L3 as being quick enough to justify all the die space it takes up.
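
If anyone wants to sanity-check those latency claims on their own box, here's a minimal pointer-chase sketch (C, Linux/gcc). The 4MB/64MB working sets are just illustrative stand-ins for "fits in L3" vs "spills to RAM", and the loop carries some overhead, so read the output as ballpark:

/* chase.c - minimal pointer-chase latency sketch.
   Build: gcc -O2 chase.c -o chase */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define STEPS 10000000UL    /* dependent loads per measurement */

static double chase_ns(size_t bytes) {
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof(size_t));
    if (!next) return -1.0;

    /* Sattolo's algorithm: shuffle the identity into one big random
       cycle, so every hop is data-dependent and the prefetcher
       can't guess the next address. */
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    struct timespec t0, t1;
    size_t p = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t s = 0; s < STEPS; s++) p = next[p];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    if (p == (size_t)-1) puts("never happens");  /* keep the loop alive */

    free(next);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    return ns / STEPS;     /* average ns per dependent load */
}

int main(void) {
    printf("4MB working set:  %.1f ns/load (roughly L3-resident)\n",
           chase_ns(4u << 20));
    printf("64MB working set: %.1f ns/load (mostly main memory)\n",
           chase_ns(64u << 20));
    return 0;
}

The random cycle defeats the hardware prefetcher, so the per-load time approximates the raw access latency at each working-set size, and the gap between the two numbers is the saving being debated.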
 
So...remember when AMD had a clean sweep of console HW?

http://www.eurogamer.net/articles/digitalfoundry-the-curious-case-of-the-durango-devkit-leak

Beyond that, further information is sketchy and unreliable. DaE reckons that the current devkits were dispatched to studios in February, and feature Intel CPUs and a graphics card that carries the NVIDIA brand - but he doesn't identify either part more specifically. He also claims that the Durango kit features more than 8GB of memory (other sources have suggested 12GB), and that it is 64-bit in nature - at this point it's worth bearing in mind that dev hardware typically features double the RAM of retail kit in order to accommodate debugging tools and other systems. DaE also says that Microsoft is targeting an eight-core CPU for the final retail hardware - if true, this must surely be based around Atom architecture to fit inside the thermal envelope. The hardware configuration seems difficult to believe as it is so divorced from the technological make-up of the current Xbox 360, and we could find no corroborative sources to establish the Intel/NVIDIA hook-up, let alone the eight-core CPU.

Take it for what you will. Just goes to show why I've ignored all the console HW talk for the most part at this stage.
 
Yes, the L3 is the ONLY differences between an Athlon and Phenom II. RIGHT...

It is the only difference between an Athlon *II* and a Phenom II, which is what noob2222 said.

The L3, frankly, doesn't offer a huge performance boost one way or the other.

From the standpoint of the PC enthusiast, having *up to* 17% more performance (see the links from Tom's and Anand's that I posted on the previous page) seems pretty fair to me -- especially when the gains are much more consistent in gaming.

The point is that if your main concern is performance, then you'd rather have your Piledriver cores with plenty of L3 than without it at all. In Tom's review, the difference in peak power consumption between the Athlon II and Phenom II systems was roughly 25% in favor of the former; but that doesn't account for the 800 MHz (2.4 GHz vs 3.2 GHz) that the Athlon II was missing.
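
To put rough numbers on that last point - a back-of-envelope sketch using the figures above, and assuming (optimistically) that performance scales linearly with clock:

#include <stdio.h>

int main(void) {
    /* Figures quoted above, illustrative only: the Phenom II system
       drew roughly 25% more peak power, but also ran an 800 MHz
       higher clock (3.2 GHz vs 2.4 GHz). */
    double clock_ratio = 3.2 / 2.4;  /* ~1.33x performance from clock alone */
    double power_ratio = 1.25;       /* ~1.25x peak power at the wall */
    /* Clock-normalized efficiency, before counting the L3's own uplift: */
    printf("perf/W ratio: %.2f in the Phenom II's favor\n",
           clock_ratio / power_ratio);  /* ~1.07 */
    return 0;
}

In other words, the ~25% power gap roughly washes out once you normalize for the clock deficit, before the L3's own contribution is even counted.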
 
So...remember when AMD had a clean sweep of console HW?

http://www.eurogamer.net/articles/digitalfoundry-the-curious-case-of-the-durango-devkit-leak

Beyond that, further information is sketchy and unreliable. DaE reckons that the current devkits were dispatched to studios in February, and feature Intel CPUs and a graphics card that carries the NVIDIA brand - but he doesn't identify either part more specifically. He also claims that the Durango kit features more than 8GB of memory (other sources have suggested 12GB), and that it is 64-bit in nature - at this point it's worth bearing in mind that dev hardware typically features double the RAM of retail kit in order to accommodate debugging tools and other systems. DaE also says that Microsoft is targeting an eight-core CPU for the final retail hardware - if true, this must surely be based around Atom architecture to fit inside the thermal envelope. The hardware configuration seems difficult to believe as it is so divorced from the technological make-up of the current Xbox 360, and we could find no corroborative sources to establish the Intel/NVIDIA hook-up, let alone the eight-core CPU.

Take it for what you will. Just goes to show why I've ignored all the console HW talk for the most part at this stage.

Most of the speculation regarding console hardware does turn out to be rubbish, indeed. It would be surprising to see Nvidia completely missing in the next generation.
 
This is true, but L3 cache just isn't that important. I would rather have 3 AGUs/ALUs instead of 2 (6 IPC per module vs 4) than 8MB of slow L3 cache.

Not to mention it takes a lot of die space.

Within the same family, L3 cache only buys you 5-10%, but it adds way more than 5-10% to the die size - just look at the Athlon II X4 die size vs the Phenom II X4.

Look at Intel's designs: they give you less L3 cache but perform way better than Bulldozer.

The first line from your comment is pure speculation on your part, whereas the presence of L3 does add up to 17% more performance. Actually, by looking at Anand's review, the "Productivity" test suite shows a 22% advantage for the Phenom II in comparison to the L3-cacheless Athlon II. But, of course, the results won't always be as significant as that.


 
Most of the speculation regarding console hardware does turn out to be rubbish, indeed. It would be surprising to see Nvidia completely missing in the next generation.

From MS's perspective, since it appears they want to unify their HW/SW platform, moving to x86 makes sense, especially if the new Xbox will be running some version of the Win8 kernel. Looking at CPUs, there are really three choices, but two of them are unacceptable:

1: SB/IB: Unlikely. Intel controls production, and Intel is selling them to consumers like hotcakes. It's not likely Intel could produce enough to meet MS's demand.
2: BD/PD: Possible, but unlikely. The architecture is not well suited to the embedded space [low IPC], and power/heat is a concern.
3: Atom: Most likely; it meets the thermal constraints and is produced on older nodes (45nm/32nm) that are more mature than the newer ones, increasing yields.

From an architecture/power-draw/yield perspective, Atom has always been the logical choice if MS decided to move to x86.

As for GPUs, I've always viewed the AMD 5000/6000 / NVIDIA 400/500 series GPUs as most likely. Any lower and you drop OGL 4.0 HW support; any higher and you run into yield issues.
 
From MS's perspective, since it appears they want to unify their HW/SW platform, moving to x86 makes sense, especially if the new Xbox will be running some version of the Win8 kernel. Looking at CPUs, there are really three choices, but two of them are unacceptable:

1: SB/IB: Unlikely. Intel controls production, and Intel is selling them to consumers like hotcakes. It's not likely Intel could produce enough to meet MS's demand.
2: BD/PD: Possible, but unlikely. The architecture is not well suited to the embedded space [low IPC], and power/heat is a concern.
3: Atom: Most likely; it meets the thermal constraints and is produced on older nodes (45nm/32nm) that are more mature than the newer ones, increasing yields.

From an architecture/power-draw/yield perspective, Atom has always been the logical choice if MS decided to move to x86.

As for GPUs, I've always viewed the AMD 5000/6000 / NVIDIA 400/500 series GPUs as most likely. Any lower and you drop OGL 4.0 HW support; any higher and you run into yield issues.

But wouldn't Zacate (or even Trinity) be a viable contender alongside Atom in the x86 space? Also, with Windows 8 bringing WinRT to light, wouldn't ARM be at least a possibility?
 
From MS's perspective, since it appears they want to unify their HW/SW platform, moving to x86 makes sense, especially if the new Xbox will be running some version of the Win8 kernel. Looking at CPUs, there are really three choices, but two of them are unacceptable:

1: SB/IB: Unlikely. Intel controls production, and Intel is selling them to consumers like hotcakes. It's not likely Intel could produce enough to meet MS's demand.
2: BD/PD: Possible, but unlikely. The architecture is not well suited to the embedded space [low IPC], and power/heat is a concern.
3: Atom: Most likely; it meets the thermal constraints and is produced on older nodes (45nm/32nm) that are more mature than the newer ones, increasing yields.

From an architecture/power-draw/yield perspective, Atom has always been the logical choice if MS decided to move to x86.

As for GPUs, I've always viewed the AMD 5000/6000 / NVIDIA 400/500 series GPUs as most likely. Any lower and you drop OGL 4.0 HW support; any higher and you run into yield issues.

I'd be incredibly surprised if Intel gets into the console market. The margins are razor thin.
 
http://www.dvhardware.net/article55308.html
^I like this. It should give budget builders a good upgrade path: get a low-end but discrete GPU and a quad-core Athlon II for less than a similar Trinity model. Down the road, when Steamroller comes along, you can upgrade to that, and you always have the option of GPU upgrades. You could do the same on an Intel platform, but at least AMD has a similar option now.

http://news.softpedia.com/news/AMD-s-1090FX-Chipset-Comes-in-2013-with-Steamroller-Support-283650.shtml
A new chipset with Steamroller support - I hope for some good improvements.


I'd be incredibly surprised if Intel gets into the console market. The margins are razor thin.
Profits are minimal, but you would have increased sales, so you could make it look good on paper. 😉
 
http://www.dvhardware.net/article55308.html
^I like this. It should give budget builders a good upgrade path: get a low-end but discrete GPU and a quad-core Athlon II for less than a similar Trinity model. Down the road, when Steamroller comes along, you can upgrade to that, and you always have the option of GPU upgrades. You could do the same on an Intel platform, but at least AMD has a similar option now.
http://news.softpedia.com/news/AMD-s-1090FX-Chipset-Comes-in-2013-with-Steamroller-Support-283650.shtml
A new chipset with Steamroller support - I hope for some good improvements.
Ah, am I missing something about the upgrade path?
preparing three Athlon II X4 processors with a Socket FM2
Vishera’s complete compatibility with today’s AM3+ platform
:??:
 
But wouldn't Zacate (or even Trinity) be a viable contender alongside Atom in the x86 space? Also, with Windows 8 bringing WinRT to light, wouldn't ARM be at least a possibility?

WinRT is a possibility, but if you work on the theory that MS wants their console to have easy PC connectivity, it seems silly they wouldn't go x86.

As for Zacate/Trinity: embedded systems [which consoles are] are very different from PCs when it comes to hardware requirements. Different types of workloads are expected than on your generic PC. As performance will not improve over the system's lifetime, you have to have headroom to gain performance through software optimization over time. You care how quickly you can access main memory, you care how many clock cycles it takes to overcome a pagefault, you care how many cycles it takes to access the GPU framebuffer. On a PC, any performance loss due to these factors is negligible; on a console, it's a major cause for concern.

BD/PD cores are low IPC and high power [for their speed]. Not a good combination for an embedded system. And let's face it, the built-in GPU isn't exactly top tier. And you have TSMC's yield issues to consider; does MS really believe TSMC could ramp up production by a few million chips with acceptable yields?

Hence, Atom is a FAR more attractive option for MS.
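
As an illustration of one of those costs, here's a minimal Linux sketch that times soft page faults on first touch. Purely illustrative - a console OS manages memory very differently - but it shows the kind of per-page overhead being talked about:

/* fault.c - average cost of a soft page fault on first touch.
   Build: gcc -O2 fault.c -o fault (Linux) */
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    /* Map 256MB of fresh anonymous pages; the first write to each
       page takes a soft page fault, so touching them all once and
       averaging gives a rough per-fault cost. */
    size_t len = 256u << 20;
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t off = 0; off < len; off += page)
        buf[off] = 1;                   /* first touch -> soft fault */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("~%.0f ns per soft page fault (%zu pages)\n",
           ns / (double)(len / page), len / page);
    munmap(buf, len);
    return 0;
}

On a PC you'd shrug at numbers like these; on a fixed-hardware console, they're exactly the overheads developers budget for.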
 
http://semiaccurate.com/2012/01/18/xbox-nextxbox-720-chips-in-production/

So, time for a little speculation. Oban is being made by IBM primarily, so that almost definitively puts to bed the idea of an x86 CPU that has been floating. We said we were 99+% sure that the XBox Next/720 is a Power PC CPU plus an ATI GCN/HD7000/Southern Islands GPU, and with this last data point, we are now confident that it is 99.9+%. Why? Several licensing agreements that cover what can be made where will enrich a fleet of lawyers if Oban is x86, but do not preclude the possibility entirely, hence the last .1%.
 
WinRT is a possibility, but if you work on the theory that MS wants their console to have easy PC connectivity, it seems silly they wouldn't go x86.

As for Zacate/Trinity: embedded systems [which consoles are] are very different from PCs when it comes to hardware requirements. Different types of workloads are expected than on your generic PC. As performance will not improve over the system's lifetime, you have to have headroom to gain performance through software optimization over time. You care how quickly you can access main memory, you care how many clock cycles it takes to overcome a pagefault, you care how many cycles it takes to access the GPU framebuffer. On a PC, any performance loss due to these factors is negligible; on a console, it's a major cause for concern.

BD/PD cores are low IPC and high power [for their speed]. Not a good combination for an embedded system. And let's face it, the built-in GPU isn't exactly top tier. And you have TSMC's yield issues to consider; does MS really believe TSMC could ramp up production by a few million chips with acceptable yields?

Hence, Atom is a FAR more attractive option for MS.
MS wants to keep the console a closed system: no modification, and definitely no running user-generated code if possible. It's extremely simple to connect the console to a PC either way, since they both use the same network protocols. Software sales and Xbox Live are where MS makes most of its money from the console. They aren't going to make it easy to emulate/pirate games by using x86 code paths. x86 also carries a lot of overhead that is not needed, with all the legacy support; in the end you'd wind up with an easily cracked and inefficient chip. PowerPC worked well in the 360; there's no reason for Microsoft to switch to anything x86.

Everything else you said about Piledriver/Bulldozer is pretty much just BS. HSA can easily be implemented and improved on in a console environment for an APU, and they offer great performance/watt. The integrated GPU doesn't have to be the only GPU in a console, since it could easily be made to do physics while a main card does rendering. Crossfire with a main card could also work great, since you can build it to have access to a unified memory system as well as much more bandwidth than a PC. This is what is rumored for the PS4, but unless much of the x86 code is modified, piracy will just kill the console.
 
Upgrade to Steamroller, the successor to Piledriver (Vishera), which is going to be on the FM2 socket.

I don't think there has been any confirmation that Kaveri (which is based on Steamroller and shall replace Trinity) will use the FM2 socket.

The only thing I remember from AMD presentations is that, in the Performance segment, there won't be a replacement for Vishera in 2013.

http://wccftech.com/wp-content/uploads/2012/02/amd_fad2012_cr04-635x351.jpg

The links you posted don't seem to offer any confirmation on Kaveri's future socket.

 
MS wants to keep the console a closed system: no modification, and definitely no running user-generated code if possible. It's extremely simple to connect the console to a PC either way, since they both use the same network protocols. Software sales and Xbox Live are where MS makes most of its money from the console. They aren't going to make it easy to emulate/pirate games by using x86 code paths. x86 also carries a lot of overhead that is not needed, with all the legacy support; in the end you'd wind up with an easily cracked and inefficient chip. PowerPC worked well in the 360; there's no reason for Microsoft to switch to anything x86.

But remember, MS is pushing BIG on using one platform for everything. The other major issue is that on the PPC front, you really can't do much better than what the 360 already has inside it. And let's not forget that most 360 games already have a Windows version... If the new Xbox runs a modified Win8 kernel, why would it be hard to imagine MS releasing a slightly modified PC version of its games over XBL?

I can see reasons both for sticking with PPC and for moving to x86. MS is far more likely to move to x86 than Sony, though...

http://semiaccurate.com/2012/01/18/xbox-nextxbox-720-chips-in-production/

Aren't rumors fun?



Two GPUs greatly increase cost and heat output, two things MS doesn't want. Sony tried that when they kept the PS2 hardware in the PS3, and lost a ton of money as a result. Not happening. Anyone who ever thought any console would ship with two GPUs doesn't have a clue what they're talking about.
 
The first line from your comment is pure speculation on your part, whereas the presence of L3 does add up to 17% more performance. Actually, by looking at Anand's review, the "Productivity" test suite shows a 22% advantage for the Phenom II in comparison to the L3-cacheless Athlon II. But, of course, the results won't always be as significant as that.


I rarely go by "up to" statements. For client computers, L3 cache is a complete waste of space. "Up to" statements are purely marketing in my eyes; I'm an average-performance guy, and L3 cache is a 5% boost or less in 90% of apps.

And it does take up quite a bit of die area that could be used for other things, such as better per-core performance.
 
It depends on how many Steamrollers they roll out ...

They will certainly do a Trinity-like version with an iGPU, since Steamroller is the first true HSA attempt. Whether they do one that fits the AM3+ socket is yet to be seen ... who knows.

I rarely go by "up to" statements. For client computers, L3 cache is a complete waste of space. "Up to" statements are purely marketing in my eyes; I'm an average-performance guy, and L3 cache is a 5% boost or less in 90% of apps.

And it does take up quite a bit of die area that could be used for other things, such as better per-core performance.

The thing is, one program that does very much receive a boost from L3 cache is .... 90% of all GAMES. http://www.xbitlabs.com/articles/cpu/display/phenom-athlon-ii-x2_7.html

If you're building a gaming computer, you don't want a chip without L3 cache - you've pretty much crippled yourself across the board.

Mostly, it looks like only video editing is unaffected.
 