AMD Piledriver rumours ... and expert conjecture

Status
Not open for further replies.
We have had several requests for a sticky on AMD's yet to be released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
Well, to be honest, I was more depressed by my slower-clocked X6 beating it (so Intel optimizations shouldn't hurt there). Have you heard of any improvements to PD that might help the FPUs work more efficiently (beyond just clock speed increases)?


Honestly, I doubt it. The issues aren't so much with the FPUs themselves as with the scheduling/caching components that stall everything else out. There is the possibility that AMD has made significant headway on those components, though I doubt it.

Best thing AMD could do for now is to split the L2 cache and give each "core" dedicated access to 1MB. And while that wouldn't fix the scheduling issues, it would fix the latency and cache access arbitration that plagues the design.
 
The depressing part for me, at least, is that although PD seems to be making some good gains over BD, it is still severely hampered by its shared FPUs in float-heavy computations. BD is barely competitive in float with the X6 and 1st-gen i7, and when you compare it with SB, Ivy, or Gulftown it is blown away. I think AMD really jumped the gun with their slashing of float resources. Yes, GPUs do offer amazing float performance, but they are currently severely hampered by limited RAM and slow PCI-E and thus are only optimal for certain problems. And while software is being written to take advantage of GPUs and coprocessors, it will be a long time before it is really prevalent, at which point BD and PD will be long forgotten. APUs have an interesting future, but they are even more 'bleeding edge' than discrete GPU processing and thus are even farther from full acceptance. Not to mention, I can't get a 4-module BD or PD with a graphics core for float, so why slash the float units in the CPU?
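The slow-PCI-E point can be made concrete with a toy break-even model. All the throughput numbers below are made-up ballpark figures for illustration, not measurements of any real part:

```python
# Rough break-even sketch for GPU float offload (illustrative numbers only).
# Offloading pays only when GPU compute savings exceed the PCI-E copy cost.

def offload_wins(n_floats, flops_per_float,
                 cpu_gflops=50.0,      # assumed CPU float throughput
                 gpu_gflops=1000.0,    # assumed GPU float throughput
                 pcie_gb_s=8.0):       # ~PCIe 2.0 x16 per direction
    work = n_floats * flops_per_float                 # total FLOPs
    cpu_time = work / (cpu_gflops * 1e9)
    transfer = 2 * n_floats * 4 / (pcie_gb_s * 1e9)   # copy in + out, 4 B/float
    gpu_time = transfer + work / (gpu_gflops * 1e9)
    return gpu_time < cpu_time

# Light work per element: transfer dominates, CPU wins.
print(offload_wins(10_000_000, 10))    # False
# Heavy work per element: GPU wins despite the copies.
print(offload_wins(10_000_000, 1000))  # True
```

Which is exactly why GPUs are "only optimal for certain problems": the arithmetic intensity has to amortize the bus trip.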
Do we know when AMD will be using the GPU as a floating-point processor? I think that is going to be with Steamroller, but I haven't heard anything on the timing for that yet.


Reading the articles, I am seeing A LOT of people bashing AMD for changing to a new socket next generation. Did they all just miss Intel doing the same thing for three generations straight? I guess the one-generation break makes Intel so much nicer to their customers. :lol:
 
But hasn't it been said that this clock mesh tech is only effective up to a certain speed? BD's problem was not that it used too much power. At stock it was not that bad for what it is. It was when overclocked that it would jump to almost 2x the power use for the system.



Memory speed is in no way holding either back. Well, not Intel at least; it has been shown that there are almost no noticeable improvements from DDR3-1333 to DDR3-1866.

What will help is stacked RAM as it will cut latency down to near nothing.


I was talking about Trinity, and in this regard, yes, memory speed is holding back GPU performance on the APU without a single doubt.
 
BDs are hampered in FPU performance due to their shared front end and L2 cache system. A single BD module does have two 128-bit FPUs, with each one assigned to a different "core". They can be used to process a single 256-bit instruction or two separate 128-bit instructions. Due to the front-end scheduler arbitration, rarely will both FPUs be used at the same time unless you deliberately separate the FPU calculations into different threads, and then you run into the caching situation.
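The "separate the FPU calculations into different threads" idea looks like this in sketch form. Pure Python, so the GIL means no real speedup here; it only illustrates how float work would be decomposed so each half can land on a different core's FPU (a native SIMD workload would be split the same way):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def fpu_kernel(chunk):
    # Stand-in for a float-heavy loop that would occupy one core's FPU.
    return [math.sqrt(x) * math.sin(x) for x in chunk]

data = [float(i) for i in range(1000)]
mid = len(data) // 2

# Split the float work so each half can be scheduled on a different "core"/FPU.
with ThreadPoolExecutor(max_workers=2) as pool:
    left, right = pool.map(fpu_kernel, [data[:mid], data[mid:]])

result = left + right
print(len(result))  # 1000
```

Even with this decomposition, both halves still contend for the module's shared L2, which is the caching situation mentioned above.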

Also make 100% sure that you're not using something that's been linked to Intel's math library. Non-Intel CPUs might as well not even have modern SIMD FPUs then.
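The complaint here is about libraries that pick code paths from the CPUID vendor string rather than from the actual feature flags. A toy model of that dispatch pattern (hardcoded strings for illustration, not real CPUID reads):

```python
# Toy model of vendor-string dispatch, the pattern the post complains about.
# A robust library would test the SSE2/AVX feature bits instead of the vendor.

def pick_code_path(vendor, has_sse2):
    if vendor == "GenuineIntel" and has_sse2:
        return "fast SIMD path"
    return "generic x87 path"   # non-Intel CPUs fall here even with SSE2

print(pick_code_path("GenuineIntel", True))   # fast SIMD path
print(pick_code_path("AuthenticAMD", True))   # generic x87 path
```

So an AMD chip with full SSE2 support can still be routed to the slow generic path, which is why benchmarks built on such a library understate non-Intel FPU performance.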


Correct! And why do people keep saying memory speeds don't matter? Take a look at 1333 speeds vs 2133 on Llano: sure, on the CPU part it doesn't matter much, but on the GPU part it does. Look at the 6670 DDR3 vs GDDR5 benchmarks.
 
But why is the turbo still at 4.2GHz? I mean, that's barely 5%!

A 4.6GHz single module turbo sounds right :lol:


If the 8-core is releasing at 4.0GHz stock, I wonder where their 4- and 6-core parts will be releasing?

Now, AMD, please price it right this time.

Also, not to upset anyone, but I would take this site with a grain of salt, and then cut that grain of salt in four and only use 1/4 of it; this site is almost a troll of its own.

I'm also pretty sure we rarely use the FPU in today's games/encoding and even rendering. I'm not 100% sure, but I thought my A+ book said we only use floating point for less than 7-8% of CONSUMER (NOT SERVER) applications.

OpenCL will only make this number smaller. The real reason Bulldozer is so slow is its cache speeds and the ALU/AGU count being only 66% of what it was on the Phenom II, not to mention IPC is down from 3 to 2 per core in each module. If only one thread is being used on a module, I believe it can use all 4, but how often does that happen? And there won't be much benefit because the ALU/AGU units are 33% fewer.
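The 66% figure can be sanity-checked from the commonly cited per-core resource counts (3 ALUs and 3 AGUs per K10 core versus 2 of each per Bulldozer integer core; the post's "66%" rounds down):

```python
# Per-core integer execution resources: Phenom II (K10) vs Bulldozer.
k10 = {"ALU": 3, "AGU": 3}
bd  = {"ALU": 2, "AGU": 2}

for unit in k10:
    ratio = bd[unit] / k10[unit]
    print(f"{unit}: {ratio:.0%} of K10")  # ALU: 67% of K10, then AGU: 67% of K10
```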

On top of that, BD was supposed to be released with a higher clock speed; the site that compares an A10 to an i7 even said this. Yes, Anandtech, which is on a mission to make AMD look bad by comparing an A10 to an i7. Dumb Intel lovers. This is why I love Tom's way more: it compares Trinity with its ACTUAL competition, which will be an i3, not an i7.
 
Are we talking about this review? The A10 is compared to i3, i5 and i7.
 
The next 3D memory is code named HBM by JEDEC.

http://www.jedec.org/category/technology-focus-area/3d-ics-0

"The High Bandwidth Memory (HBM) task group in JC-42 has been working since March 2011 on defining a standard that leverages Wide I/O and TSV technologies to deliver products ranging from 128GB/s to 256GB/s. The HBM task group is defining support for up to 8-high TSV stacks of memory on a data interface that is 1024-bits wide. This interface is partitioned into 8 independently addressable channels to support a 32-byte minimum access granularity per channel. The specification is expected to be completed in late 2012 or early 2013."

Which is twice the width and adds 4 layers to the WideIO spec.
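The quoted JEDEC figures are internally consistent. A quick check (the burst length and per-pin data rates below are my assumptions, not stated in the spec text above):

```python
# Sanity-check the JEDEC HBM numbers quoted above.
interface_bits = 1024
channels = 8

bits_per_channel = interface_bits // channels
print(bits_per_channel)             # 128 bits per channel

# 32-byte minimum access = 128-bit channel * burst length 2 (assumed).
print(bits_per_channel * 2 // 8)    # 32 bytes

# 128-256 GB/s corresponds to 1-2 Gb/s per pin on a 1024-bit interface.
for gbps_per_pin in (1.0, 2.0):
    print(interface_bits * gbps_per_pin / 8)  # 128.0, then 256.0 (GB/s)
```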
 
Total BS. That is really all I have to say; sometimes it doesn't matter how much proof someone has, some people are in 100% denial. Also, show this other site and I'll take a look at it.

http://www.techspot.com/review/452-amd-bulldozer-fx-cpus/page11.html

One of the two websites is not telling the truth, so how can you say "how much proof do you need" when one site says lower and the other says higher?

Logic should tell you that running half the CPU will draw less power even if you're running it slightly faster (8150 vs 4170).
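That intuition follows from the usual dynamic-power relation P ≈ C·V²·f: halving the active modules roughly halves the switching capacitance C, while a clock bump only raises f (and usually V a little). A sketch with illustrative, made-up voltage/frequency numbers:

```python
# Dynamic power scales roughly as P ~ C * V^2 * f.
def dyn_power(c_rel, volts, freq_ghz):
    return c_rel * volts ** 2 * freq_ghz   # arbitrary units

p_8150 = dyn_power(1.0, 1.30, 3.6)   # 4 modules active, illustrative V/f
p_4170 = dyn_power(0.5, 1.35, 4.2)   # 2 modules, higher clock and voltage

print(p_8150 > p_4170)   # True: half the modules should still draw less
```

In practice the shared front end, L3, and uncore aren't halved along with the modules, which is why measured deltas come out smaller than this simple model suggests.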



A little note, when you hit the reply button to something like this you get

noob2222 wrote :


Looks like PD is already doomed. This one benchmark on this one website showed that a dual-module CPU is slower than a quad-core CPU in this one particular benchmark, and the dual-module CPU draws just as much power as its quad-module big brother when running their "burn-in test".


isn't it funny that the 8150 and the 4170 have the same power draw?


http://static.techspot.com/article [...] /Power.png


198W to 252W ... or 212 to 214... So which one is wrong, they can't both be right...

The website shows up ... techspot.com ...
 
Like I said, that is an 8150 with disabled modules and a 4.2GHz clock in that review. Whether its power consumption numbers are 100% right, I don't know, but AMD could use lower-end 8-core Bulldozers, lock 2 modules, and increase the clock rate. Just saying it's possible. If I had to pick between the two sites, I would say TechSpot is more reliable. I almost feel like buying one to test it.
 
The 8120 turned out to be a pretty good option after they did price cuts. If someone had an AM3+ system with an older PII in it, I could see the 8120 being a decent upgrade for them.

Problem is, most of us just use the CPU for gaming. And it's not better than the X4/X6 chips when you look at that.

http://www.anandtech.com/show/4955/the-bulldozer-review-amd-fx8150-tested/8

It gets a win here and there, ties as well. But most of the time it's on the bottom looking up at the X4/X6 chips. Flip to the next page and look at the power consumption under load, and it's even worse. When gaming, not only are you slower, but you are using more power to do it.

For Photoshop use or video converting, then yes, it's faster. But I'd bet for most of us web browsing, watching "TV", and gaming are our biggest uses of computer time.
 

I like that analysis. I think I'm going to print it out and stick in on my wall to stop myself upgrading to the FX-series via impulse buy 😛.
 

It's even easier... Get a Bulldozer pic on your wall and draw a big X on it with this legend: "Big and slow" 😛

If PD doesn't deliver, you can change the pic and keep the legend, haha.

Cheers!
 

I've said this before, but it depends on whether you're going to overclock or not. I bought an 8150 to replace my 1090 because I'm an upgrade addict and I wanted something new to play with for overclocking. My 1090 topped out at 4GHz. At stock clocks, the X6 just trounced the FX. With both at 4GHz, the X6 was still faster, but not running away with it. At 4.3, my FX is now faster, albeit just by a little (my WEI score is now 7.8, up from 7.6! Woohoo! 😀 ). I OC'd to 4.5, where it was stable enough to run a couple of benchies, but I don't want to keep it there due to cooling; at 4.5, though, the FX is quite a bit faster than my X6. Once I get my H100 in a couple of weeks, I'll crank it back up to 4.5 or better and leave it there 24/7.

So if you're wanting to overclock and have good cooling, an FX is a nice little upgrade over your Phenom. If not, then stick with your Phenom 'cause it ain't worth it. While my FX is an upgrade, I'm not sure it's worth the $160 of an 8120. For me it was, because it's a lot of fun to overclock, which is mainly all I wanted, and it'll hold me over till Piledriver hits, at which point I'll decide if I want to stick with AMD for another round or finally jump ship.
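A hypothetical back-of-the-envelope from those data points: if the X6 at 4.0 GHz and the FX at about 4.3 GHz perform roughly the same, the implied per-clock ratio follows, and from it the FX clock needed to match any X6 clock (assuming performance scales linearly with clock, which real workloads won't quite follow):

```python
# From the post: X6 @ 4.0 GHz ~ FX @ 4.3 GHz at parity.
x6_clock, fx_breakeven = 4.0, 4.3
ipc_ratio = x6_clock / fx_breakeven          # FX per-clock throughput vs X6
print(round(ipc_ratio, 2))                   # 0.93

def fx_clock_to_match(x6_ghz):
    # Assumes performance scales linearly with clock (optimistic).
    return x6_ghz / ipc_ratio

print(round(fx_clock_to_match(4.0), 2))      # 4.3
```

So a ~7% per-clock deficit is why the FX only pulls ahead once it is clocked into the mid-4s.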
 
No way will I be able to reach 4.352GHz, limited by my mATX motherboard and Hyper TX3 cooler, not to mention RAM.

You might be surprised. Mine is running 4.3 on virtually stock voltage (1.37V). It wouldn't take a lot.

But if you're "iffy" on overclocking, then you should avoid Bulldozer, 'cause it's only better than Phenom at clocks in the mid 4s.
 
AMD announced that Piledriver chips will give a 10-15% increase in IPC performance, but I think Piledriver will still lag behind the Core i5s, because if you look at the price-to-price comparison of the i3 2100 and the A8, the i3 tops it at gaming. I'm asking because I'm going to buy a budget gaming mouse, a Logitech G300 or Razer Abyssus, and if I buy an AMD system I would like all my system colors to be red, so I'm waiting for the release of the FX Piledriver benchmarks just for the purchase of a mouse.
 

I'm not iffy, I just doubt I'll get past 4.2GHz 😛

If it's a minor upgrade, I won't be able to justify the $160 price tag of an FX-8120, in the same way I know everything I want to play runs fine on my HD 4770 @ 1080p, high-med settings.
 