AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic; anything else will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame-baiting comments about the blue, red, and green teams and they will be deleted.

Enjoy ...
 


How well can it OC, I wonder?
 



40% is nothing. The rest is just formalities that happen every day.
 

It will close the gap, but not really by enough to be a large factor. Sandy -> Ivy was very small; Bulldozer -> Piledriver is guaranteed to be at least that much of an improvement from the clock increase alone. Depending where you look, the FX8350 should be as powerful as or more powerful than an i7 3770k in things like video conversion and rendering, while it will most likely be slower than an i3 at gaming.
 
And now, we find FPS is a horrible benchmark to use for gaming:

http://techreport.com/articles.x/21516/1
http://techreport.com/articles.x/23246

Granted, I knew some of this, but it's amazing to see and track the per-frame latencies. The second of the two is the most interesting, as you can easily pick out the higher latencies on AMD's chips...

[image: skyrim-fps.gif]


Remember how we always wondered why the PII X4 pulls ahead of both the X6 and FX? Now we know:

[image: skyim-99th.gif]


Looks to me like the FX CPUs have an Amdahl's Law problem. Even though they have a relatively large amount of cores for their given product segments, their per-thread performance is fairly weak. The Bulldozer architecture combines relatively low instruction throughput per cycle with clock speeds that aren't as high as AMD probably anticipated. That adds up to modest per-thread performance—and even with lots of cores on hand, the execution speed of a single thread can limit an application's throughput.
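
To put that Amdahl's Law point in rough numbers, here's a toy Python sketch. The 75% parallel fraction is invented purely for illustration; it's not an FX measurement:

def amdahl_speedup(p, n):
    # Amdahl's Law: overall speedup on n cores when a fraction p
    # of the work can run in parallel (hypothetical p, not measured).
    return 1.0 / ((1.0 - p) + p / n)

for cores in (4, 6, 8):
    print(cores, round(amdahl_speedup(0.75, cores), 2))
# 4 -> 2.29, 6 -> 2.67, 8 -> 2.91

Past a point the extra cores barely help, and the serial 25% runs at whatever the chip's single-thread speed is, which is exactly where BD is weak.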

*whistles*

But it's REALLY an interesting read. Kinda makes you re-think how valid FPS is as a benchmarking tool... It's also worth noting that no processor can spit out a "steady" 60 FPS [16.7ms] rate...
 
^^ Well, my point is basically to show that while the AVERAGE performance over a second looks flat, frames are hardly coming out at a steady rate. Within that second, there are frames spat out early and frames that are delayed, hence the famous "microstutter" that you see in dual-GPU configs. The first article even notes NVIDIA has a ton of HW specifically to battle this.

So yeah, frame latencies, on both the GPU and CPU side, are probably the gaming benchmark I've been looking for. It does an overall better job than low-resolution testing does of showing how much extra oomph one CPU has over another.

I propose Tom's does its own investigation into this area.
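
For what it's worth, here's a minimal Python sketch of what that kind of analysis could look like. All frame times here are invented, just to show why the average hides the spikes:

import statistics

# Mostly smooth 16.7 ms frames with a few 40 ms spikes (made-up data).
frame_times_ms = [16.7] * 95 + [40.0] * 5

avg_fps = 1000.0 / statistics.mean(frame_times_ms)
p99 = sorted(frame_times_ms)[int(len(frame_times_ms) * 0.99) - 1]
beyond = sum(t - 16.7 for t in frame_times_ms if t > 16.7)

print(f"average FPS: {avg_fps:.1f}")                  # ~56, looks fine
print(f"99th percentile frame time: {p99} ms")        # 40 ms, not fine
print(f"time spent beyond 16.7 ms: {beyond:.1f} ms")

The average says ~56 FPS, which looks perfectly playable, while the 99th-percentile frame time exposes exactly the stutter you'd actually feel.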

EDIT:

[image: value.gif]


The A8-3850 and i5-3570k come out as the two big winners in this area.
 



There is a lot of blue up there and the green is on the bottom. :kaola:
 

Isn't that part of what Vsync is for? Though once you dip below your monitor's refresh rate, Vsync is a liability.

In some heavily threaded tasks, I think the 8-core PD will be better than the 4c/8t i7s. BD is already close, and PD is quite a step up. In single-threaded work and games, PD still falls short of SB/IB, but better than PII, if I had to guess.
 
don't just speak on it, prove it then.....
let me know when you're ready to realize the truth as well
😉

without proof your words carry no weight.
are you on a fanboi kick too?

I do believe that you are the Intel fanboi for bagging on everyone in this forum about being an AMD fanboi. I highly doubt that the AMD Piledriver will come close to the new Ivy Bridge CPUs myself. But this is a forum to discuss what we would like Piledriver to be, is it not?
 


When I was running benchmarks between the 8-core Bulldozer and the Intel i5-2500, the AMD system scored 10K on a multi-threaded application and the Intel scored around 8K. Single-threaded performance is where it is at right now. Until the day AIM and Notepad use all eight cores... 😱 ... Intel has AMD beat.
 


Yes and no. Vsync is supposed to "lock" the frame output to your monitor's reported refresh rate, but it doesn't state "how". That's why Lucid and nVidia have implemented very different approaches to doing so. Also, each game should implement it to fit its own input polling, because doing it from the driver will sometimes cause way too much pain.

Unless I'm remembering it wrong (gamerk can correct me here), using vsync is not the same across the board for all games.
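
For anyone curious, here's a naive Python sketch of one way a game loop could pace itself to the refresh rate. Real vsync lives in the driver/swap chain and is far more involved, so treat this as illustration only; the render_frame stand-in and all timings are made up:

import time

REFRESH_HZ = 60
FRAME_BUDGET = 1.0 / REFRESH_HZ   # ~16.7 ms per refresh

def render_frame():
    time.sleep(0.005)   # stand-in for the actual rendering work

next_flip = time.perf_counter() + FRAME_BUDGET
for _ in range(120):   # two seconds' worth of frames
    render_frame()
    delay = next_flip - time.perf_counter()
    if delay > 0:
        time.sleep(delay)                 # finished early: wait for the flip
    else:
        next_flip = time.perf_counter()   # missed it; real vsync waits for the
                                          # NEXT refresh, so FPS drops in steps
    next_flip += FRAME_BUDGET

That last branch is the whole problem: once a frame misses its slot, a strict vsync implementation halves your frame rate (60 -> 30), which is why where and how it's implemented matters so much.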



Even if PD is equal clock for clock to the PhII, if it OCs well enough it can be a worthy "upgrade"... I think, haha.

Cheers!
 
Hey guys, I have a question for you.

Now that Piledriver's "official" launch is in October, will there be a new chipset?

As far as I know, the 990X/990FX doesn't have PCIe 3.0 capability, yet no card fully uses the bandwidth of PCIe 2.0.
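
Quick back-of-the-envelope numbers on that; the lane rates and encodings are the published PCIe specs, the arithmetic is just mine:

# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding   -> 500 MB/s per lane
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~985 MB/s per lane
pcie2_x16 = 5e9 * (8 / 10) / 8 * 16 / 1e9
pcie3_x16 = 8e9 * (128 / 130) / 8 * 16 / 1e9
print(f"PCIe 2.0 x16: {pcie2_x16:.2f} GB/s")   # 8.00 GB/s
print(f"PCIe 3.0 x16: {pcie3_x16:.2f} GB/s")   # ~15.75 GB/s

So a 990FX board's PCIe 2.0 x16 slot still has ~8 GB/s to play with, which current single cards don't come close to saturating.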
 



Overclocking is a moot point. A good overclock usually adds less than a 10% increase in actual performance. The architecture makes all the difference.
 


Yep - from OBR's website which was discussed a couple pages ago. IIRC he was the guy who leaked internal AMD docs showing BD would miss the target performance by a big margin, around this time last year.
 


Not quite. Usually, a 500MHz clock increase can do a lot for minimum frame rates. Also, how good a bargain a CPU is depends on its default clock speed. And notice the i5 K-series are the most recommended CPUs because of just that: they're highly OCable.

With my 965, it's from 3.4 to 3.97GHz; that's a tad less than a 600MHz increase (with all power savings activated), or a little more than 1/6th of the default clock. Considering that at 4GHz it performs close to a 3.1GHz i5 (2.8 being the default and 3.1 the quad-core turbo bump), PD will need much higher clocks to even dream of getting close to an OCed 2500k, let alone a 2600k for multithreaded work.
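
The arithmetic on that, just to make the point explicit (clocks are the ones from my own 965 above):

base_ghz = 3.4    # Phenom II 965 stock
oc_ghz = 3.97     # the overclock mentioned above

gain = (oc_ghz - base_ghz) / base_ghz
print(f"clock increase: {gain:.1%}")   # ~16.8%, a bit more than 1/6

Even assuming perfectly linear scaling, that ~16.8% is the ceiling on what the overclock can buy; real gains are usually lower.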

This doesn't paint the whole picture, but helps me demonstrate my point:

[image: cine_3.98ghz_TuneUP_64bits.jpg]

[image: cine_i5-2300_64bits.jpg]


I wish I had some sort of sample data for a game, but Cinebench actually reflects most of what happens in the gaming world (at least in most benchies), from what I've noticed.

Cheers!
 


Yes, that's kinda my point.
 


Thought it was a buck sixty-three, but not important :). The worrisome parts would be the low guidance AMD provided last quarter, plus the fact that so far there's no PD (shades of BD a year ago), plus Intel threatening with Haswell.

The senior notes show AMD is doing what it has to in order to stay in business - not a good thing, but at least they are not declaring Chapter 11 or anything. People (more likely corporations) buying these are a bit more secure than ordinary stockholders, because the notes are "senior" to any other equity such as stock certificates, so the note-holders would get paid first in the event of a bankruptcy and liquidation (unlikely IMO at this point).

If there should be a global crisis (and my personal opinion would be Israel going at it with Iran, in the next couple months, > 50% probability, and shutting down a large part of global oil shipments for some time), then all bets are off of course. There could be some fairly long-lasting consequences and weaker companies dying off.
 


Well you asked "Are the stocks losing 50% of value in a quarter?" so that's about what I did 😀.

And if you go by stock value since March 26th ($8.24), it's more than 50% - 53% to be precise, at today's close of $3.85. That would be 5 months, BTW.
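
The math on that, for anyone checking (both prices are from the post above):

drop = (8.24 - 3.85) / 8.24
print(f"{drop:.1%}")   # 53.3%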

And no, having to sell higher-interest senior notes to pay off previous notes is not common at all. IIRC the original notes, due this year, were convertible to ordinary stock if the stock price hit $20 a share. Unlikely at this point.

BTW, all this is 'been here, done that': http://www.tomshardware.com/forum/264129-28-loss-worse-expected-stock-slumps-after-hours-trading
 
don't just speak on it, prove it then.....
let me know when you're ready to realize the truth as well
😉

without proof your words carry no weight.
are you on a fanboi kick too?

I responded with as much evidence as your post did; why be mad at me?

http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/47155-amd-bulldozer-fx-8150-processor-review-21.html
Here is the FX8150 vs the 1100T at stock; it's pretty even. Taking out all the synthetic tests, the 8150 wins against the 1100T.

Now I say this back to you:
without proof your words carry no weight.
are you on a fanboi kick too?
 



40% isn't 50%. Five months is not a quarter. Deal with it.
 


Well, my point is not about Intel vs AMD; that's the real moot point for me now... Kinda strayed away there, but I wanted to show that OC does play a factor, a big one when done right; you just need to know where to measure it to really notice it.

Regarding PD, I'm just saying: OK, it seems to clock a tad better than BD (according to what Trinity showed), and it seems to have narrowed (hopefully REMOVED) the clock-for-clock disadvantage against the PhII. So that brute-force speed increase is a welcome one, as PD will be the last AM3+ CPU from AMD (according to official wording, so far).

The FX83xx doesn't have to overcome the i5 2500k (or its bigger siblings) to fit a lot of folks' tastes here; it just has to beat the PhII steadily at games AND "heavy threading" tasks to make it a good CPU for a (let's be clear here) dinosaur platform. All that with no need for a nuclear reactor to power it, of course. That's not so much to ask, right?

Cheers! 😛
 

Not at all. If BD were more power efficient than it is, it would have been accepted more easily. I thought about overclocking my 8120, but I didn't like the idea of having all that heat in my room. Funny though, I bought a nice big air cooler only to not put it to good use yet. I hope PD is a good overclocker.
 