AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic; anything else will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
I'm not at the level most people reading this stuff are, but complaining that the FX has higher clocks isn't really fair. It's like saying comparing any two chips with different stock clocks is unfair, no? The FX comes at a lower stock clock, but I thought its turbo can push it higher at stock than the 2500K. Perfectly fair, I would have thought. I think we should be comparing what chips can do against each other, not what they can do against each other at a certain clock speed. The FX has twice as many cores, but people don't complain about that being unfair. It should be compared bang for buck, I think.

There are a few things here.

First, the FX is actually stock clocked higher than a 2500K. The 2500K is 3.3GHz; the closest FX-8 is the 8140 at 3.2GHz, and the 8150 is 3.6GHz. The turbo works based on core load, so not all cores get the turbo boost.

But the main issue is that it takes higher clock speeds to equal or beat the 2500K in some cases. Back when AMD had the Athlon 64 and Intel had the Pentium 4, the Athlon 64 3200+ was clocked at only 2GHz but beat a 3.2GHz Pentium 4, so it created less heat and used less power.
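(To put a rough number on that: treating performance as roughly IPC x clock, matching a 3.2GHz part at 2GHz works out to about 3.2 / 2 = 1.6x the work done per clock, which is exactly why the lower-clocked chip could also run cooler and pull less power.)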

Of course things are changing but that still makes a difference to some people.

recon, if I called you a three-year-old, how would you respond?

Let's clean up our posts - ditto for you, noob - and move on.

Jimmy is keen to hand out some bans so don't give him cause to do so.

So far I have only banned spammers. I really want to forego bans but come on guys, keep it clean. Even if someone pushes you, just ignore it. They are only 1s and 0s translated to pixels translated to words. Nothing major to get in a huff or puff about.
 
There are a few things here.

First, the FX is actually stock clocked higher than a 2500K. The 2500K is 3.3GHz; the closest FX-8 is the 8140 at 3.2GHz, and the 8150 is 3.6GHz. The turbo works based on core load, so not all cores get the turbo boost.

But the main issue is that it takes higher clock speeds to equal or beat the 2500K in some cases. Back when AMD had the Athlon 64 and Intel had the Pentium 4, the Athlon 64 3200+ was clocked at only 2GHz but beat a 3.2GHz Pentium 4, so it created less heat and used less power.

Of course things are changing but that still makes a difference to some people.
Ah, I was looking at the 8120, I think; I thought its stock was 3.1GHz. In any case, I'd like to think the point still stands. Surely one can't complain that the FX was clocked higher than the i5, especially not if at stock it has a higher clock anyway.
 
I'm not at the level most people reading this stuff are, but complaining that the FX has higher clocks isn't really fair. It's like saying comparing any two chips with different stock clocks is unfair, no? The FX comes at a lower stock clock, but I thought its turbo can push it higher at stock than the 2500K. Perfectly fair, I would have thought. I think we should be comparing what chips can do against each other, not what they can do against each other at a certain clock speed. The FX has twice as many cores, but people don't complain about that being unfair. It should be compared bang for buck, I think.


Not trying to drag things on, but it does matter, and in a way it hurts AMD's reputation to call this an "8 core". It is an 8 core, but compared to a 4 core with HT and a lower clock rate it looks sad, and the fact that it has a turbo makes it look worse. It's not possible to clock CPUs at unlimited clock rates; this is why IPC (or CPI) is so important. Likewise, you can't just keep adding cores and be done with it, since software doesn't scale like that, and you end up with a bigger die (just like how BD is bigger than Sandy), which means more heat and power consumption as well as lower profits for AMD and/or higher prices for us.

This is all common knowledge in the CPU business, and thank god AMD's engineers recognize this, know it's an issue, and want to improve things.
 
Ah, I was looking at the 8120, I think; I thought its stock was 3.1GHz. In any case, I'd like to think the point still stands. Surely one can't complain that the FX was clocked higher than the i5, especially not if at stock it has a higher clock anyway.


You have to understand that this is not an efficient way to do it. It's just not possible to keep clocking CPUs higher and higher, at least not in today's environment, and it's just sad to see AMD needing more cores and a higher clock rate to equal Intel. It's bad from a business standpoint: it's harder to make higher-clocked CPUs while keeping the TDP at the wanted target and still meeting demand, and it costs AMD more money since it's a bigger die.
 
You have to understand that this is not an efficient way to do it. It's just not possible to keep clocking CPUs higher and higher, at least not in today's environment, and it's just sad to see AMD needing more cores and a higher clock rate to equal Intel. It's bad from a business standpoint: it's harder to make higher-clocked CPUs while keeping the TDP at the wanted target and still meeting demand, and it costs AMD more money since it's a bigger die.
I wholeheartedly agree with you that IPC matters. But if AMD can provide a chip for a decent price that can compete with Intel's i5, regardless of its larger die or supposedly eight cores, that's good enough for me. IPC is important, but at the end of the day what matters is a good chip. I don't care how they get there, I just want an end product that's competitive.

And I think that bench you criticised shows that the FX chips can perform reasonably well for their price, even if overall inferior to a 2500K.
 
I would not call it "crap" so blatantly, because the FTC did in fact find Intel guilty.

Problem is that, like I said, people don't realize that when a company (no matter the color or name) starts doing/pushing bad practices in the market (in Intel's case, abusing a dominant position), they start messing around with the hard-earned money of the "common folk". And trust me, you and I (and like 99.9% of the forum) are under the "common folk" tag. Those practices must NOT be forgotten; maybe forgiven in favor of "moving along", but never forgotten, or they'll do it again (bottom line, remember; screw the people). It's our duty as consumers to stop those behaviors and spread the word about it, or companies will keep on doing it.

Anyway, you might think of it as a "mantra" that needs to be spread, or as information for someone that doesn't have the details, but telling him to stop saying it just because you're tired of reading it... Maybe the better call is to just ignore it if you already understand and know it.

Cheers!

Pretty much this, whenever any company starts abusing their position NOTHING good can come of it, no matter who that company is.

Also, please remember the FTC isn't a court system; they're the government agency responsible for investigating anti-trust / anti-competitive / unethical business behavior. When they have findings they do one of two things: give the company a chance to correct their behavior through remedial actions, or take it to the Justice Department for a full-blown investigation / trial. Typically they do the first, as nobody wants to pay the money nor waste the time on a trial, but if a company refuses to comply they go to the second (see Microsoft). The FTC will then act as the plaintiff in the court case unless criminal activity was discovered by the Justice Department, but that's a different can of worms entirely.

I have heard Intel had the stuff to fight it, but still, it is over and should stay that way. AMD and Intel are done with it; let's move forward and keep it out of this thread as much as possible.

Not to be rude, but you heard wrong. It wasn't AMD vs Intel, it was AMD + HP + Dell and a few others vs Intel. Michael Dell testified about the deals the Intel Senior VP forced on him. He even provided emails and memos from various parts of Intel about those forced offerings. Dell had on multiple occasions expressed their desire to go with multiple suppliers for their components, and each time Intel threatened to cut their supply if they did that. Dell, more than AMD, was pushing for that lawsuit. When your customers, the recipients of your "exclusive deals", are pushing the FTC, then you know something is wrong.

The exclusive deals go something like this. You're a tier 1 OEM with tens of thousands of orders to fill, and your CPU supplier is Intel. They make you an offer you can't refuse: they'll discount the price of their CPUs by providing rebates on those CPUs (so far so good), with a catch. You must use them as your exclusive provider not only of CPUs but also of motherboards and any other system components they offer; those are not discounted, so in essence you must buy the entire package. OK, but not illegal yet. The offer comes with another caveat: supply distribution is determined based on membership in these exclusive deals, so should you defect and decide to provide a non-Intel part anywhere in your offerings, you get placed at the bottom of the supply list. In effect, the 100,000 units they were going to provide you are instead given to your competitors, and without those units you can't ship your orders or meet your demand. Your customers then go to where they can get a computer: your competitors. In effect, any tier 1 who didn't sign up for Intel's plan would be driven out of the market. That is the part they were complaining about and what the FTC investigated (the rest was just added later).

Intel would have lost any court battle; Dell and HP together had more than enough evidence to demonstrate Intel's anti-competitive behavior and how it was abusing its market position. Intel chose to settle rather than gamble like Microsoft did, a gamble that MS initially lost heavily before getting lucky with an appeals court. Originally MS would have had to be split apart into different companies (OS / Office products), which would have been disastrous for a company whose biggest selling point is integration. The appeals court upheld the verdict but changed the ruling such that MS could stay as a single company but would have to ensure compatibility with other office products and allow integration of other products into the OS. That ruling heralded a changing mindset inside MS, and the world is a better place for it.

Just imagine Intel losing and being forced to split their various divisions up by product; that is what the Intel execs didn't want to gamble on.
 
And we're back to the "it's just higher priced than it should be" argument, hahaha.

Anyway, last thing I read about Trinity were the leaked slides from a TW site. It was scheduled for a Q3 2012 release, right?

Cheers!

PD is Q3 from what I have heard, Trinity should be coming up soon.

Still can't wait to see the benchmarks. Marketing slides are the ones to always take with a grain of salt the size of a brick. They always present the best case scenario, e.g. BD being compared in some apps to the 2600K and in others to the i7 970.

They will always show what they want to show (Intel as well), which is their product being superior (although Intel does this vs their own previous gen, not AMD, while AMD does it vs Intel).
 
I'll believe benchmarks once it's live and places like Toms get their chance to take a crack at it.

On another note, my new DV6 just arrived in the mail today (takes forever to go from the states to here).

A8-3550MX
6GB memory (note at the bottom) (soon to be 8GB DDR3-1600)
7690M (re-badged 6770M)
1920x1080 screen
extended 6-cell battery

I'm happy.

*Note*
After taking the back cover off I've confirmed my suspicion. The "free upgrade" to 6GB includes one stick of 4GB DDR3-1600 memory and one stick of 2GB DDR3-1333 memory. This is exactly what my GF's notebook came with when it arrived. I'm going to swap the memory around between them, she gets the 2x2 DDR3-1333, I get the 2x4 DDR3-1600. Woot for a free upgrade to DDR3-1600 memory.

When I get a chance I'll reload it and do some benchmarking to see the difference between A8-3530mx and A8-3550mx / dGPU / ACF.
 
^So long as HP didn't biff the mobo again like with the older DV6000/9000 and older DV6/7/9 series, I hope it lasts for you (all of those were with nVidia chipsets, though, and had a flaw where the solder joints for the chipset would fail and either kill the mobo or need to be resoldered).

Still weird that they would mix 1600 and 1333. Kinda strange they put 1600 in at all. Normally OEMs like HP use the cheapest memory they can, and most I have had my hands on have 1333 RAM, even Llano based notebooks.
 
I don't think even HP realized what they were putting in them. Most likely they were oversupplied on DDR3-1600 and, rather than have it sitting in a warehouse just to be taken as a loss, they decided to use it as a "free upgrade" to make it look like they're giving you a deal. In all likelihood it's labeled as DDR3-1333 in their inventory; happens all the time.

Honestly, something's changed at HP. I remember them being this absolute garbage OEM that used dirt cheap ~everything~. In the last year or so they've made great strides.
 
I wholeheartedly agree with you that IPC matters. But if AMD can provide a chip for a decent price that can compete with Intel's i5, regardless of its larger die or supposedly eight cores, that's good enough for me. IPC is important, but at the end of the day what matters is a good chip. I don't care how they get there, I just want an end product that's competitive.

And I think that bench you criticised shows that the FX chips can perform reasonably well for their price, even if overall inferior to a 2500K.


I agree this is important, but I'm talking about the long term: it's not a very smart business move on AMD's part to have a design that needs to be bigger as well as clocked higher, with a higher TDP, to compete with Intel. And games are a very bad benchmark for CPUs, and I don't really think I need to explain why.
 
Price vs performance vs energy usage is always what's mattered. The FX-8120 is a really solid chip for its price, more of a side grade for any 980 / 1090, but a decent upgrade from the lesser chips without requiring a new platform. I surely wouldn't recommend anyone building a new PC with an 8120 though.

Will wait to see if PD fixes some of the glaring issues with BD before recommending it to anyone.
 
Price vs performance vs energy usage is always what's mattered. The FX-8120 is a really solid chip for its price, more of a side grade for any 980 / 1090, but a decent upgrade from the lesser chips without requiring a new platform. I surely wouldn't recommend anyone building a new PC with an 8120 though.

Will wait to see if PD fixes some of the glaring issues with BD before recommending it to anyone.


Advice that makes sense!
 
SCII is much more CPU intensive than HAWX or STALKER though. HAWX is a flight simulation with very little in terms of AI, and STALKER is an FPS; both will rely on the GPU before the CPU, which is normal for the majority of games, and won't show a major difference between archs until one of the CPUs becomes a bottleneck (in that case a new GPU being bottlenecked by one CPU won't show the same performance gains as with a CPU that isn't a bottleneck for the GPU).

SCII on the other hand has a lot of AI always working on a map, and if you remember the Zerg rushes, it can get pretty intense. It's just like C&C, Age of Empires or any other RTS style game. They tend to have a massive number of units, all able to be unique, and a more powerful CPU tends to benefit that aspect.
RTSs do tend to need a good CPU to handle some of the huge battles that come about, that's true. Noob's link to the S.T.A.L.K.E.R. benchmark showed that the one core it would scale across was at 100% use. That would be a CPU bottleneck, correct? The same would apply to the Sandy Bridge comparison, which was only ~10% faster in this case.


But instead use cores 0, 1, 2 and 3 which should be the first two modules and you will get a slow down compared to using cores 0, 2, 4 and 6 or 1, 3, 5 and 7.
When I said 0-3, I meant 0, 1, 2, and 3. That test (first two modules) scored exactly the same as cores 1, 3, 5, and 7 (second core of every module). That shouldn't make sense if the 20% CMT hit is really there. It's probably a case of Cinebench not using enough of the cores' resources to show a difference whether modules share resources or not. Unlike games, the test isn't very dynamic.
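For anyone who wants to repeat that module test themselves, here's a minimal sketch of the idea (my own, not from the thread), assuming Windows and that whatever you launch inherits the process affinity. run_benchmark() is just a stand-in for the workload, and the masks assume the usual FX numbering where logical CPUs 2n and 2n+1 are the two cores of one module:

/* Minimal sketch: pin the current process to a chosen set of logical CPUs on
 * Windows before kicking off a benchmark pass. On an FX-8xxx, mask 0x0F means
 * cores 0-3 (two full modules, shared front end/FPU) and mask 0x55 means cores
 * 0, 2, 4, 6 (one core per module, nothing shared). run_benchmark() is a
 * placeholder for whatever workload you are timing. */
#include <windows.h>
#include <stdio.h>

static int pin_to_mask(DWORD_PTR mask)
{
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 0;
    }
    return 1;
}

int main(void)
{
    if (pin_to_mask(0x0F)) {
        /* run_benchmark();  two modules, both cores of each loaded */
    }
    if (pin_to_mask(0x55)) {
        /* run_benchmark();  four modules, one core of each loaded */
    }
    return 0;
}

You can get the same effect without any code by launching the benchmark with something like "start /affinity 55 benchmark.exe" from a command prompt (the mask is in hex there), if I remember the syntax right.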
 
I'm not at the level most people reading this stuff are, but complaining that the FX has higher clocks isn't really fair. It's like saying comparing any two chips with different stock clocks is unfair, no? The FX comes at a lower stock clock, but I thought its turbo can push it higher at stock than the 2500K. Perfectly fair, I would have thought. I think we should be comparing what chips can do against each other, not what they can do against each other at a certain clock speed. The FX has twice as many cores, but people don't complain about that being unfair. It should be compared bang for buck, I think.
This is the truth. When Trinity releases, it will probably be up against Ivy i3s. Trinity models will be clocked much higher, but as long as performance/watt and price/performance are better (or the same), then who cares?
 
I'll believe benchmarks once it's live and places like Toms get their chance to take a crack at it.

On another note, my new DV6 just arrived in the mail today (takes forever to go from the states to here).

A8-3550MX
6GB memory (note at the bottom) (soon to be 8GB DDR3-1600)
7690M (re-badged 6770M)
1920x1080 screen
extended 6-cell battery

I'm happy.

*Note*
After taking the back cover off I've confirmed my suspicion. The "free upgrade" to 6GB includes one stick of 4GB DDR3-1600 memory and one stick of 2GB DDR3-1333 memory. This is exactly what my GF's notebook came with when it arrived. I'm going to swap the memory around between them, she gets the 2x2 DDR3-1333, I get the 2x4 DDR3-1600. Woot for a free upgrade to DDR3-1600 memory.

When I get a chance I'll reload it and do some benchmarking to see the difference between A8-3530mx and A8-3550mx / dGPU / ACF.



Sweet !!

She will never notice.

:)
 
Is the Piledriver release closer than we think? Newegg has the 8150 down to $219.99 (new AMD pricing). However, it is also adding a $15 promotional code. The "flagship" chip is starting to creep below the $200 level! Cleaning out inventory? I would think that we would see some tester leaks, but perhaps with the poor PR from Bulldozer, AMD has kept Piledriver close to the vest. Seriously, I hope it's a big improvement. Though I now own 2 SB 2500Ks, I'm dying for a comparable chip for my AMD Sabertooth 990FX. Hopefully the AM3+ platform will not be overlooked.
 
I feel like you read only the second half of his post. His links to H.A.W.X. and S.T.A.L.K.E.R. showed ~10% difference. One scaled across 4 cores, the other only 1. They both showed much less difference than the SCII benchmark did.

Because games like HAWX tend to be more GPU intensive than an RTS, which means you could simply be seeing a GPU bottleneck. Hence why all gaming tests should be performed at the lowest possible settings, to keep the GPU from skewing the results.

Run both HAWX and STALKER at 640x480 with minimum settings, and you'd see the same exact graph you see for Starcraft 2. The fact that the majority of games are GPU bottlenecked is frankly making BD look better than it otherwise would.
 
When I said 0-3, I meant 0, 1, 2, and 3. That test (first two modules) scored exactly the same as cores 1, 3, 5, and 7 (second core of every module). That shouldn't make sense if the 20% CMT hit is really there. It's probably a case of Cinebench not using enough of the cores' resources to show a difference whether modules share resources or not. Unlike games, the test isn't very dynamic.

Unless you are facing a GPU bottleneck, which would be alarming to say the least. Only way to know for sure would be to reduce resolution/settings to minimum and run the same test again.
 
That's the only excuse anyone comes up with for an Intel-optimized game. Oh, it's CPU intensive, AMD just sucks. It has nothing to do with AMD CPUs being fed totally generic 386 code. Code can't possibly slow a computer down.

Prove the game was compiled with Intel's compiler. There are apps out there that can check. THEN maybe you have a point. But until you can prove that, you are making a baseless assumption. Nothing more.
 
Prove the game was compiled with Intel's compiler. There are apps out there that can check. THEN maybe you have a point. But until you can prove that, you are making a baseless assumption. Nothing more.

Looked at it quickly on my friend's laptop and found "GenuineIntelAuthenticAMDCyrixInsteadCentaurHauls". Might take a closer look at it on my dev machine. Suppose I'm technically not allowed to edit it to force a code path if I find something of interest (e.g. lines of "if(isIntel() && hasSse())"); would be interesting to see if it impacted fps much though...
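Just to illustrate what that string blob is normally used for, here is a sketch of my own (not from the game, and assuming GCC/Clang's <cpuid.h>; isIntel()/hasSse() above are hypothetical names, and so is everything below). CPUID leaf 0 returns a 12-byte vendor string, and dispatch code can branch on it instead of on the actual feature bits:

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

/* Read the 12-byte CPUID vendor string ("GenuineIntel", "AuthenticAMD",
 * "CentaurHauls", "CyrixInstead", ...) - leaf 0 returns it in EBX, EDX, ECX. */
static void vendor_string(char out[13])
{
    unsigned int eax, ebx, ecx, edx;
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(out + 0, &ebx, 4);
    memcpy(out + 4, &edx, 4);
    memcpy(out + 8, &ecx, 4);
    out[12] = '\0';
}

int main(void)
{
    char vendor[13];
    vendor_string(vendor);
    printf("vendor: %s\n", vendor);

    /* This is the pattern people object to: gating the fast path on the
     * vendor string instead of the SSE feature bit (CPUID leaf 1, EDX bit 25),
     * so a non-Intel CPU with SSE still falls into the generic path. */
    if (strcmp(vendor, "GenuineIntel") == 0) {
        /* take the optimized SSE path */
    } else {
        /* take the generic path, even if SSE is supported */
    }
    return 0;
}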
 
^^ Based on that, it looks like you have optimizations for different CPU brands [CentaurHauls = VIA, CyrixInstead = Cyrix].

Easiest solution, again, would be to use a program that figures out which compiler actually compiled the program.

I also note, the "it's the Intel compiler's fault" argument does NOTHING to explain why BD is oftentimes slower than Phenom [since, based on the argument, both would be running generic x86 code].
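On the "use a program to check" point: a crude first pass (a sketch of my own, not a real compiler detector) is just to scan the executable for those CPUID vendor strings. Finding them only shows there is some vendor-based dispatch in the binary, not which compiler emitted it, so treat it as a hint rather than proof:

#include <stdio.h>
#include <string.h>

/* Return 1 if 'needle' occurs anywhere in the file at 'path'. Reads the file
 * in chunks and keeps the last few bytes so a match spanning two reads is
 * still found. */
static int contains(const char *path, const char *needle)
{
    FILE *f = fopen(path, "rb");
    if (!f) return 0;

    static char buf[1 << 16];
    size_t nlen = strlen(needle);
    size_t carry = 0;
    int found = 0;

    while (!found) {
        size_t got = fread(buf + carry, 1, sizeof buf - carry, f);
        if (got == 0) break;
        size_t total = carry + got;
        for (size_t i = 0; i + nlen <= total; i++)
            if (memcmp(buf + i, needle, nlen) == 0) { found = 1; break; }
        carry = (total >= nlen - 1) ? nlen - 1 : total;
        memmove(buf, buf + total - carry, carry);
    }
    fclose(f);
    return found;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 1;
    }
    const char *needles[] = { "GenuineIntel", "AuthenticAMD",
                              "CentaurHauls", "CyrixInstead" };
    for (size_t i = 0; i < sizeof needles / sizeof needles[0]; i++)
        printf("%-12s : %s\n", needles[i],
               contains(argv[1], needles[i]) ? "found" : "not found");
    return 0;
}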
 