AMD's Piledriver And K10 CPU Architectures Face Off

Page 5
Status
Not open for further replies.

shark974

Honorable
Jul 18, 2013



Actually the stock 6350 (3.9 GHz) easily beat the overclocked 4.0 GHz 965 in these tests. So, clock for clock, the FX was faster.

You were probably looking at the down-range chips, which isn't really fair. The 965 BE wasn't a down-range chip in its day...

Chasing MHz has proven to be a bad thing, but I think AMD did well with PD, as this article shows. Basically they did what they could to fix broken Bulldozer and made some decent lemonade out of lemons. It was all they could do; new architectures take years.

Steamroller will be the real test.

If they can get any decent IPC improvement at all with Steamroller, then with clocks as high as they have and as many cores as they have, they should be sitting pretty, especially given Intel's relative lack of improvement with Haswell.
 

shark974

Honorable
Jul 18, 2013


Depends on what you mean by "soundly". One might think the stock 6350 is uncomfortably close to the stock 3570. The difference seems to be about 25%, so again, I guess it depends on your definition.

The 6300 or 8320 are only better options if you overclock, which seems to be taken as a given here, but I, for example, don't like to do it.

At stock it's kind of a tossup between a 6350 and an 8320. The 6350 will give you better single-threaded performance due to higher clocks; the 8320 will do better in software that can use many cores. But it's not always clear cut: stock clocks are 400 MHz higher on the 6350, and put that over 6 cores and it's 2.4 extra GHz, vs. 2 additional cores × 3.5 GHz = 7 GHz on the 8320. But that assumes the software will take full advantage of the extra cores, and it probably won't. There's also pricing: the 8320 is $160 ($145 on sale), the 6350 $140 ($130 on sale).
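The back-of-envelope math in that post can be sketched as a tiny script. This is only the poster's aggregate-clock heuristic, ignoring IPC, turbo, and how well software actually scales, so treat the numbers as illustrative:

```python
# Aggregate-clock comparison from the post above: stock clocks and core
# counts only. This assumes perfect scaling across cores, which real
# software rarely achieves.
def aggregate_ghz(cores: int, ghz: float) -> float:
    """Total throughput if software scaled perfectly across all cores."""
    return cores * ghz

fx6350 = aggregate_ghz(6, 3.9)  # six cores at 3.9 GHz
fx8320 = aggregate_ghz(8, 3.5)  # eight cores at 3.5 GHz

print(round(fx6350, 1), round(fx8320, 1))
# Single-threaded, the 6350's 3.9 GHz beats the 8320's 3.5 GHz;
# fully threaded, the 8320's two extra cores win on paper.
```

The gap only materializes when the workload actually loads all eight cores, which is the poster's caveat.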
 

ingtar33

Illustrious



Many of those tests use multiple cores, so of course a 6-core will beat up a 4-core. Look at the tests where the FX-4350 was matching the FX-6350... those are the tests that don't use many cores. The reality is that the IPC of the 965 is pretty close to identical to the IPC of Piledriver. That said, the poster you were responding to was wrong: Phenom IIs as a whole are not faster core-for-core than Piledriver. However, there are LATE-model Phenom IIs which are not only as fast but in some cases faster than Piledriver. The 1100T, for example, is clearly faster than Piledriver (as are many of the Thuban-cored Phenom IIs).

The general point is this: if I had a Thuban and could overclock it into the 4.0-4.5 GHz range, I would see no point in upgrading to Piledriver. Unless I wanted more cores, there is no advantage to it. Now, if Steamroller can deliver on a 20-30% IPC improvement, there will be some attraction for a Thuban or even an older Phenom II owner to upgrade. (Speaking as a 965 BE owner who has played around with some of the fastest Piledrivers I've read about, one hitting a day-to-day 5.6 GHz, I still don't feel the need to move up to an FX, which is definitely AMD's biggest problem. It's hard enough to feel the need to upgrade to an i5/i7 when sitting on a 965 BE; if the performance jump isn't enough for one of those, why would I go with a Piledriver?)
 

cmi86

Distinguished
Sep 29, 2010


I don't think we are reaching a ceiling on technology in general; I think the "Core i" architecture has reached its ceiling. 1st gen was OK. 2nd gen was amazing. 3rd gen was OK, but marginal IPC improvements, reduced OC headroom, and increased heat made it kind of meh. 4th gen shows basically zero IPC gain, even more heat/power consumption, and even lower OC headroom, which makes the chip basically worthless from an enthusiast standpoint. Intel is beating a dead horse with this architecture, and I would expect something entirely new in the coming generations. As AMD matures the modular design, Intel will be starting from scratch. What happens during that period remains to be seen.
 

ronch79

Distinguished
Jan 16, 2010
Typing this on my FX-8350 + HD 7770. I don't have a Core i5 to compare against, but I'm pretty happy with it, especially when all cores are kicking in. For usual tasks such as browsing the Internet, it's about as snappy as any PC I've used. Gaming is OK as well, depending on which title you're playing. Battlefield 3 is very smooth, and most games play very smoothly; even the worst games in terms of fluidity play OK. Oblivion comes to mind: it only uses a couple of cores, but I still get 75 fps. Vsync is on, so I don't know if the fps numbers would go up with it off. Older titles such as System Shock 2 play flawlessly even when using only one core.
 

ronch79

Distinguished
Jan 16, 2010
Many folks are saying that the FX is slower clock-for-clock vs. K10 cores, but for me it doesn't matter. What matters is the delivered performance in the specific application I'm using. For example, under HandBrake, I don't care if the FX needs twice as many cores (and a higher clock) to beat a Core i7-3770K, as long as I get finished results in the same amount of time. Of course one can argue that the FX chips consume more power, and that is indeed a concern, but how often do we push our systems anyway? It's not really a big deal on your power bill unless you're running a bunch of these things.
 

jdwii

Splendid
The 6350 is 26% slower than the i5 in gaming, yet the i5 costs 55% more, so AMD still leads in price/performance, I guess. Also, I'm actually amazed that a 4350 is generally better than a 965 in gaming; I would never have guessed.
 


That is simply wrong





 

logainofhades

Titan
Moderator

Tell that to THG then.





 

jdwii

Splendid


I'm sure the price difference was 50%+. A Phenom X3 cost $70-80; an i5 cost 2.5x more.
 

vertexx

Honorable
Apr 2, 2013


Too funny.

The 213 and 187 numbers are relative to the Intel Core 2 Duo E8400, so it is not correct to say that the 6350 is 26% slower than the i5. To get the performance relative to the i5, you would take (213 - 187) / 213 = 12%. So, according to this chart, the 6350 is only 12% slower than the i5 in gaming.
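The correction above comes down to choosing the right denominator. A minimal sketch, using the two chart scores from the post (indexed to the E8400):

```python
# Chart scores are indexed to a Core 2 Duo E8400 baseline, so the gap
# between two chips must be computed against the faster chip's score,
# not read off the index directly.
i5_3570k = 213  # relative gaming score vs. E8400
fx_6350 = 187   # relative gaming score vs. E8400

gap = (i5_3570k - fx_6350) / i5_3570k
print(round(gap * 100, 1))  # 12.2 -> "about 12% slower"
```

Dividing the 26-point difference by the baseline-indexed scale is what produced the misleading "26% slower" reading.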
 

vertexx

Honorable
Apr 2, 2013
This article also shows that the Gaming CPU Hierarchy is messed up, because that article only represents results for 3 games. That's a gaming ranking; this is an all-around assessment. But the results here paint a different picture than the Gaming CPU Hierarchy does.
 


Ding! Thank you, Mr. V.

The thing is, even that **12%** is highly skewed by the results in 2 games (F1 2012 & Skyrim), where the FX-6350 (before overclocking) scores 83 and 63 fps respectively at highest detail. Not too shabby.

It's THG's website, and it's their right to control their content and presentation. But 'that chart' above could (and did) lead to incredibly false assumptions. THG could have prevented that by simply putting the results side by side, but for whatever reason chose not to.

The fact is, there is seldom a perceptible difference in frame rates in the overwhelming majority of games. What differences exist can easily be wiped out with small tweaks in settings/detail, and for the most part the bottom line remains this:

It's your video card that ultimately drives your highest frame rates ....

:)


 

Traciatim

Distinguished
Nov 11, 2006


But your CPU is generally the problem in low-FPS scenarios and in your minimums, which is where you notice problems, and that's something AMD does miserably at. Where the CPU actually matters, it falls flat on its face.
 

vertexx

Honorable
Apr 2, 2013

I'm not sure I agree with that, at least based on these two articles. If you bring up this one and the Intel one and match them up game for game, the AMD CPUs hang with or even surpass the Intel chip in the low-FPS scenarios. It's in the higher-FPS scenarios that Intel really pulls ahead.
 

Traciatim

Distinguished
Nov 11, 2006




Except they don't really have frame times or CPU and GPU usage listed, so you can't really infer what you're claiming from the data we have. What you can tell is that the stock 3570K's average frame rates are about 15% faster than the stock 6350's, but its minimum frame rates average about 30% faster. Since they use the same video card, something else is causing the extreme slowdowns when things get bad. Since most performance in games comes down to either your video card or your CPU, the only thing you can tell is that when the CPU matters, AMD fails.

What I would really like to know, though, is what exactly changes between Crysis 3 on Medium settings and on High settings that causes an interesting discrepancy. On Medium the 6350 is actually a good deal faster than the 3570K (25% average, 17% minimum), but as soon as you switch to High the 3570K pulls ahead. So what exactly is the difference between Medium and High that Intel is great at but AMD fails at? It would be an interesting question to answer, and extremely helpful for users of the 6350 (or similar) chips to know that they could enable all the high-detail settings except for X and Y and still match or beat the 3570K.

The same kind of thing happens in F1, and that game seems to scale well with memory performance, so maybe it's better memory management? On High the 3570K is about 10% faster on both average and minimum, but switching up to Ultra that changes to 25% and 30% respectively.

Hitman doesn't show the issue at all; the spread is about the same on High and Ultra.

Far Cry 3 shows the exact opposite, where the 6350 really closes in when you go from High to Ultra. In FC3 the 3570K is about 35% faster on minimum frame rate at High, but only about 8% faster at Ultra.

Tomb Raider shows a really strange result: in the Mountain test on High, the 3570K averages about 17% faster and its minimum is about 34% better, yet on Ultra the 3570K only averages about 3% faster while the minimum spread tilts even further in Intel's favour, with the 3570K 43% better. So what changes here, such that switching to Ultra hurts the 6350 less overall, yet hammers it even worse in the scene where the minimum is recorded?

Interesting questions indeed.
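The "X% faster" figures traded back and forth in this thread all come from the same simple comparison: difference over the slower chip's number. A quick sketch with illustrative frame rates (not taken from the article):

```python
# How "X% faster" is typically derived when comparing two benchmark
# results: (faster - slower) / slower. The fps values below are made up
# for illustration, not pulled from THG's charts.
def pct_faster(a_fps: float, b_fps: float) -> float:
    """How much faster a is than b, as a percentage."""
    return (a_fps - b_fps) / b_fps * 100

# e.g. a 60 fps minimum vs. a 46 fps minimum:
print(round(pct_faster(60, 46), 1))  # ~30.4% faster
```

Note this is not symmetric: the slower chip is "X% slower" by a different percentage, which is exactly the confusion vertexx untangled earlier in the thread.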
 

ingtar33

Illustrious


Well reasoned. However, I think it was proven a long time ago that the problem with AMD at the moment, and the reason they're not competitive with Intel, is their memory controller: the HyperTransport and northbridge setup is a joke. The big jump from the Core 2 Duos to the Core i series was the breakthrough with the memory controller, and AMD still hasn't been able to match that.

Throw in the scheduling issues with the Bulldozer architecture and you have 90% of the difference between the Core i CPUs and the FX CPUs.
 

vertexx

Honorable
Apr 2, 2013

Just to put things in perspective, though: we're talking about the FX-6350 vs. the i5-3570K, when the FX-6350 is priced like an i3 (actually the 6300 is, but that's probably the better buy anyway). So the fact that we're even having a discussion comparing the 3570K to the 6350 is, I think, good for AMD. This article sparked a lot of positive discussion favorable to the AMD CPUs. Sure, they don't match Intel's high end, but for the price of an i3 you can approach the performance of the i5, or even match it in some titles.

So if I'm in the price range for the 3570K, I'm probably going to buy that. But if I'm not, I'm certainly feeling a lot better about spending $130 on an FX-6300 as a result of these tests.

I concluded in a post above that my choices in gaming CPUs, as a result of this article and discussion, will be the 750K on the low end (actually the Richland 760K is just now available for $5 more), the 6300 for mid-range (except for ITX builds, where it would be the i3), then the i5. So that's 2 out of 3 for AMD. Not bad.
 

Tremec

Distinguished
Apr 14, 2006
While I often consider performance per watt on my personal systems, many gamers looking to build a system couldn't care less. Tom's has always put more consideration on efficiency than most gamers do, and this skews Tom's recommendations.
Tom's often forgets that most gamers have a set budget and want the most performance they can get for a certain dollar amount; energy consumption is the last thing on their minds.
I compare processors in this way:
Raw performance - Stock / Raw performance - Overclocked
Performance per watt - Stock / Performance per watt - Overclocked
Price vs performance - Stock / Price vs performance - Overclocked

If it takes an $80 cooler to overclock a $120 processor to match the performance of a $200 stock processor, you're better off going with the $200 processor, as it is going to last longer.


 
