AMD FX-8350 Review: Does Piledriver Fix Bulldozer's Flaws?

Status
Not open for further replies.
[citation][nom]LORD_ORION[/nom]One thing I would like added is virtualization benchmarks. I'm certain many of Tom's readers, like me, have their own little private network at home. Does an 8350 make more sense over Intel for running some VirtualBox machines on your desktop? I have 4 machines in my home lab... clearly I can get rid of them now. But which way to go?[/citation]Tom's Hardware talks almost too much about the cloud as the magic bullet. Anyways...

With AMD, I feel the lower-end CPUs may be the better value, because most AMD CPUs include hardware virtualization (AMD-V). For the mid-to-high end, it mostly depends on the guest application, because VM programs pass x86 instructions through to the host CPU. The biggest key is to run lots of RAM.

AMD tends to have the advantage in integer workloads, while Intel handles SIMD workloads better. I'm not sure whether VM programs support OpenCL yet. x86-to-x86 virtualization tends to be quite efficient.
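On Linux you can check for yourself whether a CPU exposes hardware virtualization by looking for the `svm` (AMD-V) or `vmx` (Intel VT-x) flag in /proc/cpuinfo. A minimal sketch (the sample string below is a made-up example, not output from an FX-8350):

```python
# Sketch: detect hardware virtualization support from /proc/cpuinfo (Linux).
# AMD CPUs advertise the "svm" flag (AMD-V); Intel CPUs advertise "vmx" (VT-x).

def hw_virt_flags(cpuinfo_text):
    """Return the set of virtualization-related CPU flags found in the text."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return flags & {"svm", "vmx"}

if __name__ == "__main__":
    # On a real system you would read the file instead of a sample string:
    # cpuinfo = open("/proc/cpuinfo").read()
    sample = "processor : 0\nflags : fpu mmx sse sse2 svm ht\n"
    print(hw_virt_flags(sample))  # AMD-V present -> {'svm'}
```

If the returned set is empty, the guest will fall back to software virtualization, which is much slower.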
 
[citation][nom]murambi[/nom]Benchmarking winrar is fine by me but benchmaking autodesk with a 200 dollar processor simply just doesnt make much sense[/citation]

There is some merit to that statement. Companies with HPs running Flame typically have dual-socket Xeon systems with 12GB or more RAM and one of the higher-end Quadros (5800 or better), e.g. the Z800 with 2x X5680 @ 3.33GHz (12 cores and 24 threads total). For 8K work these units get stuffed with 96GB+ of RAM. The CPUs alone cost well over $1500 each.

However, I suppose any benchmark is still an extra data point to ponder.

Ian.

 
Not sure why, but my earlier reply to blazorthon had the name of a professional application blanked out; it was Pro Engineer I was referring to as an app that responds well to a high clock rate, even on a dual-core.

Ian.

 
Quite a difference a year makes for AMD. At least now there are grounds to recommend the FX series for builds in North America. I hope that in the next year AMD can make another 15% gain to stave off Haswell.
 
[citation][nom]m1n3kraft[/nom]Would a 43xx be enough to support a 650 TI[/citation]
More than enough. I would suggest getting a 7850 over a 650 Ti, though; much more performance for very little increase in price.
 
I have to say, I think my next build will be AMD. I have a 2500K right now, and I really think that everything will get much, much more parallel as the years go on. Considering AMD's dire straits, the node differences, and the fact that Intel's R&D budget is about the size of AMD's entire budget, this CPU looks pretty damn good!
 
Bad timing for AMD; CPU sales look sluggish, with no real reason to upgrade these days. The Core i5 is still more than anyone needs for most applications and games. There needs to be a compelling reason to upgrade, and with GPUs worth buying staying in the $500 range, most of us are content to keep using single cards and playing games at lower resolutions. Also, no new applications or games make me want to run out and upgrade! My Christmas list this year will be a tablet, and possibly one for the kiddos.
 
Looks like the FX-6300 is one of the best values under $150, aside from the already awesome A10-5800K.
So Blaze, tell me: between the FX-4300, FX-6300, and A10-5800K, which CPU should one get? It's mostly between the FX-6300 and the A10-5800K... which one, and why?
 
How about students with the free edition of Autodesk Maya? Also, just because you buy an expensive house doesn't mean you have to fill it with expensive furniture. If your software was already expensive, then why "add insult to injury" and spend on more expensive hardware when you can get adequate performance with cheaper hardware (hypothetically)? Chris's (or TH's) choice in hardware and software still carries relevance and sense, I would say. :)

"...Won't help you..." sounds a little too absolute, considering that running multiple (single-threaded) programs, or several instances of one, would be able to leverage multiple cores/threads. Though this is not always the case; it's just proof against the absoluteness of that guy's statement. :)

I think mobile programs (commonly called "apps") are also getting multi-threaded to run better on multi-core mobile CPUs (commonly ARM). One I remember was Glowball, which was demonstrated on the Tegra 3 (quad-core).

(Not to say that that person was in fact trolling, but...) Making a troll look like an idiot. Was it worth it? HELL YEAH!!! :lol:

 
The 8 cores on offer look better for running VMware Server/Workstation than the 4 cores in the Intel i5 for the same cost. I want to run several VMware virtual machines at the same time, so multi-threading does it for me.
 
If they want a CPU-intensive game, why not RuneScape? It's multiplayer and leans on the CPU over the GPU. And it's free to play, so there are no costs to benchmark it. Although I'm glad to see that the 8350 has turned out pretty decent, I can hardly wait to see how the other new CPUs stack up.
 
I'm not sure it's really multi-threaded, or that MMOs are necessarily multi-threaded just because they are MMOs. Take WoW, for instance (I don't think it takes advantage of that many threads, but I'm not totally sure).

BTW, am quite an RS fan here... Got hooked into its world back then... I may return as a member someday. Just saying. 😛
 
Too much power consumption for a midrange CPU 🙁
Good overall performance/$
Bad gaming performance
=> Not the best choice for an average desktop user

I have great hopes for AMD's server parts... don't fail us, AMD :-s
 
[citation][nom]ojas[/nom]As a supplement, Anand's article:http://www.anandtech.com/show/6396 [...] 300-tested[/citation]
I don't like that Anand still uses CS4 as a benchmark... CS6 clearly shows that it benefits more and more from multi-core CPUs.
 
[citation][nom]fuzznarf[/nom]run these tests again 3570k vs FX-8350 with a 7970 GHz edition card.[/citation]
Why would people downvote this?
For one, the 7970 GHz Edition is faster than the 680. And second, AMD specifically markets Radeon and Piledriver advantages such as their 'Quickstream', 'App Acceleration', and 'Accelerated Video Converter' technologies. Do these technologies really make a difference when both CPU and GPU are AMD, or is it just marketing?
 
[citation][nom]fuzznarf[/nom]Why would people downvote this? For one, the 7970 GHz Edition is faster than the 680. And second, AMD specifically markets Radeon and Piledriver advantages such as their 'Quickstream', 'App Acceleration', and 'Accelerated Video Converter' technologies. Do these technologies really make a difference when both CPU and GPU are AMD, or is it just marketing?[/citation]

I don't think that most people realized what you meant by your earlier post.
 
I will admit that I am a noob for saying this, but I still can't see why they are sticking with their pseudo-octo-core architecture... I wish they had just improved on the Phenom II X6 and made real hex-core CPUs. That might have closed the gap with Intel.
 
[citation][nom]dyc4ha[/nom]I will admit that I am a noob for saying this, but I still can't see why they are sticking with their pseudo-octo-core architecture... I wish they had just improved on the Phenom II X6 and made real hex-core CPUs. That might have closed the gap with Intel[/citation]
Everybody here is a noob when it comes to this, but maybe the old Phenom architecture was as good as it could get; the only logical step from there was to make another architecture, which slowly seems to be going in the right direction even though it eats more power. The thing is, a company so small, with so few dollars in its pockets, can still make such CPUs/GPUs. I just sit and wonder what AMD would really be if it had Intel's R&D funds.
 
[citation][nom]dyc4ha[/nom]I will admit that I am a noob for saying this, but I still can't see why they are sticking with their pseudo-octo-core architecture... I wish they had just improved on the Phenom II X6 and made real hex-core CPUs. That might have closed the gap with Intel[/citation] BD was a necessary step for AMD to take. IMO it is still an innovative/revolutionary design despite its flaws, i.e. high power consumption and high cache latency (and resource-sharing issues). If only they could have had more time to develop/fix these flaws before the release. PD delivers but still consumes much more power than Intel chips. Late and still power hungry, but still not a bad CPU, IMO.
 
Not that anyone will see this correction on page 9, or even care, but I just had a look at my electric bill, and being Danish, as the example in the article is, the kWh price is far above $0.40/kWh. That's 2008 prices. Today it's a little more than $0.49/kWh, or 23% above the price stated in the article.

I also have to stress that $0.49/kWh is the absolute minimum price you can pay for electricity here in Denmark, and includes a kWh discount because of special agreements with my workplace and several electricity suppliers.
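To put a kWh price like that into perspective, here is a quick back-of-the-envelope calculation. The 50 W delta and 8 h/day figures are made-up illustration values, not numbers from the review:

```python
# Sketch: annual electricity cost of a CPU's extra power draw.

def annual_cost(extra_watts, hours_per_day, price_per_kwh):
    """Cost per year of drawing extra_watts for hours_per_day, every day."""
    kwh_per_year = extra_watts / 1000.0 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

# Example: a chip drawing 50 W more, 8 h/day, at the Danish $0.49/kWh rate:
print(round(annual_cost(50, 8, 0.49), 2))  # -> 71.54
```

So at Danish prices, even a modest power-consumption gap adds up to real money over a few years of ownership.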
 
[citation][nom]dyc4ha[/nom]I will admit that I am a noob for saying this, but I still can't see why they are sticking with their pseudo-octo-core architecture... I wish they had just improved on the Phenom II X6 and made real hex-core CPUs. That might have closed the gap with Intel[/citation]

I've already explained some of the reasons for that several times, as have at least one or two other members in this article's comments section.

Phenom II's micro-architecture, 10h (Stars), was outdated (it was an adaptation of the Athlon 64 CPUs from 2003). It didn't have support for many modern instructions and in tasks that use at least some of them, it could lag behind greatly (AVX and some other floating point instructions are a great example of this where Phenom II might be up to four times slower than it would be if it supported them, maybe even more in some cases). The modular architecture is superior in many ways, but AMD has made several ridiculous mistakes along the way. Bulldozer, although arguably more of a proof-of-concept than a fail, was an incredibly poor implementation of almost every reasonably possible aspect of the CPU. I could go on and on about what was wrong with it, but the modular concept was not one of the problems.

Hardware/configuration-wise, Bulldozer's greatest issues are its design methods (auto-design tools simply aren't as good as transistor-by-transistor designs from expert engineers), huge cache latency (especially on the L3; it might be so high that the L3 cache doesn't actually help performance), insufficient x86 decoding capacity per module, desktop models being configured as server-oriented CPUs, Windows not being optimized to deal with that poor configuration properly, and many other flaws in the designs, such as soft-edge flip-flops, crap branch prediction, and much more. I find it a little impressive that Bulldozer did as well as it did despite the huge problems it has.

For example, simply disabling one core per module to alleviate the x86 decoder deficiency (there isn't enough decode capacity for two cores, but there is enough for one) would increase the performance of the remaining cores significantly while dropping power consumption significantly, a huge bonus to lightly threaded power efficiency. Doing this with the FX-81xx models turns them into quad-module, quad-core CPUs instead of quad-module, eight-core CPUs, and makes them more consumer/desktop-oriented, because it puts a greater focus on lightly threaded performance and power efficiency than on highly threaded throughput. It gives you a quad-core CPU that at stock is a little faster than a Phenom II X4 of the same clock frequency, while being far more power-efficient and having much more overclocking headroom.
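On Linux you can experiment with this yourself via the sysfs CPU hotplug interface. The sketch below assumes the two cores of each module have adjacent logical IDs (0-1, 2-3, ...), which you should verify against /sys/devices/system/cpu/cpuN/topology on your own system before running anything as root:

```python
# Sketch: take the second core of each Bulldozer/Piledriver module offline,
# leaving one core per module with the full x86 decoder to itself.
# Assumption: sibling cores in a module have adjacent logical CPU IDs.

def cores_to_offline(total_cores):
    """Return the odd-numbered core IDs (second core of each module)."""
    return [cpu for cpu in range(total_cores) if cpu % 2 == 1]

def offline_commands(total_cores):
    """Shell commands (run as root) that would take those cores offline."""
    return ["echo 0 > /sys/devices/system/cpu/cpu%d/online" % cpu
            for cpu in cores_to_offline(total_cores)]

if __name__ == "__main__":
    print(cores_to_offline(8))  # FX-81xx -> [1, 3, 5, 7]
    for cmd in offline_commands(8):
        print(cmd)
```

Writing 1 back to the same files brings the cores online again, so the experiment is easy to undo without a reboot.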

Instead of outright disabling cores, you can cut down the P-states of each core and prioritize them properly with software such as PSCheck and K10Stat, so that they strike a balance between highly threaded and lightly threaded performance, kind of like the Phenom II X6 compared to the FX-41xx and FX-81xx CPUs. That would have been a good successor to the Phenom II X6.
 
[citation][nom]saturnus[/nom]Not that anyone will see this correction on page 9, or even care, but I just had a look at my electric bill, and being Danish, as the example in the article is, the kWh price is far above $0.40/kWh. That's 2008 prices. Today it's a little more than $0.49/kWh, or 23% above the price stated in the article. I also have to stress that $0.49/kWh is the absolute minimum price you can pay for electricity here in Denmark, and includes a kWh discount because of special agreements with my workplace and several electricity suppliers.[/citation]

I have to correct myself. It's not the cheapest price, actually. It's the cheapest price if you have elected to be supplied only by renewable energy sources. A so-called 100% green bill.
 

Sorry for being late to answer you.

I was actually trying to compare this case to a basic example, without any undervolting or anything like that involved. What I'm trying to say is that most people care more about performance than power consumption (of course, for cards at the same price). Maybe my example is wrong because I rushed it, but you get the point.

That chart was based on AMD's roadmap. There is no evidence yet that AMD is going to make a large improvement from Piledriver to Steamroller. All I have to go on is this picture/slide.

zambezi-slide-10.jpg


Yes, it was a pretty nice run from AMD on the overclocking front. Yes, it can match i5/i7 single-threaded performance (I've seen on TechPowerUp that they got 8.4x in Cinebench overclocked, so that's about the same as an i5/i7 single-threaded). But we are not talking about when the i5/i7 is overclocked; it will take the lead back (OC'd vs. OC'd), even with whatever the temperature issues are with IVB (I mean with the max OC you can get from it).

So it would be really nice if AMD released a quad-core (dual-module) CPU with good single-threaded performance. I really hope they can do that in the next release (and of course I hope that picture/slide is wrong :)).
 