AMD's FX-8150 After Two Windows 7 Hotfixes And UEFI Updates

I don't think anyone, especially those in the Intel camp, is claiming that AMD's latest FX series isn't "good enough," because I think it is. Is AMD advertising these FX chips for workstations, or do they recommend a separate part for that, like they do for servers? The "good enough" stamp, as I define it in my mind, is specific to the typical retail customer, hobbyists, and amateur audio/video enthusiasts.

Tom's, the next time you revisit MMO performance testing on SSDs, CPUs, or CrossFire-vs.-SLI comparisons, please DO test the FX-8150 (or better, whatever is current at that moment) with Star Wars: The Old Republic, paired with the current and next-gen single-GPU flagship models from Nvidia and AMD, specifically the 580, 6970, 7970, and whatever else is on top at the time of testing. I won't trust any other results regarding SW:TOR and the Bulldozer parts. :)
 
The marketing and engineering of AMD's FX/Bulldozer line totally screwed the pooch. Perhaps that is why so many people left right around the release of this POS CPU... We were NOT expecting Bulldozer to outdo Intel; we wanted it to be competitive, i.e., 0~10% slower at 25~50% cost savings.

But instead, we got a CPU that is 10~50% slower and costs $50 more! (Newegg, as of 1/26/2012)
- A CPU that drinks more power
- A CPU that is bigger
- A CPU that is a "fake" 8-core CPU...
- A CPU that runs hotter

In a few months, Intel launches another CPU that is going to be easily faster than Sandy Bridge... I sense Bulldozer prices falling.

Seriously, AMD might have gotten more out of Stars with a die shrink than this NetBurst-clone CPU they pooped out. Didn't AMD learn anything from Intel's mistake with the P4?! Hint hint: longer pipelines = performance loss!

Of course, even the $190 i5-2400 is easily faster than the fastest AMD... so why bother, when you can save $80? Who cares if you can OC the FX CPU by another 1~2GHz, which may put it close to a stock i5 in performance? Big deal; OC the i5 and it'll step right past the AMD again.

Most of my system builds and my personal PCs are AMD... my next one will NOT be an AMD.

Now, AMD could have saved face and not come out looking like the clowns that THEY ARE.
A) The FX-8150 should be labeled and SOLD as a 4-core CPU. It LOOKS BAD when your "8-core" $270 CPU gets its butt kicked by a $200 4-core CPU from the competition.

B) The FX-8150 should be sold at $180 at the most; that would put it on par with Intel's $180 CPU in performance. Still not exciting, but it would be competitive... though in a few months, Intel's future $150 CPU may be twice as powerful as AMD's $280 flagship, for all we know.

Sorry AMD... I know Intel screwed you and everyone else with their back-room deals, but you should have done better than this. *I* would not have made NetBurst II: Attack of the Dumb.
 
Belardo, are you trying to say that Bulldozer is still 10% slower than Intel even in the various productivity benchmarks that it won?

Or are you trying to say that the aggregate score of 14% slower is discernibly worse to end users than your 10% target, or even discernible at all? Or do you just enjoy ignoring the facts and figures and making up your own?
 
Advanced Micro Devices has some serious power here (pun intended... and not). I'm actually on my old Phenom II 940 Black system as we speak. It's a good system, but I would prefer an Intel i3 system for power consumption and performance per watt. Having an efficient, cool-running rig is a must these days. Even with pricing really low on AMD systems, I find it difficult going back to them at the moment.

The good-sized LAN I'm running is getting expensive, with electricity prices being high. I recently replaced one of my old AMD rigs (Opteron 185) with a newer Intel Core series setup, upgraded the old video card (8800 GTS 320) to a Zotac 550 Ti, and replaced the old CRT monitor with a 1080p LED. I'm saving around 10 dollars a month with the more efficient setup, and that's just this one system!

Performance is important, and AMD has enough, but for me cool running, calculations per second, and workload speed also matter. It's early for AMD; they have been feeling pushed and hurried to bring out something good. In time these problems of AMD's will be a thing of the past. I would build one if family or a friend needed a good, cheap-performing system (But not me, I know better). I'm in it for the long haul, and for the value and savings over a machine's life of service in my LAN. But I do hope for the best for AMD.
 
Dollar for dollar, most folks are better off with an i5-2500K.

That being said, AMD is on the right track in the sense that BD is a different product, different target market, and different approach than Sandy Bridge.

You will never out-Intel Intel... they simply have too many resources: fab-process dominance, R&D money, and a stranglehold on the PC ecosphere.

You can, however, if you are clever, find niches and cracks where you can beat Intel.

The SandForce SSD controllers are a great example... if it's a race to produce the fastest NAND, hey, everybody else is toast... but the SandForce controllers take a fundamentally different approach to the problem.

I give AMD at least partial marks for a good first effort at NOT taking Intel on head on...
 
I too would like to see some more realistic multi-tasking benchmarks. Admittedly, it's probably EXTREMELY difficult to develop such a test, but this IS Tom's Hardware, after all.

I'd like to see tests with multiple applications running at once to see how the two companies' architectures compare. How about ripping a Blu-ray, encoding some H.264 video, compressing a huge folder with WinRAR, and maybe some office apps running as well? Heck, I did exactly those things this afternoon. Like many other enthusiasts who come to Tom's, that's how we actually use our computers. Very rarely do I have only one or two apps running at once. Any time I do need only a few apps, I open up BOINC and crunch work units for SETI@home and CERN. (True, I like using up all the CPU/GPU time I can.)

Comparisons could be made of how long each task takes to complete, how fast each additional application starts when x number are already running, and what kind of lag there is when Alt-Tabbing, or even with the dreaded Windows+Tab cycler. Also, leave an anti-virus running real-time protection in the background, leave some music or a video playing on a secondary monitor, and see if they stutter at all while the computer is performing all kinds of other tasks.

These are the types of tests I'd love to see Tom's start doing. I know it won't be a perfect test the first try but you guys are constantly improving your methods.
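
To make the idea concrete, here's a minimal sketch (in Python) of what such a harness might look like. The HandBrakeCLI/7z command lines and office_macro.py are placeholder workloads I made up, not a vetted suite; the point is just launching everything at once and timing each task:

[code]
# Rough multitasking-harness sketch. The HandBrakeCLI / 7z /
# office_macro.py command lines are placeholders -- swap in real workloads.
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

WORKLOADS = {
    "h264_encode": ["HandBrakeCLI", "-i", "input.mkv", "-o", "out.mp4"],
    "archive":     ["7z", "a", "big_folder.7z", "big_folder/"],
    "office_sim":  ["python", "office_macro.py"],  # stand-in for office apps
}

def run_and_time(name, cmd):
    """Run one workload to completion and report its wall-clock time."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return name, time.perf_counter() - start

if __name__ == "__main__":
    # Launch everything at once, the way a real desktop session would,
    # then compare per-task completion times against solo-run baselines.
    with ThreadPoolExecutor(max_workers=len(WORKLOADS)) as pool:
        futures = [pool.submit(run_and_time, n, c) for n, c in WORKLOADS.items()]
        for f in futures:
            name, seconds = f.result()
            print(f"{name}: {seconds:.1f} s under full multitasking load")
[/code]

Run each workload alone first to get baselines, then together; the gap between solo and loaded times is the multitasking penalty you'd chart per CPU.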
 
A few more tasks to add to the multi-tasking tests: Windows Media Center or some other DVR software recording one or more HD streams in the background while also streaming previously recorded content to other machines on the network. This is actually not very demanding on the CPU, since any half-decent tuner does the encoding itself, but it's realistic, which is the point.

I love AMD and Intel and have built countless machines over the past 20 years using both their CPUs. For the past few years, though, it's been harder to justify the money for AMD CPUs given the huge advantage Intel has been holding over their heads. Since the garbage Prescott Pentium 4s went away and we got the first Conroes and Core 2s, Intel has been in the lead and hasn't shown any sign of letting up. If AMD can perform nearly as fast at the same price point and offer the kind of multi-tasking performance I've alluded to, I'd be more than happy to build a few AMD boxes for my next few builds.
 
[citation][nom]eddieroolz[/nom]Disappointing to see pretty much no changes; but at the same time, Windows 7 was fundamentally not architected in this way, so the real hope lies with Windows 8.[/citation]

The REAL problem here is:
"Wait! Bulldozer you come! It will eat and destroy everything"
"Ok, it's not that good, but better drivers/corrections will come! And NOW it will destroy"
"Well.. Win 7 isn't supposed to work don't?! Win 8 is the salvation"
Prediction:
"Well, Win 8 is not that better.. but piledriver will be an AWESOME CPU! Remember the Phenom I vs Phenom II days? Bulldozer vs Piledriver coming!!!"

And there we go.. go.. go..

As per AMD's own slides, Piledriver should have 10% more performance than Bulldozer. That increment doesn't even make it a better buy than Sandy Bridge, and it surely won't make it a better buy than the incoming Ivy Bridge.

AMD NEEDS to get it right, to hit the center of the target even inside a dark room, because I'm not feeling good about the future... we need competition.
 
[citation][nom]alidan[/nom]there is a bare minimum cost, which is 50k per wafer, try doing the math once, amd and intel don't pull in serious profit from their chips, intel pulls more in from the high end chips, but not as much as you think, amd sells the bad chips with locked cores, no not waste any wafer if possible.[/citation]
According to the information I've seen, the cost per 28nm wafer is more in line with $4,000~$5,000, and I cannot imagine 32nm costing 10X more at $50,000. Yield affects the cost, and obviously so do supply and demand. I'd like to see where you got "$50,000"; a link would be nice.

Here's an example of TSMC 28nm production -> http://www.xbitlabs.com/news/other/display/20110912192619_TSMC_Reportedly_Hikes_Pricing_on_28nm_Wafers_Due_to_Increased_Demand.html

Maybe you saw it in some thread, but those costs aren't typically available to the public. My best 'guess' is that a Sandy Bridge wafer costs about $3,000~$4,000.
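
For rough context, here's the back-of-the-envelope math in Python. The ~315mm² die size is the commonly quoted Zambezi figure; the 60% yield is purely a guess for illustration, not a real number:

[code]
# Back-of-the-envelope: what a wafer price implies per chip.
# Die size ~315 mm^2 (commonly quoted for Zambezi); the yield and
# wafer prices here are illustrative assumptions, not AMD's numbers.
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard approximation: gross dies minus edge losses."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

for wafer_cost in (5_000, 50_000):
    gross = dies_per_wafer(315)
    good = int(gross * 0.60)          # assume 60% yield, purely a guess
    print(f"${wafer_cost:>6,} wafer -> {gross} gross dies, "
          f"{good} good -> ${wafer_cost / good:.0f} per chip")
[/code]

At ~186 gross dies and 60% yield, a $5,000 wafer works out to roughly $45 per chip, while $50,000 would be roughly $450 per chip, which is more than the FX-8150's retail price before packaging, testing, and margin. That's why the $50k figure looks implausible.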
 
Piledriver not being on the same socket killed Bulldozer for me; I could take iffy performance if there were light at the end of the tunnel.

Combined with overcharging in the first place, AMD has painted itself into a corner, where the kind of people who want what AMD can deliver (professionals, not enthusiasts) are all well managed/contracted by Intel.

So even buying a 'cheap' FX processor and overclocking it, you won't touch Sandy Bridge's dual-core chips.

It would be nice for a serious competitor to enter this domain, but the desktop market has made it really hard for someone new to enter.
 
I second the motion to do these sorts of multitasking tests. My common computer usage is having a (mostly idle) Linux webserver/misc-tasks VM running in VirtualBox, streaming Netflix, various browser tabs (30+ sometimes) viewing who knows what (ads using Flash or the like), and encoding video (gotta dump my DVD collection to an ISO store for the WD TV Live on the TV), so I usually have DVDFab or DVDShrink chunking away, all while having some windowed video game up (WoW, BF3, etc.). If I wanted to quickly (7-)zip and send some files to a friend, would that cause my Netflix to stutter? I upgraded the wife's computer from an X2 6000+ to an X4 955 BE for just such a reason: her Netflix would lag/freeze while playing games because her CPU red-lined. The four cores of the 955 BE were enough to cope, but my Q6600 is eating it hard under full load, even OCed to 3GHz (keeping it conservatively stable). Sure, an upgrade to Ivy Bridge in a couple of months will bring things back in line, but how well compared to an FX-8xxx chip?

Oh, and Crashman, you should (like I suggested before, but you seem to have missed it) test the FX series on 2166MHz RAM. Other sites have shown that the FX chips are highly dependent on RAM speed, but they didn't test it nearly as thoroughly (they only posted one or two charts with the faster RAM, since it wasn't their "objective"). Test it and put it to rest, for better or worse.
 
After reading all of your comments, and of course the content presented, I've realized two things: Intel fans are hard to argue with, even with non-partisan benchmarks, almost like Apple fans; and many of them go for raw computational power instead of the balanced qualities of a home PC that AMD fans tend to appreciate. I've also noticed something that no one else seems to realize: the only tests in this review that AMD processors win are from non-US software developers, like 7-Zip from Russia and ABBYY FineReader from Ukraine, companies out of Intel's reach for now, and that aspect of the benchmarking here at Tom's makes me wonder about the fairness of this site.
 
Intel makes the most sense for those looking for long-term power (and thus overall cost) savings at a given performance point. Intel chips overclock more easily, and they can reach a higher performance point than AMD's. I use Intel in my personal rig. However, I have two other AMD systems in my home. These provide enough CPU power, albeit at a higher wattage than an Intel CPU, but the mobo was cheaper than the PSU (and I got a good but cheap Antec EarthWatts) and the CPU was just as cheap. It made a great HTPC, with a price point a small step above devices such as the WD TV Live or Boxee but with the full benefits of an HDMI-connected Win7 PC using a "retired" 9600 GSO video card; so not only can I watch Netflix and network-shared ISOs (the WD TV Live could only do the latter), but I can spin up WoW or Skyrim if I so choose.

As for performance and Intel not being able to "reach" some companies: 7-Zip is open source. The others are de facto benchmark suites or meant to be real-world uses of the system. Who wants to watch FurMark loop? I prefer to see how LAME encoding performs, since that is something I use. I just know an Intel chip will crunch it faster than an AMD, especially when you can OC a 2500K to 4.4GHz rather easily...
 
- "More and more evil laughs are ringing in my ears; in the meantime I've heard about 8 cores, 4 more laughs at once."
 
If you like antique engines, or you're researching aspects of old engine designs, then Bulldozer is for you.
 
Why doesn't AMD, instead of making the two execution units in each module separate, find a way to make them sub-execution units within one larger execution unit? The larger unit could let both sub-units work together as a single wide execution unit, or as two separate execution units. That way you'd get really good single-thread performance and still have good multi-thread performance.
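
Here's a toy Python model of that idea, with made-up numbers. A real chip wouldn't scale this cleanly, since a single thread rarely has enough independent instructions to fill a wider unit, but it shows the trade-off the poster is describing:

[code]
# Toy throughput model: two narrow execution units that either run two
# threads separately or "fuse" into one wide unit for a single thread.
# All numbers are illustrative, not measurements of any real CPU.

def cycles_needed(instructions, width):
    """Cycles to retire an instruction stream at a given issue width."""
    return -(-instructions // width)   # ceiling division

WORK = 1_000_000                       # instructions per thread

# Split mode: two threads, each on its own 2-wide unit (like a module).
split = cycles_needed(WORK, 2)
print(f"split mode: 2 threads finish together in {split:,} cycles")

# Fused mode: one thread gets both units as a single 4-wide unit --
# roughly the single-thread boost the poster is wishing for.
fused = cycles_needed(WORK, 4)
print(f"fused mode: 1 thread finishes in {fused:,} cycles "
      f"({split / fused:.1f}x faster single-thread, in this ideal model)")
[/code]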
 
The core question seems to come down to the FPU; at least, that's a lot of the reasoning behind saying it's not a so-called true core.
Let me ask a simple question if this holds true...
How many cores were in an Intel 286, 386, or 486SX CPU?

By the confused logic and the reasons given by a lot of posters, none; those chips didn't have any cores. Why? Because by today's reasoning, a complete CPU core has to have its own FPU. Do you follow me?
Whether something is a core has nothing to do with it having an FPU at all. The FPU just got added with the 486DX line, when they took the math coprocessor and built it into the chip; before that, you had to buy the FPU/math coprocessor separately and plug it into a socket on the motherboard. Back in those days it wasn't really needed except by CAD users and some other math-oriented workloads.
So leave the FPU out of it. I'm hoping AMD has a future plan to drop the FPU from the CPU and hand its work to the video card's GPU, since GPUs can do that work a lot better than the built-in FPUs in today's CPUs. That's just a maybe, but they are binding the two together and may have found it better to go that route, keeping just a token FPU on the CPU for the cases where the video part is too busy to handle all the FPU demands.
It's all just theory, but I'd like to see how it would work out.
Their new CPUs definitely have 8 cores, with the FPUs being shared. That really has nothing to do with making them cores or non-cores, because the FPU doesn't define a core. All it does is make people's brains melt trying to come to grips with this new type of design. Why is it such a big deal anyway? Lots of folks are making the proverbial mountain out of a molehill in epic fashion... give it a rest. By the same token, Intel combines two threads on one core that aren't independent the way AMD's are, so are Intel's 4 cores really only 2? In a sense, yes.
Taking this one step further, you could say AMD was the only one to actually make legitimate 2-, 4-, and 6-core chips, etc.
And who really cares, as long as they work? The only trouble so far with AMD is more software-related than hardware-related, and when software starts running the way AMD designed the chip for, it will definitely run a bit faster than it does today.
Any CPU is only as good as the software made to work with it. This goes for every CPU and every company that makes them.
I mean, come on, folks, it's Computers 101: software is built to use what the CPU has to offer. The same goes for video cards, which is also first-grade-level basics. It's just sad that everyone is bashing AMD without taking this basic info into account. It will take software time to become optimized for this radical new design and make it work at its best, and it's almost a crime that people ignore this simple fact.
It would be nice if AMD had a lot more programmer support, including from Microsucks, because that could give it at least another 10% performance boost (it got 5 to 10% increases in Windows 8 beta testing). The final story on its performance isn't out yet, so we can't properly judge it, but as usual AMD is guilty before all the data is in. We all love to jump the gun on things like this, myself included, but I try to give them the benefit of my doubts because I know the final story isn't finished by any means.
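
If anyone wants to actually probe whether the shared FPU limits scaling instead of arguing about definitions, a crude Python sketch like this would do it. The loop sizes are arbitrary and CPython interpreter overhead will blunt the effect, so treat it as a rough probe, not a verdict:

[code]
# Probe: does FP-heavy work scale worse from 4 to 8 workers than
# integer-heavy work? On an FX-8150, 8 workers means each module's two
# cores share one FPU, so FP scaling should lag if the FPU is the limit.
import time
from multiprocessing import Pool

def int_work(n):
    acc = 0
    for i in range(n):
        acc = (acc + i * 2654435761) & 0xFFFFFFFF   # integer-only mixing
    return acc

def fp_work(n):
    acc = 0.0
    for i in range(1, n):
        acc += (i * 1.000001) / (i + 0.5)            # FPU-heavy math
    return acc

def timed(fn, workers, n=2_000_000):
    start = time.perf_counter()
    with Pool(workers) as p:
        p.map(fn, [n] * workers)                     # same work per worker
    return time.perf_counter() - start

if __name__ == "__main__":
    for fn in (int_work, fp_work):
        t4, t8 = timed(fn, 4), timed(fn, 8)
        print(f"{fn.__name__}: 4 workers {t4:.2f}s, 8 workers {t8:.2f}s, "
              f"scaling ratio {t8 / t4:.2f} (1.0 = perfect)")
[/code]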
 
[citation][nom]billcat479[/nom]The core question seems to come down to the FPU; at least, that's a lot of the reasoning behind saying it's not a so-called true core. Let me ask a simple question if this holds true... How many cores were in an Intel 286, 386, or 486SX CPU?[/citation]It's not just the FPU: http://www.tomshardware.com/reviews/fx-8150-zambezi-bulldozer-990fx,3043-4.html

Putting that aside, would anyone really want a super-fast 386?
 
*sigh*

I had an FX-8120; I got it for $209. It was great for the price and overclocked to FX-8150 speeds. But now I have an i7-2600K @ 4.5GHz. I can see why there was disappointment with Bulldozer, mostly over power consumption and lightly threaded apps. But honestly, I think once AMD starts fine-tuning the architecture, it could be competitive in the value segment. At this point, though, Intel has taken the "fastest if you can afford it" crown and run far away with it, and I don't see AMD catching up unless they somehow pull a magic trick and Bulldozer II actually meets expectations. Who knows. Oh, and to stay on topic: I had the patches before I got rid of it and really didn't feel any difference. However, I did notice that games and other apps were actually using the main cores of the modules.
 
Let's try the easy way: maybe someone can rewrite a Linux kernel scheduler to see if AMD's claims stand solid, and we can get something done on par with the latest i7. Eight cores, come on; all that fuss about why they needed Socket AM3+ to add more features, and still the 1100T is a better and cheaper option.
As an AMD fan, I see smoke but no cigar. No SSE4.1? It seems like they stacked eight Intel Atom processors, lol. Bad move, AMD.
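
On the Linux angle: you don't need to rewrite the kernel to try module-aware scheduling; pinning work to one core per module approximates what the Windows 7 hotfix does. A rough Python sketch, assuming the usual (0,1)/(2,3)/(4,5)/(6,7) core pairing (verify yours with lscpu -e) and a hypothetical ./my_benchmark binary:

[code]
# Hand-rolled approximation of the Windows 7 scheduler hotfix on Linux:
# pin work to one core per FX-8150 module so paired cores don't fight
# over shared front-end/FPU resources. The (0,2,4,6) mapping is an
# assumption -- check your topology with `lscpu -e` before trusting it.
import os
import subprocess

MODULE_FIRST_CORES = {0, 2, 4, 6}   # one core from each of the 4 modules

# Restrict this process (and any children it spawns) to those cores.
os.sched_setaffinity(0, MODULE_FIRST_CORES)

# Anything launched now inherits the affinity mask, so a 4-thread
# workload lands on four separate modules instead of two shared ones.
subprocess.run(["./my_benchmark", "--threads", "4"])
[/code]

Comparing a 4-thread run with and without that mask is a quick way to see how much the shared-module contention actually costs.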
 
[citation][nom]billcat479[/nom]The core question seems to come down to the FPU; at least, that's a lot of the reasoning behind saying it's not a so-called true core. Let me ask a simple question if this holds true... How many cores were in an Intel 286, 386, or 486SX CPU? By the confused logic and the reasons given by a lot of posters, none; those chips didn't have any cores. Why? Because by today's reasoning, a complete CPU core has to have its own FPU. Do you follow me? Whether something is a core has nothing to do with it having an FPU at all. The FPU just got added with the 486DX line, when they took the math coprocessor and built it into the chip; before that, you had to buy the FPU/math coprocessor separately and plug it into a socket on the motherboard. Back in those days it wasn't really needed except by CAD users and some other math-oriented workloads.[/citation]

And then 3D gaming came out... and even non-3D games started using the FPU.


[citation][nom]billcat479[/nom]So leave the FPU out of it. I'm hoping AMD has a future plan to drop the FPU from the CPU and hand its work to the video card's GPU, since GPUs can do that work a lot better than the built-in FPUs in today's CPUs. That's just a maybe, but they are binding the two together and may have found it better to go that route, keeping just a token FPU on the CPU for the cases where the video part is too busy to handle all the FPU demands. It's all just theory, but I'd like to see how it would work out.[/citation]

And then you get 2 FPS in Quake V (for example) because your GPU is weighed down with all the other calculations it isn't normally expected to do.
 
You just contradicted yourself.
 
You don't understand. He implied he would build a Bulldozer-based system for his relatives or friends, yet then he says "(But not me, I know better)".
 