AMD FX: Energy Efficiency Compared To Eight Other CPUs


vitornob

Distinguished
Jun 15, 2008
[citation][nom]vitornob[/nom]Let me make some minor corrections. 1 kWh (kilowatt-hour) costs 11 cents. Using multi-threaded applications for 6 hours on an FX-8150, to get the job done we would use: AMD Wh -> 264.0 Wh; 2600K Wh -> 131.4 Wh. 365 days/yr means: AMD kWh/yr -> 96.36 kWh -> PSU 80% eff -> 120.45 kWh; 2600K kWh/yr -> 48.05 kWh -> PSU 80% eff -> 60.06 kWh. Difference, btw, is about 60 kWh a yr -> $6.60. Please correct me if I'm wrong in any step[/citation]

I wrote it wrong.

AMD Wh -> 960.0 Wh
2600K Wh -> 477.8 Wh
365 days/yr means:
AMD kWh/yr -> 350.4 kWh -> PSU 80% eff -> 438.0 kWh
2600K kWh/yr -> 174.4 kWh -> PSU 80% eff -> 218.0 kWh

Difference, btw, is about 220.0 kWh a yr -> $24.20/yr
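
For anyone who wants to sanity-check those figures, here's a minimal Python sketch of the same calculation. The load-draw numbers (160 W for the FX-8150, ~79.6 W for the 2600K, implied by the Wh figures above) and the 11-cent rate are the post's assumptions, not measurements of mine:

[code]
# Back-of-the-envelope check of the numbers above. Assumed figures
# (taken from the post, not measured here): 160 W load draw for the
# FX-8150, ~79.6 W for the 2600K, 6 h/day, 80% PSU efficiency, $0.11/kWh.
RATE_USD_PER_KWH = 0.11
PSU_EFFICIENCY = 0.80
HOURS_PER_DAY = 6
DAYS_PER_YEAR = 365

def annual_cost(load_watts):
    """Annual wall-socket cost in USD for a CPU drawing load_watts during use."""
    wh_per_day = load_watts * HOURS_PER_DAY            # e.g. 160 W * 6 h = 960 Wh
    kwh_per_year = wh_per_day * DAYS_PER_YEAR / 1000   # 960 Wh * 365 = 350.4 kWh
    kwh_at_wall = kwh_per_year / PSU_EFFICIENCY        # 350.4 / 0.8 = 438.0 kWh
    return kwh_at_wall * RATE_USD_PER_KWH

fx8150 = annual_cost(160.0)      # ~$48.18/yr
i2600k = annual_cost(477.8 / 6)  # ~$23.98/yr
print(f"Difference: ${fx8150 - i2600k:.2f}/yr")  # ~$24.20, matching the post
[/code]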
 
Guest
Watching all these comments (not the review, which is very informative), I think I know how the first guy to make a sub-compact car felt. Everyone knows this is a new architecture, and at first none of the software was "optimized" for it (perhaps a myth there?). But if we all give it a few months (6 or so), you might just be pleasantly surprised. As for all the hype, I never paid much attention to it, most of us didn't, and the REAL fanboys knew it wasn't going to be competitive in the first place. So just sit back and watch if you will, because with all this new architecture lying around, it definitely will improve, just not at the pace that some would like.
 

silverblue

Distinguished
Jul 22, 2009
[citation][nom]amk-aka-phantom[/nom]AMD fanboys are so hilarious. Just ignore all benchmarks where Bulldozer loses... Core 2 Quad Extreme QX9770 was an extremely powerful CPU targeted at multimedia production; I can easily believe it beating Bulldozer in certain tasks. It is not an error.[/citation]
Maybe it's not, but it's a synthetic (and outdated) benchmark. If you wish to take that as gospel, be my guest. Is there any possibility at all that it just works far better on Intel CPUs?

[citation][nom]amk-aka-phantom[/nom]Ummm.... wtf do you mean "AMD purposefully binned better quality chips for the server market"? So you mean, the Bulldozers that will make it to the servers will SUDDENLY have much lower power draw? Yeah, right.[/citation]
I said "if". I proposed an idea, and you changed it to a statement. Regardless, we'll see how much juice they drink in the reviews. How do you know they didn't bin them? Power consumption is far more of an issue for them in the server market than in the enthusiast market.

[citation][nom]amk-aka-phantom[/nom]There's a separate Bulldozer server line which came before FX series. FX series was NOT meant to be a server chip. It was made for gaming and failed in that area. The only area where it might be useful is the workstation. BD scores top benchmarks in Photoshop - it will be great for a light Photoshop user (a heavy user who earns money with Photoshop will just get a Xeon). It also falls in between 2500/K and 2600/K in some other multimedia creation software, both price- and performance-wise. That's where these chips must logically end up.[/citation]
If Bulldozer wasn't meant as a server CPU, you might need to explain why "about 800 million transistors are dedicated to the memory controller, I/O interfaces and various other logic" (taken from http://news.softpedia.com/news/Ex-AMD-Engineer-Blames-Bulldozer-s-Low-Performance-on-Lack-of-Fine-Tuning-227816.shtml). Why would a desktop CPU need that? Why would a desktop CPU need 2 billion transistors in general? It may not be targeted as a server CPU, but it sounds like it's carrying around all the tools to be one.
[citation][nom]amk-aka-phantom[/nom]Well, at least good to see that the fanboys have settled down with their "patches" and "fixes" that are allegedly supposed to set BD's performance straight...[/citation]
Bulldozer IS broken... we know. It's the second major architecture in a row that AMD have failed to design to work properly with the most modern OS of the time. The process is lacking and the design needs tweaking. Enough with the fanboy comments as well; the HH review plus the idea of an L1 patch gave plenty of people hope that something else was causing the issues, and then it all got trolled to hell thanks to that Quinetiam person.
 

silverblue

Distinguished
Jul 22, 2009
[citation][nom]Draven35[/nom]Looking at the Matlab results, the more floating point a test uses, the worse BD performs. Maybe AMD should have put in four FPU cores as well.[/citation]
Zambezi has four modules, each containing one 256-bit FPU which can handle 2x128-bit or 1x256-bit instructions per clock. It may be that the implementation is comparatively immature compared to what they had before.
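
To put that arrangement in perspective, here's a rough back-of-the-envelope sketch of the theoretical peak single-precision throughput it implies. The 3.6 GHz clock and the assumption that each 256-bit unit retires one fused multiply-add per cycle are mine for illustration, not figures from the article:

[code]
# Hypothetical peak single-precision FLOPS for a 4-module Zambezi chip.
# Assumptions (mine, not from the article): 3.6 GHz clock, one 256-bit
# FMA per shared FPU per cycle, i.e. 8 single-precision lanes * 2 FLOPs.
MODULES = 4                 # one shared FPU per module
LANES_SP = 256 // 32        # 8 single-precision lanes in a 256-bit unit
FLOPS_PER_LANE = 2          # a fused multiply-add counts as 2 FP ops
CLOCK_HZ = 3.6e9            # assumed FX-8150 base clock

peak_gflops = MODULES * LANES_SP * FLOPS_PER_LANE * CLOCK_HZ / 1e9
print(f"Theoretical peak: {peak_gflops:.1f} SP GFLOPS")  # ~230.4
[/code]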
 
Get real, Tom's. It's a fact that Windows 7 doesn't even recognize BD's architecture, which is why it does so badly in the benchmarks. Windows 7 doesn't know where to place the threads and they end up out of order, hence the bad benchmarks. It's a good processor, but today's software not only does not recognize its architecture but can't utilize it efficiently. Why not show some benchmarks from the server side? Then you will see it shine and put a hurtin' on Sandy Bridge Xeons.
 

silverblue

Distinguished
Jul 22, 2009
[citation][nom]Geek[/nom]It's a fact that Windows 7 doesn't even recognize BD's architecture, which is why it does so badly in the benchmarks.[/citation]

AMD's fault though. They did the same with Phenom and Vista - Vista was throwing workloads at idle cores which took too long to ramp back up. Took them until Windows 7 to fix it. At least, if there's one thing Bulldozer has fixed, it's the delay in ramping up and down clocks.
 

amd has some cool(sounding) names for their stuff: phenom, bulldozer, trinity, valencia, interlagos, opteron, athlon, piledriver. despite how they perform irl, they're cool(sounding) names!
otoh intel's names are dorky(sounding): sandy bridge, ivy bridge, arrandale, clarkdale, silvermont, pine trail, prescott, lynnfield. despite how they perform irl, they're dorky(sounding) names!
i am not taking intel or amd's reasons for naming their products into account. ;D
 

pelov

Distinguished
Jan 6, 2011
[citation][nom]zodiacfml[/nom]AMD knew it couldn't compete and thus decided to create more cores and fit them inside a reasonable TDP and die size.[/citation]

What's worse is that even this isn't true. The chip at times resembles a 4 core 8 thread CPU rather than a straight 8 core chip. It's like hyperthreading, but only sometimes.

It's impossible to justify buying these things. They consume an atrociously large amount of power, perform incredibly poorly in single-threaded tests, and even when multi-threaded apps are tested, the chip either barely pulls away or again finds itself in the middle. If this thing were half its current going price I wouldn't mind having one in a system, but even then only if it didn't consume so much power.

I'm not looking forward to Trinity -- at all. Bulldozer cores on an APU will mean higher wattage and weaker performance. They really need to either shrink the Deneb/Thubans or work on the Stars/Husky architecture to prevent another astronomical failure, and if we're lucky, maybe even introduce them into the desktop segment for enthusiasts as well. Maybe this time avoid any estimations of performance improvements from generation to generation. For example, who would have thought that when AMD said there would be a 50% performance increase from Phenom II to Bulldozer, they meant a decrease?
 

silverblue

Distinguished
Jul 22, 2009
People keep talking about 8-core Phenom IIIs or Llanos. An 8-core Stars/Husky would be interesting but I'm not sure it'd work.

Trinity will have Piledriver cores so there's always (some) hope that they'll fix it beforehand, or have done so already.
 

pelov

Distinguished
Jan 6, 2011
[citation][nom]zodiacfml[/nom]Tom's, why not create a more demanding task, like running multiple browsers while in Photoshop and transcoding?[/citation]

You're essentially arguing that they should focus more on testing the chips with a larger emphasis on multitasking -- in other words, multiple applications that are well-threaded (or not) and use all 8 (or 4) cores when running. They did. The multithreaded programs are there to test just that, and it still didn't perform all that well if you consider price, power, and the numbers Intel or the Phenom II put up.

You think running a test where Blender is working while Bulldozer is asked to perform FPU tasks will mean it pulls ahead? Sorry to break it to you, but it'll fall even further behind the rest, and quite quickly at that.
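
For what it's worth, a multitasking test like the one zodiacfml asks for is easy to script. Here's a minimal Python sketch (the busy-work loop is a made-up stand-in, not a real transcoder or Photoshop workload) that times a CPU-bound task alone and then two copies running concurrently:

[code]
import time
from multiprocessing import Process

def busy_work(n=20_000_000):
    """Stand-in CPU-bound task (a real test would run a transcoder, etc.)."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(procs):
    """Start the given processes together and return wall-clock seconds."""
    start = time.perf_counter()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    solo = timed([Process(target=busy_work)])
    pair = timed([Process(target=busy_work), Process(target=busy_work)])
    # On a chip with enough real cores, 'pair' should stay close to 'solo';
    # contention (shared FPUs, caches, memory bandwidth) shows up as a gap.
    print(f"one task: {solo:.2f}s, two concurrent: {pair:.2f}s")
[/code]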

[citation][nom]silverblue[/nom]People keep talking about 8-core Phenom IIIs or Llanos. An 8-core Stars/Husky would be interesting but I'm not sure it'd work.Trinity will have Piledriver cores so there's always (some) hope that they'll fix it beforehand, or have done so already.[/citation]

More cores doesn't necessarily mean better. Adding cores, depending on the architecture of the chip, can mean lengthening the pipeline to accommodate the cache, etc., which is essentially what happened with Bulldozer. I don't think people are necessarily suggesting an 8- or even 6-core chip when they ask for a Stars/Husky improvement, but rather a significant bump in IPC, lower power consumption, and perhaps an AMD-built alternative to Hyper-Threading. Sticking to Bulldozer cores with minor improvements for Piledriver just doesn't seem like a good idea. When you consider their "10-15%" performance increase forecast, it looks even worse.

Just dump it. Put it in the closet and let it collect dust. When "more cores > IPC" comes about and software finally comes around, they can take it out, polish it up, and they'll have a decent chip. But please, AMD, for the love of god, don't just make small improvements or bump up the clock speed for Piledriver. You'll only dig yourself a deeper hole.
 

dgingeri

Distinguished
The curse of "FX"

First, a movie that had so much potential, but just plain sucked in the end.

Second, a video card series, also having much potential, but just sucking in the end.

Third, a cable TV channel. So much potential, but in the end, just another TV channel without anything more than anyone else.

Now, we have a processor that had so much potential, but in the end, just plain sucks. The previous version was just a more expensive version of the Opteron. I ended up buying the Opteron 185 for $200 less back in the day, and it overclocked better than most "FX" processors.

Is there anything that has "FX" in the name that doesn't suck? Are companies going to realize the curse and quit using that letter combo?
 

dgingeri

Distinguished
[citation][nom]silverblue[/nom]People keep talking about 8-core Phenom IIIs or Llanos. An 8-core Stars/Husky would be interesting but I'm not sure it'd work. Trinity will have Piledriver cores so there's always (some) hope that they'll fix it beforehand, or have done so already.[/citation]

They should have made a desktop version of this: http://www.newegg.com/Product/Product.aspx?Item=N82E16819105273

It works well in servers. I ought to know; I have 4 of them (Dell PE R715) for testing. They perform well and can push a lot of data to our backup systems. With the same 2-port FC HBAs or SAS controllers, they push data better than the Xeons in the same price range. They make excellent file servers. Just don't expect them to do well with any sort of FP work.

A desktop version would sell, but it wouldn't be as good, or give AMD as much profit, as the Core i7-2600K. If they were to bring out the 8-core with 1MB of L2 on each core along with an 8MB L3, they might be able to keep up with Intel's 2600K. The biggest design mistake they made was leaving the Phenom with only 512KB of L2 cache per core.

Well, in any case, Bulldozer just plain sucks. They need to just scrap that design.
 

silverblue

Distinguished
Jul 22, 2009
I know, I know, it feels like I'm spamming this article, but I just thought I'd reply to this one...

[citation][nom]zodiacfml[/nom]Tom's, why not create a more demanding task, like running multiple browsers while in Photoshop and transcoding?[/citation]
Good point. When AMD compared Llano to SB, they did it in a multitasking environment, although that included the onboard GPU for both; the situation is different between SB and BD, since BD has no onboard GPU. Still, disable SB's onboard GPU, and test away.
 

shqtth

Distinguished
Sep 4, 2008
AMD needs to cut down the L2 cache size and get the latency down. Chances are that threads move around the CPU, so having lower cache latency would be best (for keeping the caches in sync). Also, since threads move around the CPU so much, having a large L2 cache is pointless. Maybe the L2 cache should be shared within each module, as that would help with threads changing cores and help with cache misses? Larger L2 caches only help with an OS that is not stupid and keeps a thread tied to a core.

Somehow the CPU seems to be stalling. All that extra cache also increases power usage, and in a way, higher-latency cache makes the CPU burn more power, since it has to wait and energy is used for nothing. And since the OS is dumb and moves threads from core to core, L2 contents have to keep getting duplicated from core to core, and the increased latency and size mean this process takes longer and/or L2 gets bypassed and the L3 cache gets used more, further slowing things down, lowering efficiency, and increasing latency.
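
On the "OS keeps moving threads around" point: on Linux you can actually pin a process to one core and see whether keeping its working set in a single L2 changes the timings. A minimal sketch, assuming Python 3.3+ on Linux (os.sched_setaffinity isn't available on Windows):

[code]
import os

# Pin the current process to logical CPU 0 so the scheduler cannot
# migrate it between cores (and force its working set to be refetched
# into another core's L2). Linux-only: os.sched_setaffinity.
os.sched_setaffinity(0, {0})   # pid 0 = this process; allowed CPUs = {0}

print("running on CPUs:", os.sched_getaffinity(0))
# ... run the cache-sensitive workload here, then compare timings
# against an unpinned run to gauge the migration penalty ...
[/code]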
 
Guest
AMD should have been honest and said, "We need to delay the launch until next year; the performance is not up to our standards." Sure, we would have been upset, but as an AMD fan, I'm not going to purchase the FX. I'll purchase a Llano triple-core 65W for my HTPC, but that's it.
 
Guest
Bulldozer is kind of a disappointment any way you look at it, but I would like to see more "real life" situations. Many SB K-series users overclock their rigs, same with PII Black Edition users, and I really want to see a test with reasonable optimization settings (OC and playing with the voltage) and compare them. Like my X955: idle dropped to 0.8 V @ 1.2 GHz, which is lower overall idle power than stock, while OCed to 1.4 V @ 3.6 GHz under load for higher performance.

If BD could handle 4.5 to 5 GHz easily on air, benchmarks at those speeds might be very interesting.
 