AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame-baiting comments about the blue, red, and green teams and they will be deleted.

Enjoy ...
 
Of course the G620 is a 2-core/2-thread CPU and not oc'd either. Would be interesting to compare using an i3-2100 for about $40 more, since the latter has hyperthreading.

With Intel continuing to ratchet up performance, I think AMD is going to have trouble even competing with an Ivy Bridge i3 (3.4 GHz, 55 W) here shortly. Could you imagine if Intel actually released some of the CPUs it could but, for marketing reasons, doesn't? Like a G620 (2C/2T) but at 4.5 GHz.
 
Also, I again point out the complete news blackout that seems to surround PD. Do we even know what changes are being made to it yet, aside from the fact that it will clock higher?

Other than the "clock mesh" announcement recently I don't think there have been any details yet.

I thought there was going to be a new FM2 socket for Trinity but that appears to have been scrapped for now.
 
From the front-page article comparing an APU to CPU + discrete, at the total component cost of $140: http://www.tomshardware.com/reviews/pentium-g620-amd-a8-3870k-radeon-hd-6670,3140-12.html


Of course the G620 is a 2-core/2-thread CPU and not oc'd either. Would be interesting to compare using an i3-2100 for about $40 more, since the latter has hyperthreading.

Not to be a party pooper, but I still haven't seen a low profile 6670 (not even Sapphire =/), so they might be a great combo (which is good), but they still don't fit in my lil' HTPC case XD

Also, the MoBo price... Did it have the same set of features as the Asus? I mean, I have that MoBo and it has a TON of things. It's one of the few integrated audio chipsets with almost no annoying signal noise. I have it attached to a pair of studio monitors, so any noise is very noticeable.

And yeah, the A8 heats like a stove, nothing to argue there 😛

hopefully AMD is aware of it before it has another Bulldozer fiasco..

another note:
AM3+ is the last, there is no planned AM4 at this time.
the one socket moving forward will be FM2..
bye FM1..

I really feel bad when I think about that, mal. I'm a FM1 and AM3+ owner, lol. Well, it's more like "owned" at the moment, hahaha.

Cheers!
 
hopefully AMD is aware of it before it has another Bulldozer fiasco..

another note:
AM3+ is the last, there is no planned AM4 at this time.
the one socket moving forward will be FM2..
bye FM1..

Is FM2 a reality though? It's been surprisingly quiet on this topic. For a 2H release the mobo companies would need to be bulk ordering these by now.

I thought FM2 was to allow a triple-channel DDR3 interface, which the APU could certainly use. AMD has announced they're sticking with AM3+ for Bulldozer/Piledriver through 2013, so maybe Trinity will stick with FM1 as well.

A new chipset has been leaked (A85X) but that's mainly to add native USB3.0.
 
This is the trouble with the new guys - they weren't here a year ago and thus have no memory of past AMD promises and subsequent failures to deliver, which is why many of us now favor Intel, since Intel usually comes much closer to the mark.

Your post also highlights the problem with "older" guys that pretend to be neutral and unbiased. (BTW: a lot of these "unbiased" people favored Intel before Barcelona was released, so the statement of yours quoted above is not truly accurate for many, if not most, of these people.)

What really happens is that if a new AMD chip comes out and it doesn't completely "trash" the competition but merely competes within an acceptable percentage range based on price, then some people make up their minds that it is a total failure and they will refuse to update their opinion regardless of any new data that becomes available.

Often the chip and/or software or drivers mature and then the chip competes even better. But since these people already decided it is a failure... they are too hoodwinked to notice and refuse to modify their outdated opinions. In fact, to make things worse, they'll go to great lengths and flame people on forums for pages based on old facts that may no longer be relevant, and they will ignore newer data or pretend it is not relevant because it doesn't support the opinion they had already created.

And these people consider themselves to be "neutral" and "unbiased". It is actually kind of interesting that they truly do not see that their actions are the definition of being biased. (Actually what is kind of fun to watch is what things they will do to make sure that they can maintain the opinion they want to support at all costs.)
 
"2.9.1 L1 Instruction TLB Specifications
The AMD Family 15h processor contains a fully-associative L1 instruction TLB with 48 4-Kbyte page entries...

Models 10h–2fh have 64 entry data TLB and 15h has 32 entries."

http://support.amd.com/us/Processor_TechDocs/47414_15h_sw_opt_guide.pdf

"2.12 Load-Store Unit
For Models 00h-0Fh the load queue is 40 entries deep. For models 10h-2Fh the depth of the load queue is increased to 44 entries. "


Assuming Trinity is model 2Fh, this indicates a change from 48 to 64 entries for Trinity.

A small increase in the load queue for Trinity, from 40 to 44 entries.

Also, there's a new latency table in section "B.4 FPU Instruction Latencies" which someone could go through to find the differences. There are 60+ pages, so dig in.
 
Found 1 more on latencies.

"Table 15. Unit Bypass Latencies
Table 15 below describes the cycle penalties that occur when data passes from one type of pipe-mapped instruction to another type of pipe-mapped instruction."

From CVT to STO - changed from 1 to 0
From FMA-2c to STO - changed from 1 to 0
From FMA-5c to STO - changed from 1 to 0
From FMA-6c to STO - changed from 1 to 0
 
Not to be a party pooper, but I still haven't seen a low profile 6670 (not even Sapphire =/), so they might be a great combo (which is good), but they still don't fit in my lil' HTPC case XD

Also, the MoBo price... Did it have the same set of features as the Asus? I mean, I have that MoBo and it has a TON of things. It's one of the few integrated audio chipsets with almost no annoying signal noise. I have it attached to a pair of studio monitors, so any noise is very noticeable.

And yeah, the A8 heats like a stove, nothing to argue there 😛



I really feel bad when I think about that, mal. I'm a FM1 and AM3+ owner, lol. Well, it's more like "owned" at the moment, hahaha.

Cheers!

Cool, then I should have bought one to replace the stove I've got that burns wood; I live out in the country and it's cheaper than propane or electric for heating.
 
Your post also highlights the problem with "older" guys that pretend to be neutral and unbiased. (BTW: a lot of these "unbiased" people favored Intel before Barcelona was released, so the statement of yours quoted above is not truly accurate for many, if not most, of these people.)

What really happens is that if a new AMD chip comes out and it doesn't completely "trash" the competition but merely competes within an acceptable percentage range based on price, then some people make up their minds that it is a total failure and they will refuse to update their opinion regardless of any new data that becomes available.

Often the chip and/or software or drivers mature and then the chip competes even better. But since these people already decided it is a failure... they are too hoodwinked to notice and refuse to modify their outdated opinions. In fact, to make things worse, they'll go to great lengths and flame people on forums for pages based on old facts that may no longer be relevant, and they will ignore newer data or pretend it is not relevant because it doesn't support the opinion they had already created.

And these people consider themselves to be "neutral" and "unbiased". It is actually kind of interesting that they truly do not see that their actions are the definition of being biased. (Actually what is kind of fun to watch is what things they will do to make sure that they can maintain the opinion they want to support at all costs.)

No, people trashed BD because outside of heavily multithreaded apps, it isn't any faster, and in many cases is SLOWER than the previous architecture. Meanwhile, AMD promised it would be significantly faster than it was. And it killed its own competing architecture so everyone would have no choice but to move to BD.

Meanwhile, BD is priced the same as the i5-2500k, which beats it outright in the majority of benchmarks out there.

And no, the "it will improve as software gets more threaded" argument does not work, because as I have already noted, software simply does not and will not scale beyond a few cores.

Finally, nobody here can recommend a processor on the hope it *might* be better a few years down the road.

What you are doing, rather than making an argument for why BD is better than everyone thinks it is, is attacking everyone that's been pointing out the obvious flaws in its design.
 
Tom's benchmarks resulted in exactly what I said they would: favoring Intel whenever you can install a low-to-medium dGPU. The GPU and the Intel CPU both had separate heatsinks and fans, thus they didn't share the same thermal headroom. The APU, on the other hand, must constrain both its CPU and its GPU operations within the same thermal envelope. With low-to-medium desktop dGPUs being as cheap as they are, there is little reason not to have one, and thus an APU on a regular desktop doesn't make sense. The APU may have more theoretical performance (four complete cores) but the Intel unit has better performance per core and a dedicated GPU which is superior to the APU's IGP.

Now, all that being said, Tom's picked the 3870 for a reason: it left the most money to add a superior GPU to the Intel chip. Try that same setup again, but instead use something a bit more realistic.

109.99 A6-3650
http://www.newegg.com/Product/Product.aspx?Item=N82E16819103943

79.99 A6-3500
http://www.newegg.com/Product/Product.aspx?Item=N82E16819103951

Now if we're talking HTPCs and other appliance devices, that's a slightly different story. Although you still shouldn't use a 3870.
 
Tom's benchmarks resulted in exactly what I said they would: favoring Intel whenever you can install a low-to-medium dGPU. The GPU and the Intel CPU both had separate heatsinks and fans, thus they didn't share the same thermal headroom. The APU, on the other hand, must constrain both its CPU and its GPU operations within the same thermal envelope. With low-to-medium desktop dGPUs being as cheap as they are, there is little reason not to have one, and thus an APU on a regular desktop doesn't make sense. The APU may have more theoretical performance (four complete cores) but the Intel unit has better performance per core and a dedicated GPU which is superior to the APU's IGP.

Now, all that being said, Tom's picked the 3870 for a reason: it left the most money to add a superior GPU to the Intel chip. Try that same setup again, but instead use something a bit more realistic.

109.99 A6-3650
http://www.newegg.com/Product/Product.aspx?Item=N82E16819103943

79.99 A6-3500
http://www.newegg.com/Product/Product.aspx?Item=N82E16819103951

Now if we're talking HTPCs and other appliance devices, that's a slightly different story. Although you still shouldn't use a 3870.
They could have put in an Athlon II X2 for $40 and a 6770 for $100 and trashed both the Intel and APU setups in gaming performance. Also they could have chosen the i3 2120 + 6770 vs A8 + 6670 for about the same price. The more you spend on GPU, the more performance you get at the lower price segments. For some reason this wasn't obvious to people.

It's a nice article, but it was just a "well, duh" moment when I read it. More GPU means better gaming performance, especially with the really small gaming library that Tom's uses.
 
With Intel continuing to ratchet up performance, I think AMD is going to have trouble even competing with an Ivy Bridge i3 (3.4 GHz, 55 W) here shortly. Could you imagine if Intel actually released some of the CPUs it could but, for marketing reasons, doesn't? Like a G620 (2C/2T) but at 4.5 GHz.

Actually, what I was getting at was the use of HT to compare to 4 actual cores on the Llano, plus the i3 stock clocks are a bit higher. AT's benchmark utility doesn't have the A8-3870 but instead the 3850, so you'd have to scale according to stock frequency to compare.
 
Lots of stuff

You still haven't explained how you confused 8086 Integer / Logic instructions with SIMD vector instructions.

And you're still demonstrating a lack of knowledge in that the SIMD units don't process the kind of code that doesn't lend itself well to massive parallelism. The whole point of creating SIMD (vs. MIMD) ISAs was to process thousands of data elements simultaneously. That is exactly how GPUs work: tens of thousands of operations at once.

E.g., a 1920x1080 screen is 2,073,600 pixels. Each pixel is comprised of three 8-bit color values and one alpha value, for a total of 32 bits per pixel. To perform an operation such as making the screen brighter, the GPU must add a value to the three 8-bit color values. That ends up being 6,220,800 integer operations.

And even the lowest GPUs can perform this with ease, along with rendering everything else on the screen.
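For what it's worth, here's a rough C sketch of that same brightness add done CPU-side with SSE2 intrinsics (my own toy example with made-up helper names, not anything from the article or AMD's docs), just to show how a single SIMD instruction covers 16 of those byte adds at once:

#include <emmintrin.h>   /* SSE2 */
#include <stdint.h>
#include <stddef.h>

/* Hypothetical helper: brighten an RGBA8888 framebuffer by 'amount'.
 * Scalar version: one saturating add per color channel -> width*height*3 adds,
 * i.e. the ~6.2 million operations mentioned above for 1920x1080. */
static void brighten_scalar(uint8_t *px, size_t n_pixels, uint8_t amount)
{
    for (size_t i = 0; i < n_pixels; ++i) {
        for (int c = 0; c < 3; ++c) {                 /* R, G, B; skip alpha */
            unsigned v = px[i * 4 + c] + amount;
            px[i * 4 + c] = (v > 255) ? 255 : (uint8_t)v;
        }
    }
}

/* SIMD version: one _mm_adds_epu8 performs 16 saturating byte adds,
 * i.e. 4 pixels (12 color channels) per instruction. */
static void brighten_sse2(uint8_t *px, size_t n_pixels, uint8_t amount)
{
    /* per-pixel add pattern: R, G, B get 'amount', alpha gets 0 (RGBA in memory) */
    const __m128i add = _mm_set1_epi32((int)(amount | amount << 8 | amount << 16));
    size_t i = 0;
    for (; i + 4 <= n_pixels; i += 4) {
        __m128i v = _mm_loadu_si128((__m128i *)(px + i * 4));
        _mm_storeu_si128((__m128i *)(px + i * 4), _mm_adds_epu8(v, add));
    }
    if (i < n_pixels)                      /* leftover pixels, if any */
        brighten_scalar(px + i * 4, n_pixels - i, amount);
}

Same idea a GPU applies, just spread across thousands of lanes instead of 16 bytes.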

So tell us again, how do SIMD units not handle parallel operations well?

What I find truly funny is your ranting on integer/logic code: Add/Sub and CMP/JMP instructions, the types that tend to have large sets of serial dependencies. Yet you were attacking AMD's idea to implant a GPU's SIMD units, something that doesn't even process Add/Sub/CMP/JMP.

To implant this GPU they need to first decouple the SIMD unit from the rest of the processing core. And in doing so they decided that it would save die space to add another integer unit; after all, they're really small and do most of the heavy lifting for the CPU. While doing this they decided to use a shared L2 cache, and this is where I think they went wrong. Shared caches have higher latencies than dedicated caches due to coherency checks and locks on segments.

Anyhow, the ultimate idea is to have four to sixteen integer units coupled with a GPU's SIMD array. Instead of two 128-bit fused FPUs, you have 24+ 128-bit fused FPUs. These fused FPUs can be used for regular SIMD instructions (AVX / FMAC / SSE / XOP) or they can be used for rendering textures and processing video data, or any combination thereof.

Actually ... I've already answered all your questions in various posts. And the last one is the funniest. Assuming BD was finished in early 2011 and heading to production, they've had 10~12 additional months to continue to work on it. With PD going to final silicon soon, that will signal when this phase of their design cycle is finished. I said it wasn't implausible for them to have worked out the cache latency issues and branch prediction issues during that time. Whether they have or have not is something nobody here actually knows, and we'll have to wait and see. My prediction is some small improvements but nothing major.

Your comparison of BD to a P4 is very telling of your knowledge of uArch and ISA. Back foul field, buzzwords will avail you not.
 
For people talking about sockets, please realize that Llano was basically "Phenom III": a die-shrunk 32nm Phenom with a larger L2 cache. AMD decided to remove the L3 and put on a GPU, probably because they intended to use BD as their "high end" part and Llano as the "low end" part, which makes their pricing options look insanely weird.

I would really be interested if they would have made a Llano without the GPU but with some L3 and two more cores (similar to a 32nm Thuban). Oh well, guess we'll never find out.
 
AMD said a lot of things about BD performance, few of which turned out to be true. I'm doing exactly what I did with BD: looking at the design decisions made, and projecting performance SOLELY based on that.

Will PD have an improved IPC? Probably, due to small optimizations. Anything major? Not really. So when AMD says "IPC has improved", without knowing HOW and WHY, I take a very pessimistic view on how good those IPC improvements will be.
Well, you simply don't know enough about the design to claim the cache can't be fixed. You also don't have any idea what IPC improvements are being worked on, so being pessimistic doesn't offer anything. This all boils down to you not knowing anything about what you are talking about, yet making very large claims.

Everyone knows Bulldozer didn't perform well. People just think they know why when they have no idea. They seem to think they know more about the chip than the engineers who designed it.

As for your post about multicore scaling of software: software will support more cores; we are nowhere near the point where most algorithms lose much efficiency with more cores. When we get to 128 cores, you might have a point. More cores allow more things to be done simultaneously. When we get to the point of so many cores that efficiency fails, instead of rendering one thing, we can do two things at once, with half the cores on one thing and half on another.

Say you want to run a small server, play a game, transcode an HD video in real time and stream it, and have some small applications in the background, all at the same time; you can do all these things on different cores. Single-core performance will only get you so far. CPUs will only become more multicore, and software will move as it can and as it is needed. This might be an extreme case of core use, but given how the PC already is, most people don't need any increase in single-threaded CPU performance or more cores. More cores can be used; it's just a matter of time before people use them. Why have two PCs when it can all be done with one CPU, given the software and other hardware support?
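Just to illustrate that point, here's a toy C/pthreads sketch (function names are made up, not any real program): each independent job gets its own thread, and the OS is free to schedule them on separate cores.

#include <pthread.h>
#include <stdio.h>

/* Stub jobs standing in for the real server / game / transcode work. */
static void *serve(void *arg)     { (void)arg; puts("serving requests");  return NULL; }
static void *play(void *arg)      { (void)arg; puts("running game loop"); return NULL; }
static void *transcode(void *arg) { (void)arg; puts("transcoding video"); return NULL; }

int main(void)
{
    pthread_t t[3];
    void *(*jobs[3])(void *) = { serve, play, transcode };

    for (int i = 0; i < 3; ++i)            /* spawn one thread per job */
        pthread_create(&t[i], NULL, jobs[i], NULL);
    for (int i = 0; i < 3; ++i)
        pthread_join(t[i], NULL);
    return 0;
}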

Just remember, Bill Gates once said we would never use more than 640KB of RAM; look where that got us. Software is always evolving, just like hardware. It may not be the way you like to program, but it will eventually be used somehow.
 
No, people trashed BD because outside of heavily multithreaded apps, it isn't any faster, and in many cases is SLOWER than the previous architecture. Meanwhile, AMD promised it would be significantly faster than it was. And it killed its own competing architecture so everyone would have no choice but to move to BD.

Meanwhile, BD is priced the same as the i5-2500k, which beats it outright in the majority of benchmarks out there.

And no, the "it will improve as software gets more threaded" argument does not work, because as I have already noted, software simply does not and will not scale beyond a few cores.

Finally, nobody here can recommend a processor on the hope it *might* be better a few years down the road.

What you are doing, rather than making an argument for why BD is better than everyone thinks it is, is attacking everyone that's been pointing out the obvious flaws in its design.

and in many cases is SLOWER than the previous architecture ---- A few cases. The fact that it was slower in some was completely blown out of proportion. And technically speaking, since you're looking at its design, it's mostly a 4-core (module) CPU with some extra parts. Of course the X6 has the potential of being faster; it has 6x of every part, not just 4 or 8.

[image: AMD FX-8150 benchmark chart]


software simply does not and will not scale beyond a few cores. --- Many will disagree with this statement. Times change, and if you want software to change, you have to fundamentally change it. Going back and forth editing existing code is not a fundamental change; that's putting a band-aid on the problem. Software will catch up eventually; hardware has ALWAYS come first. After all, you can't program for what's not available; it would be impossible to test. When that time comes, Intel will follow AMD with more cores, but for now they are letting AMD be the guinea pigs.

Yes, there are flaws in BD's design, but just like any design, it takes time to work out the bugs. Considering it takes 3 months to re-spin a product, constantly changing it in hopes of one revision being perfect for release, you have to compromise somewhere and just release it.

But this is often not talked about. BD is not 100% flawed; it very much has potential even in today's systems and is merely a building block for BD-E (codenames: Steamroller and Excavator).
 
Your post also highlights the problem with "older" guys that pretend to be neutral and unbiased. (BTW: a lot of these "unbiased" people favored Intel before Barcelona was released, so the statement of yours quoted above is not truly accurate for many, if not most, of these people.)

What really happens is that if a new AMD chip comes out and it doesn't completely "trash" the competition but merely competes within an acceptable percentage range based on price, then some people make up their minds that it is a total failure and they will refuse to update their opinion regardless of any new data that becomes available.

I don't know of anybody posting in this thread who ever said BD was a "total failure". Disappointment, underwhelming, mediocre etc yes, after what JF-AMD and especially Baron Matrix said about it, raising expectations which far exceeded what AMD delivered. There are several posters here that went out and bought AM3+ boards to get ready for BD, and are now stuck with using P2's in them as BD wasn't competing "within an acceptable percentage range" based on what AMD was charging for it. That's the whole problem right there - overpromise and underdeliver, a sure recipe for driving your customers away. And I haven't seen any new data on BD to counteract that. The 2500K is still cheaper than the 8150 and performs better in the large majority of benchmarks and games: http://www.anandtech.com/bench/Product/434?vs=288

Often the chip and/or software or drivers mature and then the chip competes even better. But since these people already decided it is a failure... they are too hoodwinked to notice and refuse to modify their outdated opinions. In fact, to make things worse, they'll go to great lengths and flame people on forums for pages based on old facts that may no longer be relevant, and they will ignore newer data or pretend it is not relevant because it doesn't support the opinion they had already created.

OK, there have been several BD update articles here on THG, including one comparing its performance with the Win7 patches. Helped maybe 1-2% IIRC. Just how long do you expect us to wait for the software and/or drivers to mature for it? At this rate, Intel will be several generations beyond Haswell by the time BD gets to what the hype said it would be at release.

And these people consider themselves to be "neutral" and "unbiased". It is actually kind of interesting that they truly do not see that their actions are the definition of being biased. (Actually what is kind of fun to watch is what things they will do to make sure that they can maintain the opinion they want to support at all costs.)

You mean like your "nonbias" as demonstrated by your AMDZone insults to Don Woligroski (aka "Cleeve") who wrote the THG humor article that poked fun at BD as one of the top 16 "epic failures"?? 😛.
 
When we talk about software scaling along with more cores, we need to first discuss the nature of the instructions being used. Some types of code are naturally more serialized than others.

Take the bubble sort method, something that's been around since the dawn of ICs. Bubble sorts cannot be parallelized very well because each operation depends on the result of the previous operation. A CPU with 1 core and a CPU with 1024 cores will do the bubble sort in exactly the same time, assuming all other conditions are equal. The CPU with 1024 cores has three orders of magnitude more power than the CPU with 1 core, yet that capability isn't really expressed.
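To make the serial-dependence point concrete, here's a plain textbook bubble sort in C (a generic example, nothing from AMD's docs): every compare/swap reads the array state left by the previous one, so there's no clean way to hand the work to extra cores.

#include <stdio.h>
#include <stddef.h>

/* Classic bubble sort: every compare/swap depends on all earlier swaps,
 * so extra cores sit idle no matter how many you have. */
static void bubble_sort(int *a, size_t n)
{
    for (size_t pass = 0; pass + 1 < n; ++pass)
        for (size_t i = 0; i + 1 < n - pass; ++i)
            if (a[i] > a[i + 1]) {          /* depends on prior swaps */
                int tmp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = tmp;
            }
}

int main(void)
{
    int data[] = { 5, 1, 4, 2, 8 };
    bubble_sort(data, sizeof data / sizeof data[0]);
    for (size_t i = 0; i < 5; ++i)
        printf("%d ", data[i]);             /* prints: 1 2 4 5 8 */
    return 0;
}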

Most of the code processed by today's programs is written to handle one thing at a time. They're doing all their collision detection, physics, and graphics loading in sequence. It's a giant loop that checks a bunch of things in serial and signals a sub-process to handle something while the main loop waits for the return to continue onward. To get out of this requires the coders to think radically differently and start to program each component to run separately from everything else. This raises some big sync issues that need to be resolved.
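To picture the "giant loop" I mean, here's a toy C sketch (stage names are made up): each stage blocks the next, so one core does all the per-frame work while the rest mostly sit idle.

#include <stdbool.h>
#include <stdio.h>

/* Stub stages standing in for the real per-frame work. */
static bool game_running(void)      { static int frames = 3; return frames-- > 0; }
static void read_input(void)        { puts("input"); }
static void update_physics(void)    { puts("physics"); }
static void detect_collisions(void) { puts("collisions"); }
static void render_frame(void)      { puts("render"); }

/* The serial main loop: each stage waits for the previous one to finish. */
int main(void)
{
    while (game_running()) {
        read_input();
        update_physics();
        detect_collisions();
        render_frame();
    }
    return 0;
}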

Ultimately, computer code is just the virtual expression of the coders who create it. Most humans tend to think on a single train of thought at once; thinking about something else requires that they pause their first train and task-switch inside their brain. Before we see truly parallel code, we need to first teach ourselves how to think on multiple trains at once.
 
and in many cases is SLOWER than the previous architecture ---- A few cases. The fact that it was slower in some was completely blown out of proportion. And technically speaking, since you're looking at its design, it's mostly a 4-core (module) CPU with some extra parts. Of course the X6 has the potential of being faster; it has 6x of every part, not just 4 or 8.

Let's not forget those CMT cores are still 80% or so efficient, so you can't just remove them from the discussion. And doing a quick eyeball of the image you posted, you have a performance decrease in about 40% of the tested apps, despite having more execution resources AND a higher clock speed.

software simply does not and will not scale beyond a few cores. --- Many will disagree with this statement. Times change, and if you want software to change, you have to fundamentally change it. Going back and forth editing existing code is not a fundamental change; that's putting a band-aid on the problem. Software will catch up eventually,

No, it won't. We do things in a linear nature: we add two numbers, then we do something based on the result, etc. As a whole, you will find yourself limited by how much of a program you can make parallel, and due to the overhead of working with multiple threads, you will find that amount to be a very limited part of the program.

The minute you bring locks, mutexes, and semaphores into the discussion, you are introducing bottlenecks into the system that limit any scalability you might have had. And don't even get me started on I/O bottlenecks. And never mind the OS running in the background that may decide to run another thread instead for one reason or another...

Granted, there are some areas you can parallelize well; most iteration loops are good targets of opportunity. But as far as program structure goes, there is a limit to how much using extra cores will give you, simply because we do things in sequence. Hence why the applications that do scale well typically operate on very large datasets (i.e. imaging, rasterization, encoding), and those are already being rapidly moved to the GPU, which is designed around working with such datasets.
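To put a rough number on that limit, here's Amdahl's law as a quick C calculation (my own illustration, not something posted in this thread): if only a fraction p of the work can be spread across cores, speedup tops out at 1 / ((1 - p) + p/n), and it flattens out fast.

#include <stdio.h>

/* Amdahl's law: upper bound on speedup when a fraction p of the work
 * parallelizes perfectly and the rest stays serial. */
static double amdahl(double p, int cores)
{
    return 1.0 / ((1.0 - p) + p / cores);
}

int main(void)
{
    /* Even with 80% of the program parallel, 16 cores only buy ~4x. */
    const int core_counts[] = { 2, 4, 8, 16, 64 };
    for (int i = 0; i < 5; ++i)
        printf("p=0.80, %2d cores -> %.2fx\n",
               core_counts[i], amdahl(0.80, core_counts[i]));
    return 0;
}

Even with 80% of a program parallel, 64 cores buy you less than 5x.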
 
I don't know of anybody posting in this thread who ever said BD was a "total failure". Disappointment, underwhelming, mediocre etc yes, after what JF-AMD and especially Baron Matrix said about it, raising expectations which far exceeded what AMD delivered. There are several posters here that went out and bought AM3+ boards to get ready for BD, and are now stuck with using P2's in them as BD wasn't competing "within an acceptable percentage range" based on what AMD was charging for it. That's the whole problem right there - overpromise and underdeliver, a sure recipe for driving your customers away. And I haven't seen any new data on BD to counteract that. The 2500K is still cheaper than the 8150 and performs better in the large majority of benchmarks and games: http://www.anandtech.com/bench/Product/434?vs=288

Often the chip and/or software or drivers mature and then the chip competes even better. But since these people already decided it is a failure... they are too hoodwinked to notice and refuse to modify their outdated opinions. In fact, to make things worse, they'll go to great lengths and flame people on forums for pages based on old facts that may no longer be relevant, and they will ignore newer data or pretend it is not relevant because it doesn't support the opinion they had already created.

OK, there have been several BD update articles here on THG, including one comparing its performance with the Win7 patches. Helped maybe 1-2% IIRC. Just how long do you expect us to wait for the software and/or drivers to mature for it? At this rate, Intel will be several generations beyond Haswell by the time BD gets to what the hype said it would be at release.

And these people consider themselves to be "neutral" and "unbiased". It is actually kind of interesting that they truly do not see that their actions are the definition of being biased. (Actually what is kind of fun to watch is what things they will do to make sure that they can maintain the opinion they want to support at all costs.)

You mean like your "nonbias" as demonstrated by your AMDZone insults to Don Woligroski (aka "Cleeve") who wrote the THG humor article that poked fun at BD as one of the top 16 "epic failures"?? 😛.

I am one of those posters that bought a 990FX board last September. I was hopeful that one day there would be a good upgrade over the P2 X4 that I've got, but that day might never come. BD is junk and wastes more power than anything I've got. Even the 65nm GT200A only tops out at 240 W max, while the FX-8150 just sails past that mark when overclocked with all 4 modules enabled :s
Even Intel's largest monolithic CPU, the Itanium 2 93xx with its titanic 698.75mm2 die, only uses 185 W.
 
They could have put in an Athlon II X2 for $40 and a 6770 for $100 and trashed both the Intel and APU setups in gaming performance.

Got any links? Don't have any video card specified, but here's the CPU comparison of the cheapest Athlon II X2 vs the G620: http://www.anandtech.com/bench/Product/114?vs=406

Also they could have chosen the i3 2120 + 6770 vs A8 + 6670 for about the same price.

Which A8 - the one reviewed? If so the price would have been $140 + $100 = $240. The i3 is about $40 more than the G620 IIRC, for a total price of $180. The whole point of the article was to compare identically priced components to see which combo had the most bang for the buck.

The more you spend on GPU, the more performance you get at the lower price segments. For some reason this wasn't obvious to people.

Up until a weak CPU bottlenecks the GPU. But otherwise everybody here probably has agreed that a strong GPU is preferable over a weaker one, from birth if not sooner 😛..
 
I don't know of anybody posting in this thread who ever said BD was a "total failure". Disappointment, underwhelming, mediocre etc

Thanks for making me laugh... I nearly spit coffee out all over the place. Let me paraphrase your sentence: "Nobody here specifically said it was a failure even though it is a failure".


There are several posters here that went out and bought AM3+ boards to get ready for BD, and are now stuck with using P2's in them as BD wasn't competing "within an acceptable percentage range" based on what AMD was charging for it.

If those people didn't already have a PhII, then it would be logical for them to buy a BD if they had already planned on doing so, since the difference in performance isn't as bad as people keep pretending.


OK, there have been several BD update articles here on THG, including one comparing its performance with the Win7 patches. Helped maybe 1-2% IIRC.

Perhaps using the same old tired benchmarks might not be the wisest thing in the world to do? I could see someone like you in another few years saying: "Stop looking at those new applications that utilize all the cores... nobody cares about them. Look at how it runs Crysis and Cinebench R10.... Intel totally pwns AMD. Woot. Woot."


You mean like your "nonbias" as demonstrated by your AMDZone insults to Don Woligroski (aka "Cleeve") who wrote the THG humor article that poked fun at BD as one of the top 16 "epic failures"?? 😛.

You do realize that I have not claimed to be non-biased for years? (If I ever made that claim.)

But my god, that guy you mentioned is such a troll. To paraphrase him: "I've written a completely biased and immature article and presented it on what is supposed to be a professional site, and ignored a bunch of people that pointed out things that were questionable with it by using the lame excuse that "we have to agree to disagree" to dismiss their concerns as being trivial. But I am still totally neutral and unbiased towards AMD since I like their video cards and owned an AMD CPU a while back. So now I've come here to gloat about how immature I can actually be." His need to troll about his awful and unprofessional attitude is actually a total and complete demonstration of how to be an "EPIC FAIL".
 
I am one of those posters that bought a 990FX board last September. I was hopeful that one day there would be a good upgrade over the P2 X4 that I've got, but that day might never come. BD is junk and wastes more power than anything I've got. Even the 65nm GT200A only tops out at 240 W max, while the FX-8150 just sails past that mark when overclocked with all 4 modules enabled :s
Even Intel's largest monolithic CPU, the Itanium 2 93xx with its titanic 698.75mm2 die, only uses 185 W.

OK, well you just killed my argument that nobody here calls BD "junk" or a failure 😛.. :kaola:

Seriously though, while it may not be suitable for your needs, it does seem to sell out on Newegg. Of course, that could be Malmental buying them for his 990FX board 😀..

I've seen several other posters who got ambushed by the BD hype and have publicly regretted buying an AM3+ board when they had a perfectly good AM2 or AM3 board already. Mostly gamers I think.

And I don't really blame JF-AMD for the misinformation either - according to his last post ever on Anandtech's forums, he was told by the AMD engineers that IPC would be higher than on Phenom 2. I guess the same engineers who miscounted the number of transistors by 600 million 😀..
 