AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet to be released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post questions or information relevant to the topic; anything else will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
Got any links? There's no video card specified, but here's the CPU comparison of the cheapest Athlon II X2 vs the G620: http://www.anandtech.com/bench/Product/114?vs=406

Also they could have chosen the i3 2120 + 6770 vs A8 + 6670 for about the same price.

Which A8 - the one reviewed? If so the price would have been $140 + $100 = $240. The i3 is about $40 more than the G620 IIRC, for a total price of $180. The whole point of the article was to compare identically priced components to see which combo had the most bang for the buck.

The more you spend on GPU, the more performance you get at the lower price segments. For some reason this wasn't obvious to people.

Up until a weak CPU bottlenecks the GPU. But otherwise everybody here probably has agreed that a strong GPU is preferable over a weaker one, from birth if not sooner 😛..
How did you spend $100 on a 6670 when Tom's bought it for $70? The i3 2120 is $130 and the 6770 is $100 = $230 total. The A8 is $140 and the 6670 is $70, making $210 total.

http://3dmark.com/search?resultTypeId=232&linkedDisplayAdapters=1&cpuModelId=1315&chipsetId=688

http://3dmark.com/search?resultTypeId=232&linkedDisplayAdapters=1&cpuModelId=1090&chipsetId=570
 
No, people trashed BD because outside of heavily multithreaded apps, it isn't any faster, and in many cases is SLOWER than the previous architecture. Meanwhile, AMD promised it would be significantly faster than it was. And it killed its own competing architecture so everyone would have no choice but to move to BD.

Meanwhile, BD is priced the same as the i5-2500k, which beats it outright in the majority of benchmarks out there.

And no, the "it will improve as software gets more threaded" argument does not work, because as I have already noted, software simply does not and will not scale beyond a few cores.

Finally, no one here can recommend a processor on the hope it *might* be better a few years down the road.

What you are doing, rather than making an argument for why BD is better than everyone thinks it is, is instead attacking everyone that's been pointing out the obvious flaws in its design.

It might be slightly slower on some things than the previous architecture, but it is not as slow as some of the "unbiased" people keep attempting to pretend.

It competes just fine with the i5-2500k. Somebody that has to pretend that it doesn't compete because they only look at specific applications and games is rather pathetic.

You are correct when you say that the "it will improve as software gets more threaded" argument won't mean anything, because many of the "unbiased" people around the internet will never allow newer benchmarks or applications to be presented. Actually that is true NOW. All you have to do is watch what happens when somebody presents a new benchmark. It doesn't matter if the benchmark is from an obscure site or if it is from a major site; it will be dismissed as non-relevant if it doesn't support the general online opinion.

And I am not attacking people that point out the obvious... I am questioning their definition of what they consider obvious.
 
Thanks for making me laugh... I needed to spit coffee out all over the place. Let me paraphrase your sentence: "Nobody here specifically said it was a failure even though it is a failure".

Hopefully decaf - at your age, you probably shouldn't be ingesting so much caffeine 😛.. Anyway, in my dictionary, "failure" means "1. lack of success: a lack of success in or at something 2. something less than that required: something that falls short of what is required or expected." While I think BD does fall well short of what was "expected", those expectations were hyped up. Is BD adequate for gaming? Yes. Is BD adequate for rendering/transcoding/compression/productivity stuff/yadda-yadda? Yes. Does it do better than or provide more bang for the buck than the competition? Mostly no.

If those people didn't already have a PhII then it would be logical for them to buy a BD if they had already planned on doing so, since the difference in performance isn't as bad as people keep pretending.

So you're saying it is logical for them to spend $$ on upgrading to an AM3+ mobo for "isn't as bad as people keep pretending" performance difference?? Hmm, wonder why AMD didn't think of that for their marketing slides 😛..

Perhaps using the same old tired benchmarks might not be the wisest thing in the world to do? I could see someone like you in another few years saying: "Stop looking at those new applications that utilize all the cores... nobody cares about them. Look at how it runs Crysis and Cinebench R10.... Intel totally pwns AMD. Woot. Woot."

Keith, I've seen the same-old, same-old complaints from you about these "tired old benchmarks". Howzabout posting some new ones then, and I don't mean some Linux server benchmarks since most of us here are gamers & ordinary DT users.

You do realize that I have not claimed to be non-biased for years? (If I ever made that claim.)

OK, my mistake - sorry. But then that means you're the pot calling the kettle black, right??

But my god that guy you mentioned is such a troll. To paraphrase him: "I've written a completely biased and immature article and presented it on what is supposed to be a professional site and ignored a bunch of people that pointed out things that were questionable with it by using the lame excuse that "we have to agree to disagree" to dismiss their concerns as being trivial. But I am still totally neutral and unbiased towards AMD since I like their video cards and owned an AMD CPU a while back. So now I've come here to gloat about how immature I can actually be." His need to troll about his awful and unprofessional attitude is actually a total and complete demonstration of how to be "EPIC FAIL".

While I admit I didn't find the article outrageously funny, I did see it as a humor article, not to be taken seriously. IIRC Cleeve said the same thing while he was on AMDZone. In fact I think that's the reason why he went there to post in the first place - to dispel any notions that it was a serious article.

Also IIRC he put Intel's Itanium up on that list (which had more Intel stuff on it than any other company's), and yet up until recently Itanium was generating $4BN in revenue each year - i.e., about the same as all of AMD. I didn't see anybody getting their shorts in a twist over that..
 
and in many cases is SLOWER than the previous architecture ---- A few cases. The fact that it was slower in some was completely blown out of proportion. And technically speaking, since you're looking at its design, it's mostly a 4-core (module) CPU with some extra parts. Of course the X6 has the potential of being faster; it has 6x of every part, not just 4 or 8.

Let's not forget those CMT cores are still 80% or so efficient, so you can't just remove them from the discussion. And doing a quick eyeball of the image you posted, you have a performance decrease in about 40% of the tested apps, despite having more execution resources AND a higher clock speed.
So which is it: are you looking at the design aspect only, or the marketing performance only (i.e. calling it an 8-core CPU)? Or does the situation change only to fit your argument?

I'm doing exactly what I did with BD: Looking at the design decisions made, and projecting performance SOLELY based on that.

BTW, the PII 980 has the higher clock speed and has a gaming advantage of 1.4% with a clock advantage of 1%, so in games did BD lose 0.4% performance? Absolutely disastrous; let's blow it out of proportion and call that 0.4% 40% instead. Or better yet, it loses 40% of the apps to the previous gen [strike]by 0.4%[/strike]. We aren't going to mention the 0.4% because it sounds better that way.
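For what it's worth, here is a quick back-of-the-envelope check of that 0.4% figure, taking the 1.4% gaming lead and 1% clock lead quoted above at face value (purely illustrative numbers, nothing measured here):

```python
# Rough sanity check of the "0.4%" claim, assuming the PII 980 leads the
# FX by 1.4% in games while running a 1% higher clock (figures from the
# post above, taken at face value).
pii_gaming_lead = 1.014   # PII 980 gaming performance relative to FX
pii_clock_lead = 1.01     # PII 980 clock speed relative to FX

# Clock-normalised (per-GHz) gaming deficit of the FX part:
per_clock_gap = pii_gaming_lead / pii_clock_lead - 1
print(f"Per-clock gaming deficit: {per_clock_gap:.2%}")  # ~0.40%
```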
 
So which is it: are you looking at the design aspect only, or the marketing performance only (i.e. calling it an 8-core CPU)? Or does the situation change only to fit your argument?

I'm doing exactly what I did with BD: Looking at the design decisions made, and projecting performance SOLELY based on that.

BTW, the PII 980 has the higher clock speed and has a gaming advantage of 1.4% with a clock advantage of 1%, so in games did BD lose 0.4% performance? Absolutely disastrous; let's blow it out of proportion and call that 0.4% 40% instead. Or better yet, it loses 40% of the apps to the previous gen [strike]by 0.4%[/strike]. We aren't going to mention the 0.4% because it sounds better that way.

It's typical of haters. Twisting data one way or the other to fit their preconceived idea.

Considering AMD used automated tools to design BD rather than designing its components by hand (what Intel does), it turned out decent. Now they really have to sit down and refine the whole thing; it's going to be a few revisions before we see the potential. The key part that people are glossing over is the scalability of going to a modular design. This is something both IBM and Sun have been doing for over a decade now. RISC ISAs do lend themselves better to scalability: no need for an instruction decoder, and your schedulers are vastly simpler. That AMD is even attempting that with an x86 design shows they got ballz, or their chief engineer is insane, possibly both.
 
OK, well you just killed my argument that nobody here calls BD "junk" or a failure 😛.. :kaola:

Seriously though, while it may not be suitable for your needs, it does seem to sell out on Newegg. Of course, that could be Malmental buying them for his 990FX board 😀..

I've seen several other posters who got ambushed by the BD hype and have publicly regretted buying an AM3+ board when they had a perfectly good AM2 or AM3 board already. Mostly gamers I think.

And I don't really blame JF-AMD for the misinformation either - according to his last post ever on Anandtech's forums, he was told by the AMD engineers that IPC would be higher than on Phenom 2. I guess the same engineers who miscounted the number of transistors by 600 million 😀..

Well, if there were any good things about it, it's that the new board has twice as many SATA ports (8 instead of only 4), has 16x/16x/4x SLI/CrossFire, and isn't the Gigabyte AM2+ board I had before it. When the day finally came and the first reviews were posted (THG first) I was very much disappointed. Then I looked at the power figures and wasn't thrilled. With as many rigs as I've built and worked on, and with what others have posted over the years, even a novice can say BD is crap. Sure they sell well, as there is very little PII X4/X6 available atm, and more MHz/GHz to most people means better. That 8-threaded power-sucking hog beast of a CPU can't even compete with what is generations-old tech, let alone first-gen i5/i7. I wouldn't be surprised if someone managed to get an E version of Phenom 1 over 3GHz and found that it beats the FX 4100/6100 at their stock clocks.

FTL (For The LoL)

 
While I admit I didn't find the article outrageously funny, I did see it as a humor article, not to be taken seriously. IIRC Cleeve said the same thing while he was on AMDZone. In fact I think that's the reason why he went there to post in the first place - to dispel any notions that it was a serious article.

Also IIRC he put Intel's Itanium up on that list (which had more Intel stuff on it than any other company's), and yet up until recently Itanium was generating $4BN in revenue each year - i.e., about the same as all of AMD. I didn't see anybody getting their shorts in a twist over that..
I think Keith's argument was pretty much that AMD has never had any failures, and even the ones it has had should never be brought to the public's attention; let's just focus on Intel's failures only. :lol:

And you are right that you don't see anyone crying about the Intel listed failures in Don/Cleeve's article.
 
I hope what happened to Cyrix with the MII doesn't happen to AMD again with BD.

"The 6x86 successor—MII—was late to market, and couldn't scale well in clock speed with the manufacturing processes used at the time. Similar to the AMD K5, the Cyrix 6x86 was a design far more focused on integer per-clock performance than clock scalability, something that proved to be a strategic mistake. Therefore, despite being very fast clock by clock, the 6x86 and MII were forced to compete at the low-end of the market as AMD K6 and Intel P6 Pentium II were always ahead on clock speed. The 6x86's and MII's old generation "486 class" floating point unit combined with an integer section that was at best on-par with the newer P6 and K6 chips meant that Cyrix could no longer compete in performance."

http://en.wikipedia.org/wiki/Cyrix_6x86

http://en.wikipedia.org/wiki/Cyrix
 
Well, well... Looks like my phrase, the first day the benchies arrived, still stands: "it's a bloody sidegrade".

I do encode stuff for my GS2 all the time (where the FX8 might shine), but I let VirtualDub run in batch mode at night, so time is not such a concern for me. In games it trades blows here and there, and power consumption seems kinda equal.

Just remember to keep the arguments clean, folks. No name calling and bashing.

Cheers!
 
I think Keith's argument was pretty much that AMD has never had any failures, and even the ones it has had should never be brought to the public's attention; let's just focus on Intel's failures only. :lol:

Sorry to disappoint you but I wouldn't make such an argument or even pretend that it was true.

You also probably haven't noticed that I rarely ever focus on Intel's failures; for the same reason I don't generally post in threads that pertain to Intel chips. I don't feel the need to go into those threads since the subject is rarely interesting. (That makes me wonder why the reverse is not true for Intel fans.)

I also have no problems with focusing attention on any of AMD's failures. But when something isn't really a failure it gets old hearing the same people continually pretend that something is a failure by using the same tired arguments. Some people seem to have a hard time understanding that there is a difference between something not performing as well as it was originally hoped and it being an actual failure. Those people usually seem to thrive on the failure. But what is really bad is when people that never planned on buying a certain product somehow become major arbiters of whether the product is a success or a failure. (It's bad enough when people that either bought a product or planned buying it complain about it; it is just worse when people that have no experience with a product sit around and complain.)

 
not to real techs... :non:
I need benches included..

but in business sense, OK..

I'm pretty sure P4 outsold Athlons. I wouldn't estimate by how much, but I'm pretty sure P4 sold more than Athlons did. Anyone care to prove me wrong or support my point? Either works, lol.

And if we start measuring a product "business" wise to call it successful or not, we better go to a financial site where tech specs are just a minor mix/add-on of the whole picture, lol. Here we care about substance! (sometimes, lol)

Cheers!
 
that was funny Mal

Of course benches matter to us
but just realize to a corporation sales figures matter
look at the Athlon XP vs Pentium 4 or Athlon64 X2 vs Pentium D days
sure the AMDs were superior
but I don't think Intel cared since the Pentiums sold well

I am rooting for AMD
I am not a fan boy
I was happy with my Core2duo 3ghz also
it is about the best performance I can afford
whether AMD or Intel
I currently have an AM3 and an AM2 rig right now
will stick with my AM3 and go Thuban BE or 960T used eventually
but also if I was building a rig from scratch even a budget rig
I would lean towards 1155

but I really hope that Piledriver/Excavator etc will raise the bar
I am mad at AMD for EOLing so many PHIIs
I thought that was premature and was just done so the PHII didn't compete with FX
which from a business standpoint was the smart move financially
who would've bought an FX 4100 when they could've got a PHII 955 BE for about the same money
but definitely showed that AMD doesn't care about the custom builder
 
I'm pretty sure P4 outsold Athlons. I wouldn't estimate by how much, but I'm pretty sure P4 sold more than Athlons did. Anyone care to prove me wrong or support my point? Either works, lol.

And if we start measuring a product "business" wise to call it successful or not, we better go to a financial site where tech specs are just a minor mix/add-on of the whole picture, lol. Here we care about substance! (sometimes, lol)

Cheers!
Of course the P4 outsold Athlons. The real question is, would it have if Intel wasn't bribing OEM vendors and strangling supply to mom-and-pop shops that sold AMD products?

In the long run it cost Intel, what, $3B+ in fines to stay ahead of the game while still netting $30B+; that's a smart marketing move even if it did cost them some bad publicity. It's not like Intel can be thrown in jail, so why not break the law if it devastates your competition beyond repair?
 
Haswell (1150, due next year) is DDR3 only, and I doubt they will be replacing 2011 that fast just to be able to make use of DDR4; I'm guessing it might be 2015 before DDR4 becomes mainstream. By then AM3+ and 1155 will be long dead, so what is going to use it first I don't know; maybe AMD will get there, or Intel in its high-end server platform, but dumping 1150 (Haswell) and 2011 so fast won't sit well with most.

They don't have to throw the socket away. They could just do the same as their 4 series chipsets that had both DDR2 and DDR3 controllers in them, or like the Phenom IIs that also had both controllers in them, and offer both boards. Then companies could offer hybrid DDR3/DDR4 boards to allow an upgrade path.

Other than the "clock mesh" announcement recently I don't think there have been any details yet.

I thought there was going to be a new FM2 socket for Trinity but that appears to have been scrapped for now.

It still is FM2 for Trinity; that hasn't changed yet. I can see why: same thing as AM3+ only with BD, a different arch that has a vastly different pin layout. There were people who thought they had AM3 mobos that supported BD, but they either did not, or had an AM3+ socket on an 800 series chipset, even some 700 series chipsets, as the 900 series chipset was the same as the 800 series.

I stated it before, if it was a black socket it supported BD, if it was white it did not.

Not to be a party pooper, but I still haven't seen a low profile 6670 (not even Sapphire =/), so they might be a great combo (which is good), but they still don't fit in my lil' HTPC case XD

Also, the MoBo price... Did it have the same set of features as the Asus? I mean, I have that MoBo and it has a TON of things. It's one of the few integrated audio chipsets with almost no annoying signal noise. I have it attached to a pair of studio monitors, so it is very noticeable.

And yeah, the A8 heats like a stove, nothing to argue there 😛



I really feel bad when I think about that, mal. I'm an FM1 and AM3+ owner, lol. Well, it's more like "owned" at the moment, hahaha.

Cheers!

http://www.newegg.com/Product/Product.aspx?Item=N82E16814161397

There ya go. Would be nice and comfy in an HTPC, it looks like.

No, people trashed BD because outside of heavily multithreaded apps, it isn't any faster, and in many cases is SLOWER than the previous architecture. Meanwhile, AMD promised it would be significantly faster than it was. And it killed its own competing architecture so everyone would have no choice but to move to BD.

Meanwhile, BD is priced the same as the i5-2500k, which beats it outright in the majority of benchmarks out there.

And no, the "it will improve as software gets more threaded" argument does not work, because as I have already noted, software simply does not and will not scale beyond a few cores.

Finally, no one here can recommend a processor on the hope it *might* be better a few years down the road.

What you are doing, rather than making an argument for why BD is better than everyone thinks it is, is instead attacking everyone that's been pointing out the obvious flaws in its design.

Don't even try. keithlm tends to always try to turn it against you. Best to just ignore. When Barcelona failed to beat Kentsfield per clock and per core with the "superior" monolithic design, it became a matter of it "feeling smoother".

Just do as I will this time around, ignore.

Tom's benchmarks resulted in exactly what I said they would: favoring Intel whenever you can install a low-to-medium dGPU. The GPU and the Intel chip both had separate heat sinks and fans, thus they didn't share the same thermal headroom. The APU, on the other hand, must constrain both its CPU and its GPU operations within the same thermal envelope. With low-to-medium desktop dGPUs being as cheap as they are, there is little reason not to have one, and thus an APU on a regular desktop doesn't make sense. The APU may have more theoretical performance (four complete cores), but the Intel unit has better performance per core and a dedicated GPU which is superior to the APU's IGP.

Now, that all being said, Tom's picked the 3870 for a reason: it gave them the highest amount of money to add a superior GPU to the Intel chip. Try that same setup again, but instead use something a bit more realistic.

109.99 A6-3650
http://www.newegg.com/Product/Product.aspx?Item=N82E16819103943

79.99 A6-3500
http://www.newegg.com/Product/Product.aspx?Item=N82E16819103951

Now if we're talking HTPCs and other appliance devices, that's a slightly different story. Although you still shouldn't use a 3870.

Actually Tom's was trying to take the top-of-the-line APU and just see how it stacked up, and they priced an Intel CPU with a discrete GPU to be the same price to see that. They made note that the CPU did perform better in multithreaded apps, as is expected with 2 vs 4 cores, and that the IGP is not the best for gaming at $140 if you can get a dual core and a better discrete GPU for the same or a bit less.

I see nothing wrong with it. I would have preferred to see results with a GPU closer to the 3870s, but it would have been cheaper in price.

But you are saying the same as Toms basically did for gaming. If you can get a better option, why buy the A8 3870K?

I am sure if the A8 won you wouldn't be using it as an example for this.

How did you spend $100 on a 6670 when Tom's bought it for $70? The i3 2120 is $130 and the 6770 is $100 = $230 total. The A8 is $140 and the 6670 is $70, making $210 total.

http://3dmark.com/search?resultTypeId=232&linkedDisplayAdapters=1&cpuModelId=1315&chipsetId=688

http://3dmark.com/search?resultTypeId=232&linkedDisplayAdapters=1&cpuModelId=1090&chipsetId=570

So why pit the i3 with a more costly GPU and the A8 with a cheaper GPU? Is it to give an advantage to the A8 so it can hybrid CF and get better performance? Why not take the A8 with a 6770 and the i3 with a 6770?

That's like the people who didn't want to compare Phenom II to Nehalem and instead only to Kentsfield/Yorkfield, or made it an uneven playing field based on whatever made AMD shine.

BTW, 3DMark is not the best way to show real world performance.

http://www.ebay.com/itm/Authentic-AMD-FX-4100-Processor-Key-Chain-/260968373598?pt=CPUs&hash=item3cc2ed5d5e

Nice. I heard Intel does this with their bad CPUs.
 
Of course the P4 outsold Athlons. The real question is, would it have if Intel wasn't bribing OEM vendors and strangling supply to mom-and-pop shops that sold AMD products?

In the long run it cost Intel, what, $3B+ in fines to stay ahead of the game while still netting $30B+; that's a smart marketing move even if it did cost them some bad publicity. It's not like Intel can be thrown in jail, so why not break the law if it devastates your competition beyond repair?

Yes it would have. AMD, and I have shown links to this multiple times, was capacity strained at the time. So the strain on those "mom and pop" shops that Intel was somehow blamed for was actually AMD's doing more than anything. When they started to get large contracts from large OEMs, like Dell, HP, etc., they decided to get their CPUs to them instead of those mom and pop stores.

Still, AMD is an angelic company that would never turn their backs on the little guys if it meant more profit.......
 
Yes it would have. AMD, and I have shown links to this multiple times, was capacity strained at the time. So the strain on those "mom and pop" shops that Intel was somehow blamed for was actually AMD's doing more than anything. When they started to get large contracts from large OEMs, like Dell, HP, etc., they decided to get their CPUs to them instead of those mom and pop stores.

Still, AMD is an angelic company that would never turn their backs on the little guys if it meant more profit.......
Is this the only argument people can come up with to try and defend Intel? Seriously, Dell wasn't selling any AMD till Intel got sued. During the time in question, AMD couldn't sell what they had, and had to shut down fabs.

Ya, shutting down fabs while you're capacity strained ... and on top of the performance.

You don't close fabs when you sell everything you make, but the after-effect makes it look like you can't keep up, since you're short on fabs because you had to close them rather than upgrade.

Intel made that happen. Block sales till you see your competition cut its production in half, then point and laugh when their sales suddenly triple when you lift the embargo. Keeps people from paying attention to the lawsuit.
 
Is this the only argument people can come up with to try and defend Intel? Seriously, Dell wasn't selling any AMD till Intel got sued. During the time in question, AMD couldn't sell what they had, and had to shut down fabs.

Ya, shutting down fabs while you're capacity strained ... and on top of the performance.

You don't close fabs when you sell everything you make, but the after-effect makes it look like you can't keep up, since you're short on fabs because you had to close them rather than upgrade.

Intel made that happen. Block sales till you see your competition cut its production in half, then point and laugh when their sales suddenly triple when you lift the embargo. Keeps people from paying attention to the lawsuit.
What were these fabs that were being shut down by AMD?

Were they old fabs that couldn't make their latest CPUs?

In all the time I have been following this debate, this is the first time I have seen anyone state that AMD had to shut down fabs (especially those that would have produced their CPUs in demand).
 
Sorry to disappoint you but I wouldn't make such an argument or even pretend that it was true.

You also probably haven't noticed that I rarely ever focus on Intel's failures; for the same reason I don't generally post in threads that pertain to Intel chips. I don't feel the need to go into those threads since the subject is rarely interesting. (That makes me wonder why the reverse is not true for Intel fans.)
What you constantly fail to get, Keith, is that just because you are crazy in love with AMD, it doesn't mean that people who currently have an Intel-based system are just as in love with Intel.

Most people who are active on computer forums and have Intel systems are simply crazy in love with computing, and will take whichever company's products work best for them.

They are not on some immature crusade of sticking to one company no matter what.

So people who are crazy in love with computing, will comment all over the place on computing products, not just stick to the makers of their currently owned product.

It also means that when a diehard of one company is making outlandish statements that have little to no basis in the real world, they will face opposition from people who find their crazy claims to be an affront to logic and reasoned debate.
 
Is this the only argument people can come up with to try and defend Intel? Seriously, Dell wasn't selling any AMD till Intel got sued. During the time in question, AMD couldn't sell what they had, and had to shut down fabs.

Ya, shutting down fabs while you're capacity strained ... and on top of the performance.

You don't close fabs when you sell everything you make, but the after-effect makes it look like you can't keep up, since you're short on fabs because you had to close them rather than upgrade.

Intel made that happen. Block sales till you see your competition cut its production in half, then point and laugh when their sales suddenly triple when you lift the embargo. Keeps people from paying attention to the lawsuit.

I remember someone posting an article a bit back, the last time this came up, that actually showed AMD closed the fab before the Intel stuff they claim happened; I think it was because their new Dresden fab was coming online at the time. I wasn't defending AMD at all, but was rather pointing out the one thing some people miss even to this day: Intel, no matter what, has more production power than AMD, having more fabs than AMD.

Even if AMD was selling every last CPU, when they cannot meet the demand, the companies will go elsewhere to meet it.

Then add in Intel's marketing and being more well known, and that means they will still sell even more.

I think if AMD hadn't bought ATI right after Core 2 hit, and Barcelona had been a success, it could have been a much different story. I think AMD would have profited and had the funds to build more fabs and put more into R&D.

But that didn't happen. They overpaid for ATI, Barcelona was not successful, especially with the recall on server parts, and so AMD started taking losses which hit them hard as all get out. If the same events had happened and Intel hadn't "bribed" the OEMs, AMD still would have lost market share and still would have had losses.

Still, this is what I can say: Intel and AMD have gotten over it; everyone else can too.
 
No, I simply pointed out that as a whole, software does not tend to scale well. Period. As such, I viewed a massively multicore design as a way to improve overall performance as fundamentally flawed. I can say this because of years of doing SOFTWARE DESIGN. Same concept as why I bashed Intel's Larrabee when it was announced, as I found the concept of using a few serial processors to make a massively parallel chip close to idiotic in all honesty. If Intel also went in the direction of a massively multicore chip, I wouldn't hesitate to bash them as well.

The fact is, since the majority of the software written is not going to scale particularly well, adding more cores is not going to significantly improve performance. Further, since you get diminishing returns, one could argue that adding more cores is actually a waste of die space after a certain point in time. [I note servers are a different beast, since you tend to have more than one heavy-duty app going at one time. BD would be competitive in the server space, as I've said for a while now.]
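That diminishing-returns point is essentially Amdahl's law. A minimal sketch, assuming a purely illustrative 70% parallel fraction rather than a measurement of any real application:

```python
# Amdahl's law: speedup on n cores when only a fraction p of the work
# can run in parallel. The p = 0.70 here is an assumed example value.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 4, 8, 16):
    print(cores, round(amdahl_speedup(0.70, cores), 2))
# 2 -> 1.54, 4 -> 2.11, 8 -> 2.58, 16 -> 2.91: each doubling buys less.
```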

As for the rest of my BD projections, I looked at the Pentium 4 for comparison, since BD used a LOT of design principles that were similar to the P4 approach [long pipeline for higher clocks, at the expense of IPC, a form of SMT, etc.]. That gave me a baseline for the change in performance. Then I looked at how AMD implemented those concepts, and made my conclusions about the drop in performance. PS: I was right.

At the end of the day, I simply find the concept of adding more cores to increase performance fatally flawed on the desktop. On this point alone, I consider BD fundamentally flawed. Combine that with the reliance on high clocks, which are limited by power/thermal constraints, and this leaves BD with very little room to work with going forward. These two problems alone warrant a redesign in my mind, since even if every other problem with BD were magically fixed [which I doubt], BD still wouldn't be as good as SB.

Meanwhile, you make the argument that since BD is modular, AMD can simply swap out the offending components. You fail to ask some fundamental questions though: Why is the scheduler so bad? Why are cache latencies so high? What design changes need to be done to fix them? What design tradeoffs will result? And most importantly: if AMD could simply have swapped out the offending components without negatively affecting the rest of the processor, why didn't AMD replace those components in the first place?

Hold on a minute with that. My team and I built a program, and we designed it to be many processes which communicated by way of a well-defined IPC mechanism. It worked quite well. Not a high-performance system like a game, but it worked quite well, and it was very easy to build upon. I recall counting 35 or so processes. No one actually wrote a start-up script, so I wrote one in Perl and everyone began to use it. Now this is an extreme case, but we had our reasons for building it that way. My point is that software can be built this way. I am aware that you pay for such a design with complexities in synchronization, and this takes some time to work out, but I believe in it. I was at one point concerned about performance, but we learned that the OS handled it quite well.
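For illustration only, here is a toy sketch of that kind of many-processes-over-IPC layout using Python's multiprocessing queues; the worker count, names, and messages are made up, not the system described above:

```python
# Toy "many cooperating processes over a well-defined IPC mechanism"
# layout: workers pull messages from a shared queue and push results
# back. Purely illustrative, not the system described in the post.
from multiprocessing import Process, Queue

def worker(name, inbox, outbox):
    for msg in iter(inbox.get, None):     # None is the shutdown signal
        outbox.put((name, msg.upper()))   # stand-in for real work

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    procs = [Process(target=worker, args=(f"worker-{i}", inbox, outbox))
             for i in range(4)]
    for p in procs:
        p.start()
    for job in ("parse", "transform", "store"):
        inbox.put(job)
    for _ in range(3):
        print(outbox.get())
    for _ in procs:                       # one shutdown signal per worker
        inbox.put(None)
    for p in procs:
        p.join()
```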

I understand that some processes (in the general sense) cannot be broken down; however, in general I very much believe that if you sit down and agree to do it this way you will be successful. While this does not entirely deal with the issue of making code execute faster, it should be part of the solution, especially today. You are too pessimistic on this.
 
When we talk about software scaling along with more cores, we need to first discuss the nature of the instructions being used. Some types of code are naturally more serialized than others.

Take a bubble sort method, something that's been around since the dawn of ICs. Bubble sorts cannot be parallelized very well due to each operation depending on the results of the previous operation. A CPU having 1 core and a CPU having 1024 cores will do the bubble sort in exactly the same time, assuming all other conditions are equal. The CPU with 1024 cores has three orders of magnitude more power than the CPU with 1 core, yet that capability isn't really expressed.
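To make the bubble-sort point concrete, a minimal sketch: every swap changes what the next comparison sees, so the work inside a pass cannot simply be split across cores.

```python
# Classic bubble sort: each comparison/swap depends on the result of the
# previous one, so extra cores cannot straightforwardly share the work.
def bubble_sort(items):
    a = list(items)
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                # this swap determines what the next comparison sees
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```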

Most of the code processed by today's programs is written to handle one thing at a time. They're doing all their collision detection, physics, and graphics loading in sequence. It's a giant loop that checks a bunch of things in serial and signals a sub-process to handle something while the main loop waits for the return to continue onward. To get out of this requires the coders to think radically differently and start to program each component to run separately from everything else. This raises some big sync issues that need to be resolved.
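A rough sketch of that "giant loop" structure; the stage functions below are generic stand-ins, not code from any particular engine:

```python
# Schematic single-threaded game loop: each stage runs only after the
# previous one finishes, so extra cores sit idle while the main loop
# grinds through input, collisions, physics, and rendering in order.
import time

def poll_input(state):
    state["frame"] += 1        # 1. read input (stand-in)

def detect_collisions(state):
    time.sleep(0.001)          # 2. collision detection (pretend work)

def step_physics(state):
    time.sleep(0.001)          # 3. physics (pretend work)

def render(state):
    time.sleep(0.001)          # 4. graphics, only after steps 1-3 finish

def game_loop(frames=3):
    state = {"frame": 0}
    for _ in range(frames):
        poll_input(state)
        detect_collisions(state)
        step_physics(state)
        render(state)
    return state["frame"]

print(game_loop())  # 3
```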

Ultimately computer code is just the virtual expression of the coders who create it. Most humans tend to think on a single train of thought at once; thinking about something else requires that they pause their first train and task-switch inside their brain. Before we see truly parallel code we need to first teach ourselves how to think on multiple trains at once.

I agree so much with this. Believe me, the synchronization can be done, the tools are there. There is no cheating with this, though.
 