AMD Piledriver rumours ... and expert conjecture

Page 64
Status
Not open for further replies.
We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
Phenom I would have been on par with C2Q if it hadn't been plagued by the TLB bug, whose fix cost roughly 10% of IPC. Phenom IIs are ahead of the first-gen C2Qs, which perform more like Athlon II X4s.

http://www.tomshardware.com/reviews/processor-architecture-benchmark,2974-15.html

Per that, it's an even match with each at a single core running at 3 GHz. Phenom II is basically trading blows with first-gen Core 2, and is still beaten by second-gen Core 2.

First gen Phenom vs First gen C2Q:

http://www.tomshardware.com/charts/cpu-charts-2008-q1-2008/compare,369.html?prod%5B1275%5D=on&prod%5B1308%5D=on

A CPU a year older than the Phenom, and not even a "true" quad, still beating it. Same clock speed, same number of cores, and no TLB patch applied; the patch could be turned off. These were first-run results. The TLB patch came later, and there was no reason to apply it unless you ran a server where the bug might actually occur.

Sorry, but Phenom I missed the mark. Phenom II hit the mark Phenom I should have, and it still took a higher-clocked CPU to do that against the equivalent second-gen C2Q.

http://www.tomshardware.com/charts/desktop-cpu-charts-q3-2008/compare,836.html?prod%5B2194%5D=on&prod%5B2163%5D=on

B3 Phenom I, the 9750 (the 50 suffix signified the B3 stepping, with no TLB bug and thus no patch needed), still not better than a C2Q.

http://www.tomshardware.com/charts/2009-desktop-cpu-charts-update-1/compare,1396.html?prod%5B2619%5D=on&prod%5B2635%5D=on&prod%5B2607%5D=on

Phenom II, on 45nm with higher clock headroom: still not better than first-gen C2Q, more like on par as I said, and still way behind second-gen C2Q.

Phenom II did make up for a lot that Phenom I left to be desired. But it in no way kept up with Sandy Bridge, as I have seen claimed, nor did it "beat" C2Q. It was, and is, on par with C2Q, a CPU released two years earlier.

BD is another Phenom I. PD should be its Phenom II; at a minimum, it needs to be. But something to completely smash SB and compete with IB? Considering Intel isn't pushing IB out in Q1 like it did with previous CPUs, I don't think they are worried.
 
They used Opteron to streamline their lineup; I would imagine they will allow the desktop APU to use any 7000-series card. Whether that comes with Trinity, Kaveri, or later is undetermined.
But the HSA aspect is what really intrigues me. That "up to 113%" is ominous.
Plus the clock mesh.
... the (up to) 113% improvement came from a simulated Llano CPU with L3 cache. It has been explained why an entry-level product like Trinity would let go of the L3 cache: to keep the price down. Additionally, software would have to be customized for that simulated Llano for the improvement to take effect.
Why in the hell would you ever build a gaming desktop with an APU in mind? It's mind-blowing. There is no "they" here; you're taking a product, trying to put it in a role it was never designed for, and then, when it fails that role, you call the product cheap and write it off entirely.

APUs on a desktop are generally a bad idea, period. There is a small segment in low-profile HTPCs, and that's about it.

The APU's intended competition is the ... drum roll ... Atom. AMD is only offering APUs in the desktop segment for OEMs to play with.
3 GHz (OC) quad-core A8-3850 + 8 GB DDR3-1600 + 6850/GTX 560 Ti (after using the IGP for a few months); 3 GHz would be higher than the i5-2300's clock rate.
I know it's mind-blowing... because if it had worked as a proper Athlon II replacement, it'd have saved me money (it was much cheaper than the other quad cores at the time) and I'd have a capable PC. ...I am not writing it off; I never did.
When Llano came out, the desktop parts were pitted against Core i3, not Atom. APUs cover everything from low-end desktop to mobile. APUs like Brazos compete against Atom, while Llano competes against everything from Atoms to Pentiums, Core i3, and (mostly mobile) Core i5.
If AMD doesn't offer APUs for desktop, they lose a huge market: office PCs, all-in-ones, and low-end prebuilt PCs.
 
How do you work this out?

HP was AMD's knight in shining armour for Server sales.

Hmmm... I heard rumors, note: rumors, that many vendors (like HP) were 'threatened' by Intel: if they bought AMD products, said vendors would be cut off from cheap Intel deals. Basically, if you went AMD, you would miss out on sweet Intel-only prices not seen on the open market. Not necessarily price fixing but, you know...

*nudge, nudge*
 
For you? Nah. No need to. People know that is what AMD was trying for. They missed the mark; probably GF's fault for screwing up 32nm.
figures


Thanks for the insider info, since you seem to have Intel's specific sales numbers.
don't need a weather man to know which way the wind blows


2014... that's when the 14nm shrink of Haswell will occur. Not sure if Excavator will do well unless AMD at least hits 22nm by then.

Haswell should be nice, and so might Excavator.
AMD may be a few years behind, but they do have good ideas.
In fact, they've been putting out the best ideas lately.

They keep saying it's GCN, but early AMD slides said VLIW. Guess we'll see when it hits.



Games would still have to be coded for ARM, though. Sure, you could probably port Android/iOS apps to Windows 8 on ARM, but games like Skyrim would need a rework of the game engine to run on ARM instead of x86.
 
The figure below is a module of Piledriver.
- Hiroshi Goto (Google's translation).

...The top two stages are the front end, the middle stage holds the two integer cores, and the bottom is the floating-point unit. Compared with the Bulldozer module below it, the placement of the units themselves does not appear to change in Piledriver. However, according to AMD, the clock distribution has been renewed, improving its efficiency by 24%. The clock actually accounts for 24% of the CPU core's power, so the effect is large.

[Image: 10.jpg (Piledriver module diagram)]
 
AMD licenses Cyclos technology

At the recent ISSCC, Cyclos Semiconductor announced that AMD has licensed its "Resonant Clock Mesh" technology for its next-generation processors; the technology is expected to improve performance and lower power consumption in "Piledriver" parts, including the APUs.
The Resonant Clock Mesh uses on-chip inductors, forming what are known as "tank circuits", together with Cyclos' clock-control circuitry to recycle the clock network's energy rather than letting it be dissipated and wasted every clock cycle, reducing power consumption by 10 percent or more. Resonant clocking has been explored in high-performance chips for many years; this agreement is the first commercial licensing of RCM.

(from a leaked Chinese source)
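The claimed savings follow from simple dynamic-power arithmetic: the clock network burns roughly P = C·V²·f each cycle, and a resonant mesh returns part of that charge to the supply instead of dumping it to ground. A back-of-the-envelope sketch in Python (all numbers here are illustrative assumptions, not Cyclos' or AMD's actual figures):

```python
def clock_dynamic_power(c_farads, v_volts, f_hertz):
    """Dynamic power burned charging and discharging the clock
    network each cycle: P = C * V^2 * f."""
    return c_farads * v_volts ** 2 * f_hertz

def core_savings(clock_share, recycled_fraction):
    """Fraction of total core power saved when a resonant mesh
    recycles part of the clock network's energy."""
    return clock_share * recycled_fraction

# Illustrative numbers only: if the clock network is ~30% of core
# power and the mesh recovers ~1/3 of that energy, total core power
# drops by about 10%, consistent with the "10 percent or more" claim.
print(round(core_savings(0.30, 1 / 3), 2))  # -> 0.1
```

The point of the sketch is that even a modest recycled fraction translates into a visible total-power saving, because clock distribution is such a large slice of a CPU core's dynamic power.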
 

A huge 10-15%?
Cyclos has shown up to 30% is possible.

A little IPC gain, higher clocks, Cyclos' lower power consumption, HSA, an ARM module...
a big uplift is coming for AMD's APUs.
I think Intel is going to have its foundations shaken real hard.
Meanwhile, IB is delayed to Q4 or to 2013.
AMD caught them sleeping at the wheel with mobile Brazos and Llano, it seems.
Trinity will cripple Intel's mobile sales.
Intel better hurry Haswell, and it better have a top-flight IGP.
 
Thing with that is, quad cores today are probably a lot slower than quad cores will be in three years, and while you may be more "future proof", you will still have a rather slow computer when the cores are being utilized. I do believe we are moving into the quad-core era, though, as they are becoming the standard.
This is true, and I know that. If you walk up to a friend who just wants to buy a new computer that will still be fast three years from now, and start explaining that this CPU is better than that one, he will either believe you or not.

To someone who doesn't know much about CPUs, a Phenom II X4 at 3.7 GHz probably looks faster than an i5-750 at 2.66 GHz. They are both quad cores, but the PII is clocked higher.

Also, if they didn't have a friend who knew about this stuff, they would be inclined to buy the PII.

If a CPU is faster now, people expect it to be fast later too. As fast as the latest and greatest? No, but still fast.
 
As long as AMD can meet demand with Trinity.
 
But you'll have to look at the 90% of the PC market that doesn't look at what CPU a machine is using, only at what brand the OEM is...
Even when they purchase DIY PCs, most of them still don't know what AMD is.
Unfortunately. APUs are starting to change that, it seems. It is pretty sad that so many people who buy PCs don't know who competes to make the parts. They just figure it's all between the packagers: HP, Dell, Gateway. I don't have a single friend who knew what AMD was before I told them.
 
+1. In fact, the Phenom II is slower per clock than the C2Q; it's just clocked high enough to be faster per core.

http://www.tomshardware.com/reviews/processor-architecture-benchmark,2974-15.html

The performance per clock of Stars/K10 vs the Core 2s is HIGHLY program and OS dependent. Most of my heavy lifting is media encoding, compiling, and HPC type of programs and I run them on Linux. I have a few different machines at home (dual Opteron 6128, dual Xeon X5260, dual Core Duo T2500, a single C2D T7250) and fortunately they can all be run at one common speed (2.00 GHz) without underclocking any buses or RAM or anything. Linux also lets you shut off/turn on CPU cores on the fly. So of course I have benched them against each other with various programs that I use, such as parallel BZIP2, FFMPEG, various GMPlib HPC programs, and such. Core for core and clock for clock, the Opteron 6128 is the fastest CPU by quite a bit in most tasks. In general, a single 2.00 GHz Stars core performs about like a 2.67 GHz Wolfdale core in Linux, and at worst, it performs like a 2.00 GHz Wolfdale core. Perhaps the underlying platform had a little influence, but demonstrably not all that much as the 2.00 GHz C2D T7250 GM45 with unbuffered memory performed core for core pretty similarly to the 2.00 GHz Xeon Wolfdales despite the latter's high-latency FB-DIMMs. So, benchmarks all depend on what exact programs you run, don't over-generalize. 😉
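The "core for core and clock for clock" comparison above is just measured throughput divided by active cores and clock speed. A minimal sketch of that normalization (the scores below are hypothetical, not the poster's actual measurements):

```python
def per_core_per_ghz(score, cores, ghz):
    """Normalize a throughput-style benchmark score to per-core,
    per-GHz terms, so CPUs with different core counts and clocks
    can be compared core for core and clock for clock."""
    return score / (cores * ghz)

# Hypothetical scores (e.g. MB/s from a parallel bzip2 run):
results = {
    "cpu_a (8 cores @ 2.0 GHz)": per_core_per_ghz(200.0, 8, 2.0),
    "cpu_b (4 cores @ 2.0 GHz)": per_core_per_ghz(80.0, 4, 2.0),
}
for name, normalized in results.items():
    print(f"{name}: {normalized:.2f} per core-GHz")
```

This only works for throughput-style metrics that scale with cores and clock; as the post says, the resulting ratios are still highly dependent on which programs you run.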
 
Like how Bulldozer performs very well in highly threaded programs.
If I were rich enough to have a dedicated computer for video editing/rendering, BD would be a good option.
It also performs well in Photoshop.
 
I'm basing my info purely on what you've posted. This goes all the way back to the BD speculation thread. You spread lots of BD hate even before it was released; just about everything you posted was "don't buy AMD, buy Intel instead". I actually can't remember you saying anything positive about AMD during this entire time, even when they did do some things right.

That's bogus - all Chad & JS did (as well as myself) was counteract all the unfounded 'guarantees' of super performance made by the AMD fanbois, such as Baron who based all his 'findings' on having 'looked at BD's design' or some such nonsense. While JS and I were more conservative, Chad went out on a limb and predicted that BD would be a 'dud', and turns out he was pretty much spot on, in the sense it made little advance over AMD's previous generation.

Case in point: that whole line about AMD being successful in the light, power-saving mobile sector. A place they actually fit perfectly: Sabine (mobile Llano) is just an underclocked Phenom II without L3 but with more L2 and an AMD budget GPU bolted on. That combination has proven to be made of win. It's received positive reviews from several sites, and great reviews from notebook-specific sites.

While I agree that Llano is worthwhile for the budget-minded mobile user, you do realize that AMD promotes it as also suitable for desktop, etc. Also, note that AMD's margins remain low compared to Intel's, a sure sign that selling all those low-profit Llanos is not going to do much for AMD's bottom line. Instead, they need to get back into the high-profit server market.

Anyhow, let's all agree to stop insulting each other. We're all intelligent enough to analyze the information presented and come up with our own conclusions. I don't want the thread to devolve into haters fighting each other.

Sez the guy who lost it and uncorked the insult bottle a few pages back, when his Prof. Knowzitall act was challenged with links and he failed to respond appropriately.

If you're here to learn and debate the facts and rumors, fine. But throwing tantrums in public, calling people fanboys when you disagree with them, and mischaracterizing their positions is not.
 
Last I checked, AMD was pushing to put APUs in tablets and smartphones 😛.

?? IIRC at the analyst day, Reid did not make any commitment to either tablets or smartphones. While there was some discussion of licensing ARM I believe, no concrete statements were made, that I know of anyway. Previously, AMD stated fairly conclusively that they had no intention of competing in that space..
 
Is that why $1000 parts languish on the shelf?

To quote you: "if you have a link please share it". IOW, just how do you know "$1000 parts languish on the shelf"?

Only an Intelidiot would buy chips for more than $400.
But you have a point: there are a lot of Intelidiots.

There are a number of people who can easily justify spending $$ on the best workstation performance that they can buy, such as those who make a living creating video commercials or website content, etc. If they have deadlines to meet, or multiple projects to juggle, then they're likely to spend far more than the average gaming enthusiast here, esp. considering that if they are US residents, they get to depreciate their capital expenses.

But you are right, in that some posters, perhaps a few with e-peen issues, are willing to spend multiple thousands on a top-end gaming machine that will start going obsolete the minute they turn it on for the first time 😛..
 
Not to commit to a logical fallacy, but that seems to be the current trend. I still love AMD.

I believe what Triny did to 'measure' Excavator's performance increase as "10X" that of BD was to compare the relative distances to the bottom of the chart. However, he should be aware that (1) it's only an AMD marketing slide, no guarantee of anything, and (2) sometimes even charts that do have a numerical scale shown (unlike the one he refers to) will resort to chopping off the bottom 90% or so, to amplify the otherwise minuscule differences between whatever the chart is comparing.

Quite frankly, after Barcelona and now Bulldozer, I find AMD's marketing credibility relatively low compared to Intel's, seeing as Intel usually comes much closer to the mark. So I'll believe it when the independent third-party reviewers say it's so. Sorry AMD, but you dug your own hole.
 

Just that lil' bold line... Intel helped AMD a lot in that feat, cutting its own profit quite a bit for a couple of years 😛

Anyway, I'll stick to "enough CPU to let the GPU do its job" with Trinity. If it's more than that, it will be very welcome indeed. I just hope VCE handles 4K decoding 😛

Cheers!
 
Resonant Clock Mesh
HSA
ARM module
They are ganging up on Intel.

http://www.eetimes.com/electronics-news/4215518/ARM-working-on-AMD-to-drop-x86

ARM would be a TERRIBLE idea for AMD. That's why they keep passing on it. Read the end of the article: "Hitherto we haven't been successful."

That would make them more like Nvidia with its Tegra platform, which Nvidia LOST money on through years of development. Maybe Tegra 3 will make money, but that's a third-generation product.

It's too late for AMD to start using ARM. They wouldn't see revenue from it for 3-5 years.

 

- Enjoy brilliant visuals in an HD tablet powered by an AMD APU..."

http://www.amd.com/us/products/notebook/apu/Pages/tablet.aspx

You can't miss the video.
 
fazers_on_stun
On a roll this late morning, aren't ya chap...
covering all the stuff you missed since you were last here.
(hit man / cleaner)
LOL.

Actually, after seeing the THG news article some 10 days ago or so, I downloaded Dragon Age Origins and DA2, complete with all the extra premium content, for the low low price of $12 and have been playing them over again when not busy with work. 😀

However, today I had to stop since my wife is getting tired of being a gaming widow again. But she got called into work (this is supposed to be both our days off), so just as soon as I hit the 'submit' button, guess what I'll be doing... 😛
 

The way AMD is using new ideas, you never know what they'll roll out next. They have a third-party dock on board;
I doubt they went to the trouble of devising the on-chip third-party dock for nothing.
It could be a good power-saving benefit for ULV solutions.



 