AMD CPU speculation... and expert conjecture

Status
Not open for further replies.


Oh yeah, but those are just implementations from nVidia and AMD's graphics division. I used Lucid's MVP because it's another solution that actually manages to deliver something interesting without using hardware knowledge from nVidia or AMD specifically (AFAIK). They just squeezed performance out of the integrated GPU with another paradigm. AMD and nVidia need to open their minds, I guess :p

And you're not alone in that sentiment. Most enthusiasts, except the SLI or CF crowd, skip multi-GPU and go straight to a single high-tier card. That's what I do as well.



AMD has been using 940-ish pins since 2003 (approx.) with socket 939 and has been improving on the same layout by re-arranging the pins here and there. That's good and all, but I want them to produce something faster than the crappy HT link they have now and give us a new platform for the future (APUs, remember?). Also, to include native USB3 without hogging the whole PCIe bus, you DO need more lanes; same for integrated SATA controllers. If you want to move more NB logic into the CPU, same deal. There's a lot of "little important things" that point to a socket change more than a new chipset (since one implies the other sometimes, haha).

Whether it's a socket change or a chipset change, they have to re-arrange things to get more stuff packed in.

Cheers!
 

Chad Boga

Distinguished
Dec 30, 2009


You are a very confused individual.

Being able to assess that one company is better placed than another, is not worshipping the better placed company. It is honestly analysing the landscape.

However, what would you call someone who is the ONLY person to foresee completely unrealistic problems for one company?
 

i think they're adding wrong....? then again, think about where bd set the bar....

anywho, i hope these xbitlabs guys turn out to be wrong about the slide donanim haber published.
http://www.xbitlabs.com/news/cpu/display/20121105000202_AMD_s_Roadmap_Slide_Does_Not_Predict_Steamroller_in_2013.html
they're saying that 'richland' i.e. pd cpu core with gcn igpu will replace 'kaveri' apus (steamroller cpu cores with gcn igpu). no mention of fabrication process.

AMD Releases New Opterons as CIO and Corporate VP Leave the Company
http://vr-zone.com/articles/amd-releases-new-opterons-as-cio-and-corporate-vp-leave-the-company/17735.html
AMD's New Piledriver Opterons Claim to Match Intel's Performance at Half the Price
http://www.dailytech.com/article.aspx?newsid=29118
AMD’s Bulldozer core compared with Piledriver
Core vs core vs the promised numbers
http://semiaccurate.com/2012/11/05/amds-bulldozer-core-compared-with-piledriver/
 


"upto", meaning in special case, in order to drive up excitement and early sales numbers.

Seriously, 45%? Think people.
 


30% is pretty large from any company. Intel's Core 2 was massive due to the lackluster Prescott core vs Northwood, and because AMD sat on K8 for too long, enjoying the spoils of victory without moving forward.

As for Haswell, 10% seems a bit low. I would assume it could be anywhere from 5% to even more once apps start using the transactional memory (TSX) system. Plus this is a "Tock", not a "Tick", meaning that unlike IB, which was a shrunk SB with a majorly tweaked GPU, this will probably give more gains than SB->IB did, much like SB gave more gains than the 32nm shrink of Nehalem did.

That said, considering that PD was delayed so long, and that we cannot even find them yet for our store to carry (I didn't even see them on Newegg until recently), I doubt SR is going to come anytime soon. Add in the fact that such a quick release would undermine and probably kill any chance to recoup R&D and manufacturing costs: who would buy PD when SR is just around the corner? It would be like buying a new car knowing the next, better one is coming out in a few months for the same price.

I also doubt we will see much about it apart from "rumored" leaks which will have to be judged accordingly since, well some may be true and some may be false.

Either way, AMD needs to start pumping on all cylinders or they might just be left in the dust; they are already eating some of it.

As for the socket/chipset, AMD is very behind. They don't have native USB3 nor PCIe 3.0, and yes, I agree that most of those are moot for now, as Thunderbolt will easily replace USB3 when it becomes cheaper and more mainstream (already on some mobos for AMD and Intel), and PCIe 2.0 is barely saturated even by dual-GPU cards. Still, people like to have the features there, and I like knowing that I can throw an IB CPU in my mobo and get PCIe 3.0 speeds with it.

Maybe AMD has a new chipset planned soon. Maybe not.
 
After reading the earlier slide I am now convinced that Steamroller and Excavator will have a new or unified socket/chipset. Another interesting factor is that Southern Islands is referred to as Radeon 1.0, while the slides show Radeon 2.0, which implies GCN 2.0 cores; targeting a hybrid HD8700 core could see power/performance rise substantially.

Anyway, I don't really think there is any point in comparing x86 performance to Intel, as AMD have basically, implicitly, pulled out of that race; the fair comparison will be whether it is significantly better than prior releases.

So to take that link: the APUs out next year are not Steamroller APUs but refined Piledriver-based APUs. Again, I imagine a unified socket and new chipset for Steamroller, which is understandable considering AM3+/FM2 would seriously hold back the SR-based chips.

I don't really see AMD doing much until the European/American recession is relieved; the PC market is diminishing at a rate of knots, so it's pointless to release SR amidst a global recession. By the time the EU and NA are out of recession, HSA will have matured a lot. I can now theoretically see AMD holding back SR until the market is on an upward curve.
 

noob2222

Distinguished
Nov 19, 2007

I've given links to several people who speculate that there will be a thermal limit; the only unrealistic problem is your blind insistence on pretending that Intel cannot ever be questioned.

http://www.theinquirer.net/inquirer/news/2171299/intel-admits-ivy-bridge-chips-run-hotter

"Intel said the shrink to the 22nm process node leads to higher temperatures due to increased thermal density,"

ya, thermal density is an unrealistic problem that no silicon in the world has ever had or ever will, and even that story about Intel stating that 22nm has a higher thermal density must be fake.
 

Chad Boga



I'm sorry that you don't seem able to grasp that most problems have solutions, and that, due to your inability to control your wishful thinking, you have made yourself look silly by assuming Intel isn't going to have a solution ready for 14nm.

Broadwell can be your friend.

 

mayankleoboy1

Distinguished
Aug 11, 2010



A major HW change is the doubling of internal bandwidth plus the doubling of the width of some registers for the AVX2 and FMA extension sets.
BUT, these have been present in BD for some time now (along with XOP), and no software is using them! Linux has better compiler support for AMD, and BD/PD don't suck so much on it; in fact they are almost competitive. But even on Linux, no software uses FMA/AVX/XOP instructions to positively gain performance. Phoronix recently did a comparison of PD performance with compiler tuning; none of the software it tests uses AVX, let alone the more exotic XOP and FMA.

SB has been with us for some time now, and yet very few programs use AVX. Maybe when big brother Intel adds FMA, more coders will code keeping that in mind. But even then, use of these new ISA extensions is very limited.

I do not know how closely software developers are looking at the two versions of Intel's TSX. Maybe Gamerk316 could tell us how feasible/attractive this is from a coder's perspective? IIRC, one is a simple recompile; the other is adding some keywords in code before loops.
 
SB has been with us for some time now, and yet very few programs use AVX. Maybe when big brother Intel adds FMA, more coders will code keeping that in mind. But even then, use of these new ISA extensions is very limited.

Again: developers do NOT manually insert special CPU opcodes. It's not the 1980s anymore. If AVX or SSE are used, it's almost always via automatic instruction replacement/optimization during compilation. Manually inserting CPU opcodes is very rare, limited to a handful of applications.

Point being: until companies upgrade from Visual Studio 2003/2005 (or heck, Visual C 6!), you aren't going to see a lot of apps using this functionality.

I do not know how closely software developers are looking at the two versions of Intel's TSX. Maybe Gamerk316 could tell us how feasible/attractive this is from a coder's perspective? IIRC, one is a simple recompile; the other is adding some keywords in code before loops.

The issues are the same as with OpenMP: you can break a LOT of things in a lot of unreproducible ways if you aren't very careful. I do see TSX catching on to some extent, but I think OpenMP will be the standard-bearer.


Also: Kaveri slips to 2014:

http://semiaccurate.com/2012/11/06/amds-kaveri-apu-slips-again-2014-now/

EDIT

bawchicawawa beat me to it. :(
 
oh come on... :(
2014?!? that sucks... for me, because i was wondering how 28nm kaveri would fare against 22nm haswell. previews looked promising too. although, i suspected glofo (instead of amd themselves) would be the major cause of any kaveri delay if there were one.
noticed that s/a cites their own sources (and they posted something similar way before), unlike others who have been analyzing the leaked roadmap published by donanim haber. two different rumor sources still make a rumor... but it does give the delay some ground.

on a related note, if kaveri is indeed delayed, it might end up competing against broadwell. broadwell is supposed to receive a major igpu upgrade like ivb did (haswell gt3 is just more shaders slapped on the back of the usual igpu). so... all the enthusiasm(!) over kaveri 'kicking' haswell's 'ass' seems funny now... thank amd for that! :D
it may be the right decision for amd - compete with jaguar instead. bobcat was successful.

edit: what i'm trying to say is: amd will still update the apus with 'richland' and jaguar in 2013 even if kaveri doesn't make it. imo jaguar has better potential than kaveri. right now, both llano and trinity apus are available, and i think they are cannibalizing each other. by 2014, amd will have a better-tweaked and cleaner market (if they don't screw up richland marketing and supply). too early to feel down. :)
 

mayankleoboy1



But the compiler can't vectorize every piece of high-level code you write. The coder has to give some keywords to the compiler to tell it to vectorize a specific piece of code, or change their loops to be more data-parallel so the compiler can optimise them, or change their coding style to make it more accessible to compilers, or use a library which has this implemented. But then someone has to write that library as well. So why aren't library developers / low-level coders using AVX?
 


Jaguar seems the bigger draw right now along with Trinity; I guess Trinity 2.0 will be a rehash and somewhat of a guinea-pig run for Kaveri. I still think Steamroller will have a new socket and chipset for FX and APU based chips.
 




Hmm, this is close to what I was saying in the now-defunct Piledriver thread, just a few days ago - Steamy would be delayed until 2H 2014.

Of course, the S/A article goes a bit further and speculates it may wind up canceled entirely, which is what OBR was saying just a few months ago :p..
 


S/A says nothing new out next year, and also AMD is trying to revise Steamy to be more competitive with Haswell:

Between Haswell’s graphics capabilities and their power savings potential, it likely became clear that Kaveri wasn’t aiming for the right ballpark. Between AMD’s self-inflicted wounds…. errrr…. roadmap delays and a reevaluation of the competitive landscape, Kaveri kept slipping. What was meant to ship about now became a 1H/2013 product, and as early as last July, SemiAccurate moles were bringing up the dreaded R word.

Yes, Kaveri was being “reevaluated” back then, ostensibly to make it into something more competitive against Haswell. Pushing out a chip to tweak it like this is almost always a losing game, but if done well, it can have better results than the original plan. Ostensibly, that was the situation with Kaveri, and as of summer 2012, it was still in a great deal of flux. That said, it was unquestionably not a 1H/2013 product at that time; the ‘Richland’ update to Trinity had already taken its place.

More recently, SemiAccurate’s moles have come back and said that Kaveri has slipped yet again to 2014 if it is still alive. Big if. That would put the chip 18 months late best case, and then you have to ask yourself why not just skip it? Time will tell what happens, but one thing that is not in question is if there will be an updated APU in 2013. That answer is definitely no, Intel has the game all to itself next year.
 
^^ like mr. burns? :p
imo jaguar is better positioned for competing against both clovertrail and ultramobile haswell. now it depends on how amd markets them.
otoh this means that amd has decided to 'ignore' desktop for the time being.
 


Well, if Kaveri does see the light of day, I'm sure it'll get benched against Haswell as well as Broadwell, assuming Intel sticks to its roadmap: 22nm Haswell out in April next year, 14nm Broadwell out in April 2014, and 28nm Steamy out, say, September 2014. Much more delay than that and Steamy will be going up against the 14nm tock successor to Broadwell, whatever that is.
 


If you read Charlie's tweets following the article, it seems like he is just really disappointed in AMD, not gloating in their problems. He obviously doesn't like Rory Read, and the fact that two more AMD senior-level employees just jumped ship suggests maybe his viewpoint is valid. Dunno if they were forced out due to the layoffs, if Read wanted to replace them with his own people, or if they just left for a better future. Or all three, for that matter...
 
1] there are trinity updates for 2013, so it's not entirely without revision

2] if the goal is to allow better technology to be available to you, then hold out for it. From what I last read, AMD has 1 more year with the now-defective GLOFO; after that they can move away. Why put SR on a strained GF SoC process when TSMC after 2013 may be a realistic possibility? That bodes better, as TSMC have a better process than GF, and of course Radeon GPUs will be on the same process, possibly both on 22nm, which again is much better than what AMD have with GF.

3] why the rush? Time and endeavor spent on R&D is certainly more valuable to AMD right now than just releasing for the sake of it. Jaguar and Richland for 2013 to re-evaluate the APU platforms; Trinity 2.0 should be aptly suited to deal with the efforts of HD5000
 


1: Because it's a waste of time for very little performance gain in applications that are, for the most part, not performance sensitive.
2: Because most developers don't know HOW.
3: Lack of compiler support (again, Visual C 6 is probably still the most used Windows compiler, which is SAD, but true).

You aren't going to see developers waste their time developing something like this:

if AVX_Supported then
...
else if SSE_42_Supported then
...
else if SSE_41a_Supported then
...
else if SSE_41_Supported then
...
else if SSE_3_Supported then
...

and so on and so forth. It's a waste of time that could otherwise be used to hammer out the other outstanding bugs. Sure, you have a handful of applications (games/encoding) that actually need the performance gain, but for everything else, what's the point?

This is the exact same reason why, if you have a 'for' loop that looks like this:

for i in 1 to Some_Really_Huge_Number
a = Some_Number
loop

you don't see developers thread it, even though this example would thread well: because it's a waste of time for very little performance benefit. (At least in this case, you can use OpenMP to automatically parallelize the construct, which is an improvement... if you're using OpenMP.)

Seriously, developers are often working unpaid overtime as deadlines approach, trying desperately to quash every remaining major bug before some corporate-mandated release date, then working more overtime to get the first patch out when customers yell and cause a PR nightmare, all while being blamed by upper management for the state of the product despite having warned that months more development time were needed. (I have stories; let's leave it at that.) And then I get people who complain that I don't use the latest and greatest CPU opcodes to squeeze an extra 0.01% performance out of an application that is not performance sensitive!
 

noob2222


I have stated several possible solutions, but you seem to just want to ignore all of them. Even my initial statement was based on Ivy Bridge release data, well before we had any clue what Haswell was about. It looks like Haswell is designed to reduce power at all costs.

What we don't know: how much is power reduced at 3.6 GHz? All they want to brag about is their ultra-mobile power reduction.

The only wishful thinking around here is you wishing that everyone who doesn't worship Intel is a complete moron.

Secondly, I never limited this to Intel; it just happens that Intel will get there first. Imagine the horror of AMD's 125W CPU on 14nm.
 