AMD CPU speculation... and expert conjecture



There was a rumor of it but most people laughed it off.
 


This just shows exactly how badly GF is doing ... 28nm SOI in 2012, and here it is nearly 2014 with no sign of it at all. 28nm HP was available in 2011 ... In July 2012, GF vowed to ramp up 28nm quickly ... http://www.xbitlabs.com/news/other/display/20120705231600_Globalfoundries_Vows_to_Ramp_28nm_Production_Quickly.html

Even though Globalfoundries will be late with its 28nm process technology compared to its closest rivals

Wait ... it was available in 2011 ... but late in July 2012 ... and now it's 2013 and we are finally going to see a product?

Even Qualcomm was supposed to be using GF's 28nm, but all reports are that they only have Samsung and UMC making the chips because TSMC couldn't meet their demand. On top of all this, GF wants 20nm by the end of 2013 ... and 14nm by mid 2014 ...

OK, so here is a theory on the timeline: in 2011, supposedly, GF will have 28nm SOI production; AMD says nothing. In 2012, GF realizes they can't get 28nm SOI working and notifies AMD. AMD ... wtf ... let's design with another foundry. AMD works on bulk 28nm. In early 2013 GF says, "oh hey, we can get 28nm FD-SOI".

The true unknown: was it GF or AMD who outed SOI first?

As for the comment about ARM, I was only bringing it up for relevance to his viewpoints. If he isn't accurate about that, how can he be accurate about "AMD decided on their own to go bulk"?

At the end of it all, GF SUCKS and AMD can't get out of their contract, a contract that may end up killing AMD.
 
Since this thread is approaching 10k and I have nothing worthwhile to add (typical!!), I decided to circle back to (surprisingly on-topic \o/) a comment from the first page of the thread.
I forget whether I'd known about Jaguar being used in the new consoles back then.
 


In my BSN* article I predicted Kaveri would match an i5 2500k without HSA/Mantle. Since then, I have obtained benchmark data from a real Kaveri sample compared to a 2500k. I am still analyzing the data, but at this point I can say that it seems to confirm that Kaveri matches an i5 2500k without HSA/Mantle.

I will write an update to my article soon.
 


I don't know much about the numbers for Kaveri, but I was also expecting Kaveri to be as fast as an i5 in CPU workloads... I really hope it is; I want to skip the FX line completely.
 


That seems awfully optimistic. I cannot see that happening architecturally. May I ask what your grounds are for making those claims (besides synthetic benchmarks)?
 
tracker: Too hard to say, since the CPU core and the silicon have undergone some changes. However, the basic performance targets should hold, more or less; that is, AMD's targeted 15% more perf/watt. It might finally bring stock per-core performance very close to Sandy Bridge level. As for multithreading, it'll be slower than an FX 6000-series chip in parallel tasks like 2-pass video encoding, but faster than the A10-6800K's CPU. It's hard to put a hard number on the improvement yet (i.e. all speculation). Wait for the reviews.
 


Some say i5 2500k level, some say only a little better than Richland, like about 10%.

AMD claims up to 20% faster than Richland, and Richland seems to be about 20-25% slower than the i5 2500k.

Assuming AMD indeed manages a 20% increase from Richland to Kaveri, that will in fact make it competitive against the i5 2500k, which is basically an early 2011 product... if it is indeed 20% better than Richland, it took AMD 3 years to get to that performance.

Kaveri does not even need to match 100% of the i5 2500k's performance; as long as it's not 10% or more slower, it will all be good news for Kaveri. If not... it will be yet another failure for AMD, thanks to GlobalFoundries' vampiric abilities.
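(A quick back-of-the-envelope check of how those speculated percentages combine; every figure below is the thread's own guesswork, not measured data.)

```python
# Combine the two speculated figures: Richland ~20-25% behind an i5 2500k,
# and AMD's claimed "up to 20%" gain from Richland to Kaveri.
richland_vs_2500k = 0.78   # assumed midpoint of the "20-25% slower" guess
kaveri_gain = 1.20         # AMD's claimed uplift over Richland

kaveri_vs_2500k = richland_vs_2500k * kaveri_gain
print(f"Kaveri vs i5 2500k: ~{kaveri_vs_2500k:.2f}x")  # ~0.94x, i.e. within ~6%
```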
 

In terms of design, it might be possible, but in real life, no. The FX-8350 die is quite big already, and tacking on a 512-core iGPU will not make it any smaller. Big dies tend to have low yields, i.e. a higher share of defective dies per wafer. That's a long way off though; AMD would have to integrate a major part of the northbridge first.
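(A rough sketch of the die-size/yield relationship mentioned above, using the simple Poisson yield model; the defect density is made up purely for illustration and is not a real foundry figure.)

```python
import math

# Poisson yield model: fraction of good dies = exp(-defect_density * die_area).
defect_density = 0.4  # defects per cm^2 (assumed for illustration only)

for area_cm2 in (1.0, 2.0, 3.0):  # hypothetical die areas
    good_fraction = math.exp(-defect_density * area_cm2)
    print(f"{area_cm2:.1f} cm^2 die -> ~{good_fraction:.0%} good dies")
# Roughly 67%, 45% and 30%: the same wafer gives proportionally fewer
# usable chips as the die grows, which is the point being made above.
```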
 


The thing is already big and hot enough... It would be possible on a much, much smaller process, just not at this time. But the idea of Kaveri was to have 1.2 teraflops with 6 cores, GDDR5 and, I think, 768 GCN shaders... thanks to GlobalScrewndries, that was not possible.
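(Quick arithmetic on that 1.2 teraflop figure; the 768-shader count is the poster's recollection, and the clock below is simply back-solved from it, not an announced spec.)

```python
# Single-precision GFLOPS for a GCN part = shaders * 2 (one FMA per clock) * clock.
shaders = 768
target_gflops = 1200.0

implied_clock_ghz = target_gflops / (shaders * 2)
print(f"Implied GPU clock: ~{implied_clock_ghz * 1000:.0f} MHz")  # ~781 MHz
```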
 


As I mentioned before, AMD had early plans to make something like that: an ultra-high-performance 200-250W APU with an 8-core BD CPU and an iGPU of ~10000 GFLOPS, but I was told that the project was parked due to lack of funds.
 
Personally, I would like to see AMD come out with a hex-core APU equipped with a weaker (and therefore smaller) iGPU. I would love to set up a Linux machine running a Windows VM powered by 4 cores and a discrete card, while the iGPU powered the Linux OS.

In general, I appreciate the potential of APUs to provide moderate gaming performance and moderate general performance, but I would rather have a strong CPU and a weak iGPU. Even a weak iGPU can still provide good benefits for HSA stuff, but the stronger CPU would give the computer a sturdier foundation when confronted with traditional (and legacy) software.
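(A rough sketch of the VM setup described above: the host keeps the iGPU, while a Windows guest gets 4 cores and the discrete card via VFIO passthrough. The PCI address, memory size and disk image are placeholders, and the host-side IOMMU/vfio-pci preparation is assumed to be done already.)

```python
import subprocess

# Launch a KVM guest with the discrete GPU passed through via vfio-pci.
qemu_cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",
    "-cpu", "host",
    "-smp", "4",                          # 4 cores dedicated to the Windows guest
    "-m", "8G",                           # guest RAM (placeholder)
    "-device", "vfio-pci,host=01:00.0",   # discrete card's PCI address (placeholder)
    "-drive", "file=windows.img,format=raw",  # guest disk image (placeholder)
]
subprocess.run(qemu_cmd, check=True)
```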
 


Terrible idea!
 


HSA will still work for legacy software.

Also, how will a GPU execute x86 instructions? I don't think you know how these things function. No offence.

 


The same way an x86 CPU can run ARM programs or console emulators.

Plus, I remember reading that GCN was designed to actually be able to process some x86 instructions without much hassle.

Cheers! 😛
 
Don't forget, one of the smart things AMD did with Mantle was to provide better support for asymmetric dual GPUs, so that the iGPU of an APU can still contribute to system performance even when a far more powerful dGPU is installed. In theory, anyway. It might mean Mantle titles will play much better on a Dual Graphics A10 laptop than on an Optimus equivalent.
 


Well emulators are entirely different.

Though I had no clue there was something that converted x86 to a graphical API language. Could you point me in the right direction?
 


GCN is basically what Intel wanted to do with Larrabee but never could. The GPU race right now is kind of like things were back in the Athlon 64 vs Pentium 4 days, where AMD was doing something Intel couldn't. Except this time Intel doesn't have a Pentium 4 to compete with; they have a $600+ Pentium 1.

 