AMD CPU speculation... and expert conjecture

Status
Not open for further replies.
Since there is no way any AMD solution or step will ever appease you, maybe they should just go out of business; to some people, AMD's position is one of continual bleakness. You say they're not trying: how is having a diversified product line and an actual marketing plan not trying? Nvidia and Intel (but in this case Nvidia) are in a position where they can bin excess silicon at lower cost and the losses will not be as hard felt, but by any principle of business, how is selling at a loss sound? That is like our business selling components off below VAT pricing: sure, the first 6, maybe 12, months will be fine, but months 12-24 will start to see us carrying debts, forcing liquidation. Same principle: Nvidia priced its 660 family in the $250-350 bracket, so selling it at $150-180 is a loss.

Because you gain market share, and once they pick a brand, customers are typically very slow to switch to another one. Plus, it hurts their rivals' ability to compete, which will eventually cause them to release sub-par products, allowing for greater profit (for NVIDIA) later on.

In short: Losses now to make the money back (and then some) later.
 

aren't you the one who said (repeatedly) that power consumption doesn't matter? look at you, trying to use power efficiency to champion a gfx card. :D you finally going green (geddit?), again? :p disclosure - i'd never choose a 192-bit cut-down gk106 over a 256-bit pitcairn... scratch that, never gk106 over pitcairn. but it's not about my preference.
"removing the 7750, 7770" - what does that mean? are those gonna be phased out as well? that's new.
7790 is a good gpu on its own, but the 7850 1gb is close enough in price so it's better. 7790's "85% performance" is delivered at half the bandwidth. that's why i called it crippled. for example, the radeon hd 6850 was less than $150 and it had a 256-bit bus.
right now, the 7790 kinda has nowhere to go but south of $150. it hasn't even launched yet and its competition has already started selling, along with non-ref versions. 1gb 7850s are available in retail as well, so those are clear favorites. but by the time the 7790 does actually become available (around april), it'll have to contend with the as-cheap-but-higher-performing $150 gtx 650 ti boost 1gb, if nvidia's pricing holds true (it did for the 2gb version). moreover, non-ref versions of the 7790 will likely sell for over $150, possibly up to $180. that's why phasing out the 7850 1gb in favor of the 7790's sole presence in that price bracket ($150-180) is a bad idea imo.
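to put rough numbers on that bandwidth gap, here's a quick sketch; the bus widths and memory data rates are assumed from launch-day specs, so treat the figures as illustrative:

```python
# peak GDDR5 bandwidth in GB/s = bus width in bytes * effective data rate (MT/s) / 1000
def gddr5_bandwidth_gbs(bus_bits, data_rate_mtps):
    return (bus_bits / 8) * data_rate_mtps / 1000

# assumed spec figures: 7790 = 128-bit @ 6000 MT/s, 7850 = 256-bit @ 4800 MT/s
print(gddr5_bandwidth_gbs(128, 6000))  # 96.0 GB/s
print(gddr5_bandwidth_gbs(256, 4800))  # 153.6 GB/s
```

so even with faster memory chips, the narrower bus leaves the 7790 well behind the 7850 in peak bandwidth.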

again with the exaggeration... :no: 'appeasing' me has nothing to do with amd's gain or loss. :)

diversified is one thing, like developing gddr5 and hsa while selling mobile apus. but designing and selling 3 types of apus (trinity class, kabini and temash class) for 2-3 different platforms (dt and mobile), all aimed at the same sub-$150 segment, along with designing and selling cpus aimed at the sub-$150 market and mid-range dt, while designing and selling (dt and mobile) gpus from 2 different gcn versions - that's not diversifying, it's stretching oneself thin. and that's before providing software support for all those products, out of amd's own financial resources.

as far as i can remember, gk106 cards have always been available so yields don't seem to be an issue. if yields were bad, then selling lower binned gpus at even lower price may be lossy. i guess quarterly earnings reports will say what i need to know.

edit: i've been waiting to use that stupid pun ever since you started talking about power consumption. this seemed to be a good time to slip that into. teeheehee :pt1cable:
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


That is very interesting.

By the logos, it seems AMD really intends to get into the DRAM and SSD markets. After all, they invented GDDR5 (yes, including some IP from Rambus, for which they paid a fortune). Perhaps they can come out with a general DDR6... meaning not only for graphics but generically applicable to DIMMs too... and for SSDs perhaps they could come out with a better controller...

This will push Intel for sure... and finally bring some real competition to a market where SandForce and the obsolete JEDEC standards have reigned.
 

hcl123



I'm hoping they would jump onto 28nm FD-SOI, of STMicro origin but fabbed at GF. This could be very interesting for competition's sake.
http://www.electronicsweekly.com/articles/11/06/2012/53867/globalfoundries-opens-up-28nm-20nm-fd-soi-process-to-all-comers.htm

And all things considered it could be cheaper than FinFET on plain bulk
http://www.advancedsubstratenews.com/2012/11/ibs-study-concludes-fd-soi-most-cost-effective-technology-choice-at-28nm-and-20nm/

And it could provide quite a bit better performance at low/medium power (example):
http://www.advancedsubstratenews.com/2013/02/fd-soi-arm-based-smartphone-chip-hitting-3ghz-in-barcelona-but-wait-its-the-low-active-standby-power-0-6v-for-1ghz-thats-really-amazing/

Perhaps that's the reason they paid GF a fortune to end any "exclusivity" supply deal... perhaps in the near future (maybe already in 2014 at 28nm FD-SOI) we could see top GPGPU chips on FD-SOI (it's cheaper than 28/20nm bulk anyway)... matter of fact, 28nm FD-SOI in the STMicro gate-first modality is equivalent to TSMC's 26nm bulk gate-last, so they could get a 10 to 15% shrink, and best of all the potential for 1.5GHz GPU chips (this until the jump to 20nm FD-SOI, or a direct jump to 14nm/20nm(BEOL) FinFET).

APUs on 20nm FD-SOI would be a killer even against 14nm FinFET counterparts... back-gate body biasing seems to be shaping up as a killer feature for low/medium power (which is almost impossible for FinFET)... node size is not even half of the story now; FD-SOI is turning out way better than anyone expected, and it has the potential to be much better than Intel's bulk techs...


 

truegenius

Distinguished
BANNED
^
but a faster clock speed will result in more bandwidth, and this increased clock speed will minimize the effect of latency :??: ?

after some benchmarks in maxxmem i found that raising speed while loosening timings results in around the same latency.
example: 1333/7 is approximately equal to 1600/9 (speed/latency) in latency, but the latter has more bandwidth with around the same overall latency :??:
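that matches the usual back-of-the-envelope math: absolute latency in nanoseconds is CAS cycles divided by the i/o clock, which is half the data rate. a small sketch using the 1333/7 and 1600/9 pairs from above:

```python
# absolute CAS latency in ns: the data rate (MT/s) is double the i/o clock,
# so one clock cycle lasts 2000 / data_rate nanoseconds
def cas_latency_ns(data_rate_mtps, cas_cycles):
    return cas_cycles * 2000 / data_rate_mtps

print(cas_latency_ns(1333, 7))  # ~10.5 ns
print(cas_latency_ns(1600, 9))  # ~11.25 ns
```

so 1600/9 is only marginally worse in absolute latency while carrying ~20% more peak bandwidth, which is why the two feel about the same in latency benchmarks.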

 

hcl123



They already have HyperTransport at 4GHz, i.e. 8GT/s links (double what you point to). It was supposed to get into this first Vishera iteration, but it seems it didn't. *HyperTransport* is essential for the HSA vision; matter of fact, the only HSA hardware standard published so far is AMD's IOMMUv2, which is based on HT... matter of fact, all their chips since long ago, including all recent APUs with PCIe connections, are HT-based in their internal Xbar; all those chips, from server to DT to APUs, have for their internal Xbar interconnect a sophisticated HT switch (work pioneered by Jim Keller long ago; at least the patented IP for this work has his name all over it).
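For scale, the raw link math works out like this (assuming the common 16-bit HT link width; a sketch, not official figures):

```python
# HyperTransport peak bandwidth: transfers/s * link width in bytes, per direction.
# HT links are full duplex, so the aggregate is double that. 16-bit width assumed.
def ht_bandwidth_gbs(transfers_gtps, link_bits=16):
    per_direction = transfers_gtps * link_bits / 8
    return per_direction, 2 * per_direction

print(ht_bandwidth_gbs(6.4))  # (12.8, 25.6) GB/s - today's HT 3.1 links
print(ht_bandwidth_gbs(8.0))  # (16.0, 32.0) GB/s - the 8 GT/s figure above
```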



I think they could fit 3 modules / 6 cores in those APUs. After all, those will have double memory controllers, i.e. a 256-bit pad requirement, which would make them already bigger than the Trinity chips, no matter if 28nm FD-SOI, *IF it happens*, represents a comparative shrink (28nm bulk at GF certainly will be bigger).

Yet the FX could go a different direction... the Steamroller for APUs has only one FlexFPU, which is 30% smaller than the previous one, making a module smaller than in Vishera/BD even if other features and structures are added/improved, simply because the FPU was the single largest structure in those modules. I think that for FX they could put 2 FlexFPUs per module, or one per core... at least in FP power this represents a ~"doubling", and it could be quite ahead of Haswell in FP power (integer is a different story). APUs will count much more on the GPGPU factor, so only one FPU, but server/DT could be different.
 

hcl123



heck! it was already done long ago for mobile GPUs (replace the GPU with an APU, adding the *normal* DIMM connection requirement) SEE THE IMAGE...
http://www.techpowerup.com/img/11-05-03/17a.jpg

Matter of fact, Intel's Crystalwell is similar, or even more sophisticated, with a more SRAM-like interface; everything will be "on socket", no need for more slots on the mobo...


 


It was AMD running rampant in the $100-$300 bracket that made Nvidia drop a part they never intended to sell so cheap into the budget segment. This will invariably eat further into the GTX 660 family, albeit I do agree it was needed: with Radeons getting stronger by the driver, factory-clocked and non-reference PCBs saw Radeons gaining ground in the gaming segment. Nvidia reacted with the Titan and 650 Ti Boost; both are good products, but both represent Nvidia getting desperate.



1) I always said power matters in the budget segment, where a lot of users dwell.

2) Temash, Kabini, Richland: yes, all APUs, but they represent completely different markets. Consoles, graphics cards, solid state drives, memory, server-grade ULV APUs, professional cards, Sky-series GPUs: they are all different products, and none of it has anything to do with spreading thin. Again, AMD have had the busiest product release period in years, and it has actually been marketed very well.

3) All things considered, when the quarterlies come I am sure AMD will have registered a positive value, hence the rise in its market rating since the final report last term.

Kaveri is very important to AMD, and here is my reasoning: providing budget builders with the performance of existing entry-level gaming rigs (i.e. i3 + 650 Ti/7790, or FX-4300 + 650 Ti/7790) on die, throwing in Dual Graphics together with the candy store of features the A platform offers, could altogether render Intel and Nvidia irrelevant in the sub-$500 builder's market, which probably represents about 55% of the PC gamer market. This is why I believe AMD have kept this like Fort Knox; it could be the proverbial "float like a butterfly, sting like a bee". Six years ago this was AMD's vision; now it's time to take it to a new level.

From what I have seen, read, heard or consulted, all APUs will feature DIMMs plus integrated GDDR5 on die. I would suppose that means shortened data paths to feed the iGPU serious bandwidth; do that and performance will be monumental. The x86 component uses the main system RAM. If AMD didn't think it would work they wouldn't have done it; this isn't Disneyland. Believe it or not, this has been a project for some time and some very gifted people are involved, so instead of "ermagherd it's going to fail", let's just wait and see. I have tested AMD AMP-profile RAM, and going from XMP to AMP is worth a few percent of performance. Also, loosening timings to 11-11-11-30 or tightening to 8-9-8 has absolutely no effect at all, while raising the speed sees around 15% more performance per step, i.e. 1066-1333-1600-1866-2133-2400.
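As a sanity check on the "15% per step" figure: peak bandwidth scales linearly with the data rate, so each DDR3 step brings roughly 13-25% more raw bandwidth. A quick sketch, assuming a standard dual-channel 64-bit setup:

```python
# dual-channel DDR3 peak bandwidth per speed grade, plus the step-to-step gain
def dual_channel_gbs(data_rate_mtps, channels=2, bus_bits=64):
    return data_rate_mtps * (bus_bits / 8) * channels / 1000

steps = [1066, 1333, 1600, 1866, 2133, 2400]
prev = None
for s in steps:
    gain = "" if prev is None else f" (+{(s / prev - 1) * 100:.0f}%)"
    print(f"DDR3-{s}: {dual_channel_gbs(s):.1f} GB/s{gain}")
    prev = s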

 

hcl123



AFAIK Kabini and Temash are the same chip... one variant pushed to lower power with some features/cores fused off, the other pushed to higher clocks.

 

hcl123



Yes, speed can compensate well for worse timings. For a CPU on a GDDR interface you also have lots of cache tricks to compensate (prefetch-into, block pinning, adaptive replacement policies).

 
Wider-and-slower vs. narrower-and-faster: either way you add latency.
I have seen a few possibilities as to how the access can change, which was prohibitive in the past as being too costly, but stacking et al. is part of the answer here.
Having a multi-MC design with 2 types of RAM I've seen as well.
I've also heard the console makers want/see unified memory down the road...
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Ultrabooks don't seem to be doing very well.

I see all types of these micro PC books. Kabini/Temash would take these to a new level at the same price point or lower.

56-173-044-TS


56-119-050-Z01


 

Cazalan



There's already a JEDEC registration for a DDR4/GDDR5M SODIMM: 256 pins.

http://www.jedec.org/standards-documents/results/ddr4

Registration - DDR4 and GDDR5M Small Outline Dual Inline Memory Module (SODIMM), 256 pin, 0.50mm pitch Socket Outline Item No. 14-145



The question is does AMD have the funds to have someone manufacture these in volume, or would an OEM take on that risk? They could start selling the SoCs bundled with a pair of GDDR5 SODIMMs.


 

yeah. intel won't lower cpu prices and customers won't buy overpriced netbooks. it's not like the target demo knows the tech details about ultrabooks. those people are buying tablets, phablets and smartphones instead.
the ones you linked belong to nettops/barebones afaik, not portable pcs; those are an even smaller niche. kabini and temash (and previously brazos 1, 2.0) can fit this form factor easily. though amd is gonna use them to compete for design wins like the surface pro or asus'/lenovo's tablets.
 

Cazalan



That would be consistent with AMD's luck. They pay a fortune to break their GF contracts and then GF finally gets a decent process.

 

Blandge

Distinguished
Aug 25, 2011
316
0
18,810


Intel's finFETs are fully depleted.

Planar transistors are hell to make fully depleted. In their traditional state, there's a lot of room in the silicon substrate to house errant voltages. You can, however, add an oxide insulator below the source and the drain to create what's called a partially depleted silicon-on-insulator (PDSOI) design.

You can go all the way to fully depleted (FDSOI) without going FinFET by depositing an extremely thin SOI layer on top of the oxide – but reaching full depletion this way is quite expensive. According to Bohr, baking a chip this way adds at least 10 per cent to the total wafer cost.

Tri-Gate, by comparison, is cheap. "Another nice thing about the Tri-Gate devices," Bohr says, "is that they're not that expensive to add. Compared to a planar version of 22 nanometers, Tri-Gate transistors add only about 2 to 3 per cent cost to the finished wafer."

The Tri-Gate way of reaching full depletion is to stick that silicon fin up into the gate and have the inversion layer on both sides and the top of the fin, with the high-k metal oxide of the gate snug against the inversion layer. Presto – no room for nasty voltages to accumulate, plus the larger wrap-around inversion layer and metal-oxide interface allow for more current, and thus better performance.

http://www.theregister.co.uk/2011/05/09/intel_tri_gate_analysis/page3.html
 

hcl123

But there are also the SuVolta techs, labeled "deeply depleted"... yet no "fins"...

No, that "depletion story" also doesn't tell the whole picture. Besides, the deeper the depletion, the worse it is for high clocks... but the better for low power... you just can't have both...

The beauty of FD-SOI, as well as SuVolta, is that it allows body biasing, which permits multiple "Vt" levels in a dynamic way on the same chip... and which is almost impossible for FinFET...

So body biasing is the best approximation to having low power yet a good dose of high clock (STMicro's 4-core ARM CPU is phone-level power yet is close to 3GHz).

Cherry on top of the cake: FinFET is much more complicated and so more expensive to fab, especially on plain bulk with complex "isolation" techs (for Intel it's no big deal because the pockets are deep). FinFET could be better for high clocks (it depends on how FD-SOI scales), but even on this point IBM is recommending FinFET on SOI or FinFET on oxide (FOx).
 

Blandge



Bohr actually stated that finFETs are cheaper per wafer than planar FD-SOI.

You are right, however, that finFET on SOI would decrease leakage; but in another interview, Bohr states that Intel initially considered finFET with SOI for high-volume manufacturing at 22nm, and the benefits were not worth the additional cost per wafer.
 

hcl123



ummm... DDR4 and GDDR5 are basically the same prefetch interface, but GDDR5 could be better for low power since it doesn't require that "central buffer chip" on a DIMM to ensure the point-to-point topology... i think... which is necessary if you want more than one DIMM per channel...

Basically DDR4 is low-level GDDR5 on a "load reduced"-like interface.

I think the trend, since most of the DRAM will be "on socket" (at least the most pertinent part of it), is that most of these implementers will diverge from the JEDEC interfaces.

You can always have "exterior slots" for expansion yet keep a good deal of DRAM "on socket" that serves to accelerate main memory the way Flash accelerates HDD operations, as in some variants already on the market...

Since this DRAM is kind of private, no slots or standards will be imperative for it, only an extension to those standards... i think the sky is the limit lol...

Intel has Crystalwell, AMD is more about graphics techs, others follow WideIO/HBM specs with variations... i think DDR4 or DDR3 or whatever for main memory expansion will lose a very large part of its importance (the MSFT Xbox, along with the Wii U, uses 32MB of SRAM-like memory, which already makes a very good case that it matters much less whether the main memory expansion is DDR3 or 4 or GDDR5... me thinks...).

 

hcl123



No, the wafers are notoriously cheaper for plain bulk, but the tech, OTOH, is notoriously more expensive (especially if in those numbers you include "variability").

Bohr is Intel, and Intel for all purposes is also already a foundry, with customers and wanting more... so there is a lot of propaganda going on, and more to come from every side...

Intel on SOI... it should have happened already at 22nm, as many experts said, but I think it will only come after 14nm, because *they will have no other choice*; that is, good isolation techs for nodes smaller than 22nm are getting prohibitively complicated and expensive (especially including the results of "variability").

Intel could have been on SOI long ago... but only if they had been a leading house in SOI wafer making... it's not only a question of NIH, but also of cost, availability, **CONTROL** and volume...

Volume is here already, with 3 suppliers at big volume (which was always the main concern)... the rest of the bulk vs SOI debate is a lot of propaganda and bickering... SOI is notoriously better (proven beyond doubt; but "reasonable" depends on the POV and interests of any particular implementer lol).

[EDIT: also, putting an "SOI" wafer or a "bulk" wafer on those fab machines by itself doesn't mean much; you have to set your targets for cost, power, yields and clocks... and any of those can conflict with the others; pushing one up can mean pulling the others down. It may be that your targets are met with either wafer type, but in general clocks go hand in hand with power, and yields with costs. The main beauty of FD-SOI is that it permits multiple targets on the same chip (multi-Vt)... well, a good approximation in any case... usually you'd have to have 2 chips, one targeting low power and the other higher clocks, so this has the upside of lowering costs a good deal, throwing the price of the wafer to the irrelevant side...]

 
Also, if you look at the pie chart from my link, you'll notice a few things there as well.
Growth in certain segments, 3x, is to be expected.
This doesn't mean, however, that the rest of the pie has shrunk; it only allows for a more timely and, when needed, aggressive approach.
This allows them some leeway as to when product goes out the door: when it's ready, not just to have it out there, as we have seen before, thus the aggressiveness.
The new leadership is making changes, and I believe they are for the better.
 
it took them 5 days to translate that?! :O :p
anyway, looks like amd will put more emphasis on non-'traditional' computing; i am guessing 'traditional' means the desktop x86 market. it also looks like amd expects the console deals to pay off in a big way.
 