[quotemsg=16216474,0,764133]I think the economic viability is a big issue. Sounds like yields per hour, per etching station, will be low...and the cost of each station, high. Everything is harder...the wafer consistency presumably has to be that much better, for example...because the tolerances for irregularities are that much tighter.
And the other question is, is this even the right path to follow? Are there other solution paths that will be better for the medium term? There are parallels with hard drives; rather than increase platter density, they added more platters, and then they developed RAID.[/quotemsg]
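To put some rough numbers on the tool-cost worry, here's a quick back-of-the-envelope Python sketch. Every figure in it (tool price, depreciation period, uptime, wafers per hour) is a made-up assumption for illustration, not real fab data; the point is just how directly station throughput drives the amortized cost per wafer.

[code]
# Rough sketch: how tool cost and throughput drive cost per wafer.
# Every number here is an assumption for illustration only.
TOOL_COST_USD = 120e6      # assumed price of one advanced litho/etch station
TOOL_LIFETIME_YEARS = 5.0  # assumed depreciation period
UPTIME_FRACTION = 0.75     # assumed share of hours the tool is productive
WAFERS_PER_HOUR = 40.0     # assumed throughput per station

hours = TOOL_LIFETIME_YEARS * 365 * 24 * UPTIME_FRACTION
cost_per_wafer = TOOL_COST_USD / (hours * WAFERS_PER_HOUR)
print(f"tool cost amortized per wafer: ${cost_per_wafer:,.2f}")

# Halve the throughput (the "low yield per hour" worry) and the
# per-wafer share of the tool cost doubles:
print(f"at half throughput: ${2 * cost_per_wafer:,.2f}")
[/code]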
[quotemsg=16217562,0,764133][quotemsg=16217225,0,192459]
Meanwhile Nvidia and AMD are still using the ancient 28 nm process...
Because nobody will buy a $10,000.00 video card.[/quotemsg]
While I'll buy that a 22 nm chip might cost more, I don't see it being *that* much more in an environment where 22 nm fabrication was mature. You should see notable power consumption reductions and (eventually) more good dies per wafer from the smaller size.
That said: I do think the early 7 nm stuff may carry a price premium on this order, in part because of the significant technical issues, and in part because there isn't enough competition unless AMD can show the technical prowess to get down to at least 14 nm. I do think we will see 7 nm...eventually...but the timelines may be optimistic.[/quotemsg]
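As a quick sanity check on the quoted point about more dies per wafer: an ideal shrink from 28 nm to 22 nm scales die area by (22/28)^2, i.e. roughly 1.6x as many candidate dies per wafer. Real designs never shrink perfectly (I/O pads, analog blocks, and design rules don't scale linearly), so treat the arithmetic below as an upper bound.

[code]
# Idealized dies-per-wafer gain from a 28 nm -> 22 nm shrink.
# Real shrinks scale worse than this; it's an upper bound only.
old_node, new_node = 28.0, 22.0
area_scale = (new_node / old_node) ** 2  # fraction of the original die area
print(f"area scale: {area_scale:.2f}")                      # ~0.62
print(f"ideal dies-per-wafer gain: {1 / area_scale:.2f}x")  # ~1.62x
[/code]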
The 22 nm process was developed by Intel for CPUs, not GPUs. Processes are typically specialized for particular applications by each company, which is why NAND flash sits at a different node than CPUs, and why GPUs would have gone to 20 nm rather than 22 nm.
He was also talking about a 7 nm GPU, and right now the yield of a 7 nm CPU or GPU is probably zero, since what has been demonstrated is just SRAM, not an actual CPU.
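To illustrate why a small SRAM test chip can yield while a GPU-sized die yields essentially nothing, here's a sketch using the classic Poisson yield model, Y = exp(-A * D0). The defect density and die areas below are assumed values for illustration, not published 7 nm numbers; SRAM test arrays also use redundancy and repair, so in practice they tolerate defects that a big logic die cannot.

[code]
import math

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-A * D0), with A the die
    area in cm^2 and D0 the defect density in defects per cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

# Assumed numbers for illustration only (not published 7 nm data).
D0 = 1.0             # defects per cm^2, plausible for an immature process
SRAM_TEST_MM2 = 10   # small SRAM test macro
GPU_DIE_MM2 = 500    # big GPU-class die

print(f"SRAM test chip yield: {poisson_yield(SRAM_TEST_MM2, D0):.1%}")  # ~90%
print(f"GPU-sized die yield:  {poisson_yield(GPU_DIE_MM2, D0):.2%}")    # <1%
[/code]

The takeaway: at the same defect density, yield falls off exponentially with die area, which is why "it yields on SRAM" says very little about whether a 500 mm^2 GPU would yield at all.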