News US bans sales of 14nm and 16nm chips with over 30 billion transistors to China

I was a little curious to see just how large 30b transistors on 14nm would be....
AMD EPYC 7601: 4.8b (GF 14nm), 213mm².
Intel Xeon E5-2699A v4: 7.2b (Intel 14nm), 456mm².

Which gives a result of anywhere from ~1330mm² to ~1900mm².
I somehow doubt a chip that size would be feasible, unless it's chopped up into chiplets. (A monolithic die is capped by the ~858mm² reticle limit anyway, so it would have to be.)
Which brings me to my other point... what about chiplets?
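For anyone who wants to check the math, here's a quick back-of-envelope sketch in Python. The densities come from the two parts quoted above; the 858mm² figure is the standard 26mm x 33mm photomask exposure field, and the linear scaling is obviously naive:

```python
import math

# Back-of-envelope: how big is a 30b-transistor die on 14nm, scaled linearly
# from two known 14nm-class parts? (figures from the post above)
REFERENCE_DIES = {
    "AMD EPYC 7601 (GF 14nm)":       (4.8e9, 213.0),  # (transistors, mm^2)
    "Intel Xeon E5-2699A v4 (14nm)": (7.2e9, 456.0),
}
TARGET = 30e9                 # the export-control transistor threshold
RETICLE_MM2 = 26 * 33         # standard 26mm x 33mm exposure field = 858mm^2

for name, (transistors, area) in REFERENCE_DIES.items():
    density = transistors / area          # transistors per mm^2
    est_area = TARGET / density           # naive linear scaling
    chiplets = math.ceil(est_area / RETICLE_MM2)
    print(f"{name}: {density/1e6:.1f} MTr/mm^2 -> "
          f"~{est_area:.0f} mm^2 (>= {chiplets} reticle-sized chiplets)")

# -> AMD EPYC 7601 (GF 14nm): 22.5 MTr/mm^2 -> ~1331 mm^2 (>= 2 chiplets)
# -> Intel Xeon E5-2699A v4 (14nm): 15.8 MTr/mm^2 -> ~1900 mm^2 (>= 3 chiplets)
```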
 
This strategy isn't working. I am against a lot of things communist China does, but this strategy is just giving away our lead by massively incentivizing them to onshore all of their chip development. We are doing the opposite of what they did and acting like it will work out well. It's very dumb.
 
Actually, you don't know that it's not working. One, the incentive to onshore their R&D has ALWAYS been there (look at other industries -- software, autos, GPS, etc.), but it hasn't paid off because, two, developing chips isn't easy (or the Chinese, Russians, or Indians would've made much more progress by now). Three, allowing China access to high-performance AI chips could help them accelerate their own chip design, the way Nvidia and AMD use AI to help with theirs. And four, slowing China's development in this space gives the US time to plan how to deal with China once it passes the threshold of having sufficiently powerful AI infrastructure for military use.

Let me also add one thing, since Nvidia was furious enough to publish their criticism on their corporate blog: Jensen Huang has no reason to care about American competitiveness. Nvidia has too much money, talent, market influence, and too many partners to be capped by this. He uses that as an excuse to show support for the policy position he prefers (which would benefit his shareholders and him greatly).
 
There is plenty of evidence that since we did this, China has increased its investment in domestic chip development, and that they have gotten around these limitations in large quantities. It's the Russian oil sanctions all over again: on its face a good idea, but the long-term unintended consequences are worse.

edit: https://www.tomshardware.com/tech-i...for-this-nvidia-rivals-ai-processors-explodes

There's your evidence that it's not working: it incentivizes onshoring. We are doing the opposite of what China did to us economically.
 
Which gives a result of anywhere from ~1330mm² to ~1900mm².
I somehow doubt a chip that size would be feasible, unless it's chopped up into chiplets.
Which brings me to my other point... what about chiplets?
You forgot about Cerebras and similar wafer-scale or multi-reticle modules. Cerebras' WSE-2 alone packs 2.6 trillion transistors onto a ~46,225mm² wafer-scale die (albeit on 7nm).
 
There is plenty of evidence that since we did this, China has increased its investment in domestic chip development.
Nah, it's definitely working. It basically halted the progress of their leading-edge nodes. Using equipment they bought before the sanctions, they can do multi-patterning to get a 5nm-class node, but that ties up their limited leading-edge production capacity, and they'd probably rather just use it for 7nm wafers.
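To make that capacity trade-off concrete, here's a toy model in Python. Every number in it is an illustrative assumption, not real fab data; the point is just that each extra patterning pass on a critical layer eats scanner time that could have produced more wafers:

```python
# Toy model of the multi-patterning capacity trade-off.
# ALL numbers are illustrative assumptions, not real fab data.

SCANNER_EXPOSURES_PER_DAY = 4000  # assumed throughput of one DUV scanner

# Assumed total critical-layer exposures per wafer: a 5nm-class DUV node
# needs more multi-patterning passes than a 7nm-class DUV node.
EXPOSURES_PER_WAFER = {
    "7nm-class (DUV multi-patterning)": 80,
    "5nm-class (DUV, heavier multi-patterning)": 120,
}

for node, exposures in EXPOSURES_PER_WAFER.items():
    wafers_per_day = SCANNER_EXPOSURES_PER_DAY / exposures
    print(f"{node}: ~{wafers_per_day:.0f} wafers/day per scanner")

# With these made-up numbers, each 5nm-class wafer costs 1.5x the scanner
# time of a 7nm wafer -- which is why a capacity-constrained fab might
# prefer to keep cranking out 7nm instead.
```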

If you don't think about it too hard, you'd just see "more investment = failed policy". However, what China was doing was buying leading-edge hardware & software tools from Western companies and focusing most of their resources on using them. Now they're having to burn lots of time, money, and effort re-creating all of those technologies internally, which they didn't want to have to do right now.

It probably would've happened sooner or later, but this is forcing them to pause their progress for several years, until they become self-sufficient. By that point, they'll be at least 5 years behind everyone else and have to play catch-up, which will probably take at least another 5 years. It's not just me saying that.

edit: https://www.tomshardware.com/tech-i...for-this-nvidia-rivals-ai-processors-explodes

There's your evidence that it's not working: it incentivizes onshoring. We are doing the opposite of what China did to us economically.
LOL. Yes, of course they're going to buy whatever they can get. But did you actually look at the specs of those products? They're no match for what they were getting from Nvidia, and that's if they actually perform remotely like their specifications. Too often, that isn't the case -- most famously with the Moore Threads S80.