News: Nvidia Tackles Chipmaking Process, Claims 40X Speed Up with cuLitho

It is not clear NVDA's lithography is any better than Intel's. Don't expect any new design to result in a cheaper process; from past experience, it always costs more.
 
It is not clear NVDA's lithography is any better than Intel's. Don't expect any new design to result in a cheaper process; from past experience, it always costs more.
If you cut the time it takes to compute lithography masks by 40X, it does save millions of dollars on the tape-out process and opens the door to higher-precision optical modelling, which should improve yields at even smaller detail resolution on an otherwise fundamentally unchanged fab process. Being able to pump out new masks in a day instead of weeks also means much faster design iterations from first silicon to production. That reduces the amount of time engineers have to fill doing other stuff while waiting for silicon, which in turn reduces the amount of interim work that may get thrown away when silicon doesn't behave exactly as simulations predicted.

Of course, based on the last three years, there is practically zero chance Nvidia will pass any of the savings on to consumers.
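
To put rough numbers on the calendar argument, here's a quick Python sketch; the two-week baseline and the respin count are placeholder assumptions of mine, and only the 40X figure comes from the announcement:

```python
# Placeholder assumptions: a 2-week mask computation step and 3 respins.
# Only the 40x speedup figure comes from Nvidia's announcement.
weeks_per_run = 2
speedup = 40
respins = 3

hours_per_run_after = weeks_per_run * 7 * 24 / speedup
weeks_saved = respins * weeks_per_run * (1 - 1 / speedup)

print(f"Mask computation per run: ~{hours_per_run_after:.1f} hours instead of {weeks_per_run} weeks")
print(f"Calendar time saved over {respins} respins: ~{weeks_saved:.1f} weeks")
```

On those assumptions the mask step drops to roughly a third of a day per run and saves close to six weeks of calendar time across the respins, which is where the "faster iteration" value really shows up.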
 
40 times faster than something that was obsolete 10 years ago?
Doesn't everyone doing this kind of work use GPUs or a custom ASIC for this sort of array processing?
It's a marketing slide. It could be faster than anything else out there, or it might be the real reason that Intel is developing its own GPU family.
 
So that means cheaper production and lower prices for consumers? Right?

Well, not right away, because first you have to earn back the cost of 500 H100s at $33,000 - $38,000 a pop, an initial investment of $16.5 million - $19 million, and that doesn't begin to count the other system hardware, the manpower to set it up and run it, or the cost of electricity. In other words, return on investment will take a couple of years.

What it actually means is that the consumer market will have to compete against the commercial market for production time at TSMC. If demand is high for Nvidia's high-margin commercial products, and right now it definitely is because of things like this and GPT-style AI, then Nvidia really has no choice but to move production time over from consumer products to commercial products. And if they are going to release consumer products, the margins will have to be high or it's just not worth the manufacturing time.

Put yourself in their shoes ... You have two products: one nets you $1000 in profit and the other nets you $2000, but you only have the capacity to make 1000 total units. Are you going to mainly make the $1000-profit units or the $2000-profit units?
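
A quick back-of-the-envelope in Python on both halves of that argument; the GPU prices are the ranges quoted above, and the per-unit profits are the illustrative figures from the example, not real Nvidia margins:

```python
# Hardware outlay using the price range quoted above (rough estimates, not quotes).
num_gpus = 500
price_low, price_high = 33_000, 38_000           # USD per H100
outlay_low = num_gpus * price_low                # $16.5M
outlay_high = num_gpus * price_high              # $19.0M
print(f"GPU hardware alone: ${outlay_low / 1e6:.1f}M - ${outlay_high / 1e6:.1f}M")

# Opportunity cost of fixed fab capacity: every consumer unit made is a
# commercial unit not made (profits here are the illustrative figures above).
capacity = 1_000
consumer_profit, commercial_profit = 1_000, 2_000   # USD per unit
difference = capacity * (commercial_profit - consumer_profit)
print(f"Going all-commercial instead of all-consumer is worth ${difference:,} more")
```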
 
40 times faster than something that was obsolete 10 years ago?
Computational lithography at ~3nm is practically atomic-scale reverse ray-tracing to work out the mask pattern needed to turn a given EUV light field into a specific projected pattern. EUV to enable this sort of resolution didn't exist 10 years ago, and the amount of compute power necessary to do computational lithography at that resolution wouldn't have been economically viable. It may not even have been feasible at all until recently due to VRAM limitations.
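
To get a feel for why that takes so much compute, here is a toy sketch of the inverse problem involved. This is my own illustration, not cuLitho's actual algorithm, and it uses a plain Gaussian blur as a stand-in for the real EUV optical and resist models:

```python
# Toy inverse lithography: find the mask whose diffraction-limited image best
# matches the target pattern. Real OPC/ILT models EUV optics, resist chemistry
# and 3D mask effects; a Gaussian blur stands in for all of that here.
import numpy as np
from scipy.ndimage import gaussian_filter

N, sigma, lr, steps = 64, 2.0, 1.0, 500

# Target: a simple line/space pattern we want printed on the wafer.
target = np.zeros((N, N))
target[:, 20:24] = 1.0
target[:, 40:44] = 1.0

mask = target.copy()                             # naive mask: mask == desired pattern

for _ in range(steps):
    aerial = gaussian_filter(mask, sigma)        # forward model: optics blur the mask
    error = aerial - target                      # mismatch with the desired print
    grad = gaussian_filter(error, sigma)         # adjoint of the blur (Gaussian is symmetric)
    mask = np.clip(mask - lr * grad, 0.0, 1.0)   # gradient step, keep transmission in [0, 1]

naive_rms = np.sqrt(np.mean((gaussian_filter(target, sigma) - target) ** 2))
opt_rms = np.sqrt(np.mean((gaussian_filter(mask, sigma) - target) ** 2))
print(f"RMS print error, naive mask:     {naive_rms:.4f}")
print(f"RMS print error, optimized mask: {opt_rms:.4f}")
```

Scale that 64x64 toy grid up to full reticles with billions of mask pixels and a rigorous physical model, and it becomes obvious why fabs have been throwing tens of thousands of CPU cores (or now a few hundred GPUs) at this step.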
 
New techniques have emerged that now allow etching features smaller than the wavelength of the light used to create them.
I'm pretty sure that's not true. EUV is like 10 nm, right? From what I've gleaned, the smallest actual features on modern nodes are several times that size.

What's ironic about this is that it will almost certainly benefit AMD.

So that means cheaper production and lower prices for consumers? Right?
Even if the price stays the same, just shortening the time to market is going to be extremely valuable.

Well, not right away, because first you have to earn back the cost of 500 H100s at $33,000 - $38,000 a pop, an initial investment of $16.5 million - $19 million, and that doesn't begin to count the other system hardware, the manpower to set it up and run it, or the cost of electricity. In other words, return on investment will take a couple of years.
Who said anything about buying the hardware? Nvidia will rent you time on their cloud, I'm sure. If not, then probably the partner companies mentioned in the article would be the ones to buy the GPUs, as an added value for customers. Of course, it'll cost more for said customers, if they want the fast-turnaround option.

Put yourself in their shoes ... You have two products: one nets you $1000 in profit and the other nets you $2000
Your numbers are way off. H100 sells for like $18k, so their margins are probably at least half that. And they're certainly not making $1k profit on an RTX 4090.
 
I'm pretty sure that's not true. EUV is like 10 nm, right? From what I've gleaned, the smallest actual features on modern nodes are several times that size.
There seems to be something missing or misphrased at the end there: I hope your "several times that size" was meant as "several times smaller", as even Intel's 14nm process, which used 193nm DUV, had some sub-10nm features such as the 8nm fin width. Add the oxide layer between the fin and gate along with the gate itself and you get the 34nm fin pitch.

EUV is more of the same, using a 13.5nm light source to reduce the amount of multi-patterning and its associated costs and yield issues.
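
For anyone wondering how sub-wavelength features are possible at all, the standard Rayleigh resolution estimate puts numbers on it. The k1 and NA values below are typical published figures for immersion DUV and current 0.33-NA EUV scanners, used here purely for illustration:

```python
# Rayleigh criterion: smallest half-pitch a single exposure can resolve.
#   half_pitch = k1 * wavelength / NA
# k1 ~= 0.28 is close to the practical single-exposure limit.
def min_half_pitch(wavelength_nm: float, na: float, k1: float = 0.28) -> float:
    return k1 * wavelength_nm / na

duv = min_half_pitch(193.0, 1.35)   # ArF immersion DUV
euv = min_half_pitch(13.5, 0.33)    # current 0.33-NA EUV

print(f"DUV immersion single-exposure half-pitch: ~{duv:.0f} nm")
print(f"EUV single-exposure half-pitch:           ~{euv:.0f} nm")
```

That ~40nm single-exposure limit for 193nm light is why anything finer needs multi-patterning (plus heavy OPC compute), while 13.5nm EUV gets well below it in a single pass.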
 
If you cut the time it takes to compute lithography masks by 40X, it does save millions of dollars on the tape-out process and opens the door to higher-precision optical modelling, which should improve yields at even smaller detail resolution on an otherwise fundamentally unchanged fab process. Being able to pump out new masks in a day instead of weeks also means much faster design iterations from first silicon to production. That reduces the amount of time engineers have to fill doing other stuff while waiting for silicon, which in turn reduces the amount of interim work that may get thrown away when silicon doesn't behave exactly as simulations predicted.

Of course, based on the last three years, there is practically zero chance Nvidia will pass any of the savings on to consumers.

True, but 40X of what? I suspect Intel is pretty good at this, and the 40X is a comparison against working with TSMC specifically.
 
You would know if you read the article: 40X faster than a 40,000-server CPU farm at going from design files to computational lithography masks, turning weeks into hours.

They are just comparing this to their own in-house setup, I suppose, which could already be 40X slower than Intel's or AMD's approach. There really is no way to tell whether we should be impressed or not.
 
They are just comparing this to their own in-house setup, I suppose, which could already be 40X slower than Intel's or AMD's approach. There really is no way to tell whether we should be impressed or not.
Chip companies design at the RTL level. Some go on to do custom layout. Even then, production of the photomasks is still probably about two steps removed from them. Nvidia wouldn't normally have had reason to implement their own mask computation, but probably got frustrated by delays at this stage when trying to get their latest designs and respins fabbed.

Whenever you read about research by Nvidia, you can nearly always find details on their blog:



If this were a research paper, they would say exactly what they used as a reference point. Because it's more of a product or service announcement, I don't think there's a paper associated with it, though you can find details and watch the GTC session here:



On that page, they do state that this was developed in conjunction with industry partners. The bottom of the page features quotes from the CEOs of ASML, Synopsys, and TSMC. It doesn't get any bigger than that.

You're right to mistrust vendor-supplied performance claims. However, when it comes to actual research, you should always expect to find answers, so you needn't automatically jump to the worst-case assumption. In this case it's not research, but rather a situation where these companies would have no reason to partner with Nvidia if cuLitho didn't solve a real pain point for them.