They are just saying they are comparing this to their own in-house stuff, I suppose, which could already be 40x slower than Intel's or AMD's approach. There's really no way to tell whether we should be impressed or not.
Chip companies design at the RTL level. Some go on to do custom layout. Even then, production of the photomasks is still probably about two steps removed from them. Nvidia wouldn't normally have had reason to implement their own mask computation, but probably got frustrated by delays in this stage when trying to get their latest designs and respins fabbed.
Whenever you read about research by Nvidia, you can nearly always find details on their blog:
developer.nvidia.com
If this were a research paper, they would say exactly what they used as a reference point. Because it's more of a product or service announcement, there doesn't seem to be a paper associated with it, as far as I can tell, though you can find details and watch the GTC session here:
developer.nvidia.com
On that page, they do state that this was developed in conjunction with industry partners. The bottom of the page features quotes from the CEOs of ASML, Synopsys, and TSMC. It doesn't get any bigger than that.
You're right to mistrust vendor-supplied performance claims. However, when it comes to actual research, you should always expect to find answers, so you needn't automatically jump to the worst-case assumption. In this case, it's not research, but rather a situation where these companies would have no reason to partner with Nvidia if cuLitho didn't solve a real pain point for them.