News: Nvidia Quietly Rolls Out GeForce RTX 4090 with New Die

I doubt Nvidia would spend the millions of dollars needed to re-tape-out an IC only to integrate ~$1 worth of external circuitry. There must be more to it. Creating a new chip and board ID just to drop max Vcore by 30mV seems excessive, unless Nvidia expects RMA liabilities to exceed the re-spin cost or has other changes wrapped into that presumed re-spin too.
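To put rough numbers on that trade-off, here's a hypothetical break-even sketch. Every figure below (re-spin cost, volume, per-RMA cost) is a made-up assumption for illustration, not an actual Nvidia number:

```python
# Hypothetical break-even sketch: when does a die re-spin beat eating RMA costs?
# All figures below are illustrative assumptions, not Nvidia's actual numbers.

RESPIN_COST = 5_000_000        # assumed one-time cost of new masks + revalidation (USD)
UNITS = 500_000                # assumed remaining production volume
RMA_COST_PER_UNIT = 800        # assumed average cost to process one RMA (USD)

def breakeven_rma_rate(respin_cost: float, units: int, rma_cost: float) -> float:
    """RMA rate above which the one-time re-spin pays for itself."""
    return respin_cost / (units * rma_cost)

rate = breakeven_rma_rate(RESPIN_COST, UNITS, RMA_COST_PER_UNIT)
print(f"Re-spin breaks even if the expected RMA rate exceeds {rate:.2%}")
# → with these assumptions, 1.25%
```

Under these (invented) numbers, even a low-single-digit failure-rate expectation would justify the re-spin, which is why the "RMA liabilities exceed costs" theory isn't crazy on its face.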

Exactly this. They found some kind of defect, vulnerability, or something major. If it were only the BOM, they would simply have done a Super refresh with said changes.
 
Exactly this. They found some kind of defect, vulnerability, or something major. If it were only the BOM, they would simply have done a Super refresh with said changes.
It doesn't have to be something 'major', as that would definitely have made it into notices somewhere. It could be more benign, like improved yields: say, the new chips hitting the same performance with 30mV less Vcore, which could pave the way for a 4090 Ti/Super with the old 1.1V Vcore and 10% higher stock clocks.
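For a sense of scale, a back-of-the-envelope dynamic-power estimate (P ≈ C·V²·f) shows what a 30mV drop from the speculated 1.10V Vcore would buy. The voltage and clock figures are the ones speculated in this thread, not official specs:

```python
# Back-of-the-envelope dynamic power scaling: P ≈ C * V^2 * f.
# The 1.10 V / 1.07 V figures are the thread's speculation, not official specs.

def dynamic_power_ratio(v_new: float, v_old: float,
                        f_new: float = 1.0, f_old: float = 1.0) -> float:
    """Ratio of dynamic power after a voltage and/or clock change."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# Same clock, Vcore dropped 30 mV from 1.10 V to 1.07 V:
print(f"{1 - dynamic_power_ratio(1.07, 1.10):.1%} less dynamic power at the same clocks")
# → roughly 5.4% savings
```

A ~5% dynamic-power reduction at identical clocks is exactly the kind of headroom that could be banked for a higher-clocked refresh SKU, as suggested above.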
 
My opinion: I highly doubt it was because of a defect. Revisions are made to GPUs all the time; AMD does the same thing. Imagine doing years of testing on a GPU (not a simple device) and then releasing it with a defect.
And don't bring up memory and capacitors. Sometimes that's just on the supplier's end, with bad quality control.
The GPU itself would most likely be working near perfection before they would consider releasing it to the public.
And we're talking about a company dedicated to video graphics. It is their specialty. It's unlikely they're going to ship a defect, at least not one that would force a change like this.
It could simply mean they're using different parts from different manufacturers now and had to make some slight changes to accommodate them.
It's funny how people just default to "it must be a defect".
Only an obvious AMD shill would say that. But if I'm wrong, then I'm not sure what kind of mind you have.
Simply saying something without any kind of verification is kind of misinformation.
 
Whatever the reason for the revision, the 4090's high cost means that, for the overwhelming majority of GPU buyers, the change is irrelevant.
I was going to comment something very much in line with yours, haha. Like... "Look, the beach has one less grain of sand in it!".

And to add to the above posts/comments: Nvidia is definitely being sus here. Not that they have any obligation to disclose anything, but it would still be interesting to know more. Maybe someone can check the naked dies and compare whether there are more changes than just power regulation?

I'll try my best to be optimistic and just think of the few bucks people in the market for a 4090 will get to save. Enough for a cup of coffee or something, right?

Regards.
 
Maybe someone can check the naked dies and compare whether there are more changes than just power regulation?
If you don't mind sacrificing two GPUs for science, stripping their dies down to the signal layers.

The other changes may be too subtle to find unless you know exactly where and what to look for. At the lowest end of the spectrum, it could simply be because TSMC published new primitives libraries and Nvidia decided the gains were worth a refresh.
 
My opinion: I highly doubt it was because of a defect. Revisions are made to GPUs all the time; AMD does the same thing. Imagine doing years of testing on a GPU (not a simple device) and then releasing it with a defect.
And don't bring up memory and capacitors. Sometimes that's just on the supplier's end, with bad quality control.
The GPU itself would most likely be working near perfection before they would consider releasing it to the public.
And we're talking about a company dedicated to video graphics. It is their specialty. It's unlikely they're going to ship a defect, at least not one that would force a change like this.
It could simply mean they're using different parts from different manufacturers now and had to make some slight changes to accommodate them.
It's funny how people just default to "it must be a defect".
Only an obvious AMD shill would say that. But if I'm wrong, then I'm not sure what kind of mind you have.
Simply saying something without any kind of verification is kind of misinformation.

Intel is in the business of making CPUs, and it's obvious you weren't in the industry, or around, when the Pentium FDIV bug happened. https://en.m.wikipedia.org/wiki/Pentium_FDIV_bug
 
My opinion: I highly doubt it was because of a defect. Revisions are made to GPUs all the time; AMD does the same thing. Imagine doing years of testing on a GPU (not a simple device) and then releasing it with a defect.
And don't bring up memory and capacitors. Sometimes that's just on the supplier's end, with bad quality control.
The GPU itself would most likely be working near perfection before they would consider releasing it to the public.
And we're talking about a company dedicated to video graphics. It is their specialty. It's unlikely they're going to ship a defect, at least not one that would force a change like this.
It could simply mean they're using different parts from different manufacturers now and had to make some slight changes to accommodate them.
It's funny how people just default to "it must be a defect".
Only an obvious AMD shill would say that. But if I'm wrong, then I'm not sure what kind of mind you have.
Simply saying something without any kind of verification is kind of misinformation.
NVIDIA would never release a defective product and then not back it up... like the defective nForce motherboard chipsets... oh wait... or the bad memory on 2080 Tis... cringe. Hold on, hold on, wait, they make great laptop chipsets that don't... da*n...
 
I doubt Nvidia would spend the millions of dollars needed to re-tape-out an IC only to integrate ~$1 worth of external circuitry. There must be more to it. Creating a new chip and board ID just to drop max Vcore by 30mV seems excessive, unless Nvidia expects RMA liabilities to exceed the re-spin cost or has other changes wrapped into that presumed re-spin too.

If the $1 external circuit is difficult to source and holds production of a $1600 board hostage...

Or cannot be obtained with consistent enough quality to prevent RMAs of your $1600 boards...


Are the design tools and their integration with the whole "fab system" making it easier/cheaper to make these incremental changes?

Is the new chip the same size?
 
If the $1 external circuit is difficult to source and holds production of a $1600 board hostage...

Are the design tools and their integration with the whole "fab system" making it easier/cheaper to make these incremental changes?
Comparators are about as "jellybean" a part as you can possibly get; there are usually 10+ sources of mostly interchangeable parts.

If you want to move even one transistor around in a chip, that is still thousands of man-hours of validation work and millions of dollars to produce new mask sets for every chip layer that changed. No amount of foundry integration can eliminate the need for new masks after a design change.