Can you really claim that the market will set the prices when AMD and Nvidia are slashing their production orders? That's deliberately constricting supply, which is always going to raise prices. When the two market leaders do it in concert, there are often investigations and racketeering charges.
There always was demand for graphics acceleration, except that at first, it wasn't economically feasible beyond character generation and there were no standards whatsoever for vendors or software developers to follow.
Huh? Of course there were standards, at least to the extent that anything in the PC was standardized. IBM defined MDA, CGA, EGA, and VGA. And they all supported more than mere character generation!
Again, you omit much. As part of the above standards, we got BIOS extensions for accessing them and performing simple operations. For drawing directly to the frame buffer, you could just write to the memory-mapped regions. And for the adventurous, you could directly manipulate the registers, as described in books like the classic:
Programmer's Guide to the EGA and VGA Cards (www.amazon.com)
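To make the "just write to the memory-mapped regions" part concrete, here's a minimal sketch of the classic approach (my own illustration, not from the book; it assumes a 16-bit DOS compiler such as Turbo C, the BIOS INT 10h set-mode call, and mode 13h's frame buffer at A000:0000):

/* Minimal DOS VGA sketch: set mode 13h (320x200x256) via the BIOS,
   poke a pixel straight into the memory-mapped frame buffer, then
   restore text mode. Hypothetical example for a 16-bit DOS compiler. */
#include <dos.h>
#include <conio.h>

static void set_mode(unsigned char mode)
{
    union REGS r;
    r.h.ah = 0x00;            /* BIOS INT 10h, function 00h: set video mode */
    r.h.al = mode;
    int86(0x10, &r, &r);
}

int main(void)
{
    unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0x0000);

    set_mode(0x13);              /* 320x200, 256 colors, linear frame buffer */
    vga[100 * 320 + 160] = 15;   /* plot a pixel at (160, 100)               */

    getch();                     /* wait for a keypress */
    set_mode(0x03);              /* back to 80x25 text mode */
    return 0;
}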
Then we entered the SVGA era, and graphics cards started adding nonstandard 2D acceleration features. VESA stepped in to try and organize the chaos with VESA VBE (Video BIOS Extensions), so that not every DOS program had to include custom support for every graphics chip on the market.
Of course, Windows had its own driver layer, and Windows apps used the GDI API. Windows NT actually shipped with native OpenGL support, since Microsoft was trying to position it as a proper workstation OS. Then, as part of Windows 95, Microsoft introduced DirectX (including DirectDraw). Direct3D wasn't ready in time, so it actually launched in 1996. I forget if DirectShow was included from day 1 or not.
3dfx wasn't even that early. There were already about a half dozen 3D accelerators on the market when they finally launched. The big deal about 3dfx is that they blew everyone's doors off. John Carmack was quoted as saying something like "3dfx actually delivered the kind of performance everyone else had only been promising." But they were so late that id Software had already finished porting Quake to Rendition's Verite (fun fact: it was the first fully-programmable PC 3D card, as it contained a RISC core) by the time 3dfx launched the first Voodoo card. Of course, with performance like that, a port to 3dfx's Glide API was soon to follow...
Another cool fact: SGI actually had a multi-card 3D graphics solution for the PC in 1991, well before any of the commodity 3D solutions hit the market! Not only that, it had a Geometry Engine and yet it took the commodity 3D cards 2-3 generations before they started doing hardware transform & lighting.
Your chronology is off. Nvidia's NV1 beat 3dfx to market by more than a year.
That said, the NV1 was practically garbage compared to the first Voodoo card. IMO, the NV1 very nearly killed Nvidia, especially with their huge gamble on quadrics, which didn't fit at all well into Direct3D. There's an excellent writeup of it here:
Fair points. I was trying to point out that Nvidia wasn't the first consumer 3D accelerator and that these cards didn't cost $50k in the mid '90s. Even the GLINT 300SX, which was out around 1993, wasn't $50k as the poster had suggested.
In 1995, when Nvidia was founded, what did workstation graphics cards cost? A Sun Ultra 1 Creator3D Model 170E was listed for $28k in 1995 (around $50k in today's money), though that's for the whole system rather than just the GPU (which doesn't appear to have been listed separately). Then there's the SGI Onyx, which was in the quarter-million-dollar range (in 1995 money!).
So yeah, even today we're definitely not paying workstation card prices.
Yeah, but in '99 we got the GeForce 256, the first accelerator marketed as a "GPU". I wonder how it would have performed against the Sun Ultra? It brought with it hardware T&L and leveraged AGP to move graphics power forward.
They can't get more profit, since used cards have also flooded the market.
The price gap is too wide.
They might end this year with under 10% net profit.
It's like any other tech of that time, be it CPUs or whatever. Moore's Law was very much alive and well, meaning roughly a 10x performance improvement over a period of 5 years. So, no comparison really.
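For what it's worth, the 10x-in-5-years figure falls straight out of the usual "doubling every ~18 months" framing of Moore's Law; here's a quick back-of-the-envelope check (my own illustration, not from the original post):

/* Sanity check of "roughly 10x over 5 years", assuming a doubling
   every 18 months (the common Moore's Law rule of thumb). */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double months    = 5.0 * 12.0;          /* 5 years          */
    double doublings = months / 18.0;       /* ~3.33 doublings  */
    printf("speedup over 5 years: ~%.1fx\n", pow(2.0, doublings));  /* ~10.1x */
    return 0;
}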
So, I was wondering how the annualized inflation rate of GPUs compared with that of CPUs. The methodology is pretty rudimentary, but here's what I found.
new price = old price × (1 + x)^n, where x is the annual inflation rate and n is the number of years between releases? For example, between gens:
Original-gen 980 Ti was ~$650
1080 Ti was $700
700/650 = (1 + x)^2 (2 = number of years between releases). If 3 years between releases, use ^3. 18 months? Use 1.5 for the power.
log(700/650) = 2 log(1 + x)
log(700/650) / 2 = log(1 + x)
I computed the nth root of the price ratio, where n was the number of years between the launch dates, then subtracted 1 and converted to a percentage. I suppose that's probably not the way a finance person would've done it.
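In code form, that calculation looks something like this (a rough sketch; the prices and the roughly two-year gap are just the figures quoted above):

/* Annualized GPU price inflation: nth root of the price ratio, minus 1.
   Prices and the 2-year gap are the figures from the posts above. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double old_price = 650.0;   /* GTX 980 Ti launch price (approx.) */
    double new_price = 700.0;   /* GTX 1080 Ti launch price          */
    double years     = 2.0;     /* years between the two launches    */

    /* solves old_price * (1 + r)^years = new_price for r */
    double rate = pow(new_price / old_price, 1.0 / years) - 1.0;

    printf("annualized price inflation: %.1f%%\n", rate * 100.0);   /* ~3.8% */
    return 0;
}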
BTW, you can copy & paste directly from Excel into the text box and it pastes as a table! Most of the other formatting is lost, however.
No. Nowhere near. I think you're confusing gross margins with net margins; chips don't design themselves, you know. Until around late 2016 (when GPUs began to be so popular for AI tasks), NVidia had net margins in the 10-15% range, sometimes even negative. They're now at 21% margins, but this year is, by all accounts, going to be far worse than last.
So one of NVidia's primary markets collapsed, along with the wider economy. Despite these headwinds, NVidia set record sales that year, and record profits ... and you consider this evidence to support your claim that their CEO "got stupid" and the company suffered?
There were something like 20 other graphics hardware manufacturers at the time. If Nvidia and 3dfx didn't exist, ATi, S3, SGI, 3DLabs, ArtX, Real3D, etc. would have picked up the slack eventually.
Sure. And if NASA hadn't walked a man on the moon, someone else would have picked up the slack eventually. We still give them the credit, though.
The GLINT 300SX was out two years before the Nvidia NV1, and the NV1 flopped. The 3dfx Voodoo came out a year after the NV1, and 3dfx was founded in 1994. The "eventually" was basically 12 months, if you consider the NV1 as Nvidia's coming-out party (which it wasn't).
So did the GLINT. More to the point, that chip was solely for the workstation-class graphics market, not games. The only card I remember it going into was the Gloria I (?), which was over $1000 (an enormous sum for the time) and had exactly zero game support. 3Dlabs didn't really hit the gaming market until they came out with the Permedia, four years later.
Still, if you wish to characterize that chip as Russia's Sputnik to NVidia's Riva TNT moon landing, I have no problem with that.
I guess after he started seeing those bonus checks, he thought to himself: FT, I like getting these checks. Plus, I can afford a new leather jacket each year. LOL
To anyone who justifies Nvidia's high & horrible card prices: I hope you end up buying your favorite Nvidia cards at double or triple the price in the future. You totally deserve to be milked by a company like Nvidia.
We can try to look at whether they can afford to cut prices on the current generation, separately from whether they misjudged the market and designed GPUs that overshot the sweet spot. I do think it's telling that they seem to have no trouble selling every RTX 4090 they can make.
Don't presume everyone who simply tries to understand their pricing is a fan of it. For instance, I doubt there's a profit maximizing way for them to cut 4000-series prices much from where they're at, but I won't be buying a 4000-series card myself.
The 4060 looks headed towards the $600 price point... pretty sure there is plenty of room for Nvidia to lower that price tag for a ~200 mm² die if it wants to. It doesn't want to, since it has no reason to. At least not until the market rejects the price points, shareholders start sweating from dwindling sales, and someone gets forced to admit sales are tanking because prices are way too damn high.
My intent was to say across-the-board cuts. I would definitely agree that outliers like the RTX 4080 should have room for price cuts.
For cuts further down the range... perhaps we'll have to wait until more of the 3000-series inventory burns off. Until then, the 4000-series will probably maintain a premium.
The tricky part is that you have to demonstrate a demand-sensitivity to any price cuts that would more than offset the reduced margins through increased volume. During recessionary times, there's certainly a lot of price-sensitivity, but maybe not enough to drive enough additional sales at the high end.
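To put rough numbers on that trade-off (all figures here are hypothetical, purely to illustrate the break-even math, not NVidia's actual costs): suppose a card sells for $1,200 with an $800 per-unit cost. A $100 cut shrinks the per-unit margin from $400 to $300, so unit volume has to grow by about a third just to keep gross profit flat:

/* Break-even volume growth for a price cut. All numbers are hypothetical. */
#include <stdio.h>

int main(void)
{
    double price     = 1200.0;  /* current selling price (hypothetical) */
    double unit_cost =  800.0;  /* per-unit cost (hypothetical)         */
    double cut       =  100.0;  /* proposed price cut                   */

    double margin_before = price - unit_cost;        /* $400 */
    double margin_after  = price - cut - unit_cost;  /* $300 */

    /* Volume must grow by this much for gross profit to stay flat. */
    double required_growth = margin_before / margin_after - 1.0;

    printf("volume must rise by ~%.0f%% to offset a $%.0f cut\n",
           required_growth * 100.0, cut);             /* ~33% */
    return 0;
}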
While prices will of course drop somewhat eventually, I don't believe we'll ever again see the rapid drops of the past. We're seeing a sea change in the economics of cutting-edge-node chip manufacture, affecting not just NVidia, but everyone. N3E is the first node in history where the majority of its configurations actually have a higher per-transistor cost than its predecessor node. It's too early to know whether N2 will be any different, but it seems likely to follow the same trend. Expect both AMD's and NVidia's next-gen boards to debut at even higher price points than the current series.
Now this is an interesting quandary. I was thinking about it myself, as I've been postulating that perhaps AMD and Nvidia opted for larger, higher-priced solutions than the current market is willing to support (having said that, the relative scarcity of the RTX 4090 would argue otherwise).
So, do they focus on trying to optimize their architectures to get better performance per transistor, maybe even cutting back on transistor count in the process? They could still offer performance improvements via clock speed increases and any "IPC" gains they can achieve. Or, maybe they hold at N4, somewhat like what we saw with multiple generations shipping on 28 nm?
They may worry about "price sensitivity" now, but what will it be 3-5 years down the line, when game developers look at what GPUs their target audience is using and 60+% of it is on RTX 2060s or worse? By pricing the entry level out of the budget of 50+% of the market today, Nvidia and AMD are forcing future game developers to either forfeit half of their potential audience or continue targeting 5-10 year old hardware.
This is going to be fun when AMD and Nvidia announce their next round of driver-support cuts for still widely used GPUs like the RX 480 and the GTX 10-series, which remain competitive against the hot garbage both companies try to shovel down people's throats at the low end today for $150+. This may not go well if they do so without providing definitive, legitimate successors for the budget crowd.