News Jacketless Jensen Promised "Inexpensive" GPUs Back in 2011


lmcnabney

Prominent
Aug 5, 2022
192
190
760
Can you really claim that the market will set the prices when AMD and Nvidia are slashing their production orders? This is called deliberately constricting supply which is always going to raise prices. When the two market leaders do it in concert there are often investigations and racketeering charges.
 

bit_user

Polypheme
Ambassador
There always was demand for graphics acceleration, except that at first, it wasn't economically feasible beyond character generation and there were no standards whatsoever for vendors or software developers to follow.
Huh? Of course there were standards, at least to the extent that anything in the PC was standardized. IBM defined MDA, CGA, EGA, and VGA. And all but MDA supported more than mere character generation!

Then we got the normalization of 2D acceleration that ultimately became DirectDraw on Windows followed by 3D acceleration.
Again, you omit much. As part of the above standards, we got BIOS extensions for accessing them and performing simple operations. For drawing directly to the frame buffer, you could just write to the memory-mapped regions. And for the adventurous, you could manipulate the registers directly, as described in the classic VGA programming books of the era.

Then, as we entered the SVGA era, graphics cards started adding nonstandard 2D acceleration features. VESA stepped in to try to organize the chaos with the VESA BIOS Extensions (VBE), so that not every DOS program had to include custom support for every graphics chip on the market.

Of course, Windows had its own driver layer, and Windows apps used the GDI API. Windows NT actually shipped with native OpenGL support, since Microsoft was trying to position it as a proper workstation OS. Then, as part of Windows 95, Microsoft introduced DirectX (including DirectDraw). Direct3D wasn't ready in time, so it actually launched in 1996. I forget if DirectShow was included from day 1 or not.

3D acceleration was going to happen regardless of Nvidia and 3dfx. They only get credit for pushing it harder, earlier, in a bid for early-adopter cash.
3dfx wasn't even that early. There were already about a half dozen 3D accelerators on the market when they finally launched. The big deal about 3dfx is that they blew everyone's doors off. John Carmack was quoted as saying something like "3dfx actually delivered the kind of performance everyone else had only been promising." But they were so late that id Software had already finished porting Quake to Rendition's Verite (fun fact: it was the first fully programmable PC 3D card, as it contained an ARM core) by the time 3dfx launched the first Voodoo card. Of course, with performance like that, a port to 3dfx's Glide API was soon to follow...

Another cool fact: SGI actually had a multi-card 3D graphics solution for the PC in 1991, well before any of the commodity 3D solutions hit the market! Not only that, it had a Geometry Engine, yet it took the commodity 3D cards 2-3 generations before they started doing hardware transform & lighting.

 
Last edited:
  • Like
Reactions: JamesJones44

JamesJones44

Reputable
Jan 22, 2021
664
596
5,760
Your chronology is off. Nvidia's NV1 beat 3dfx to market by more than a year.

That said, the NV1 was practically garbage compared to the first Voodoo card. IMO, the NV1 very nearly killed Nvidia, especially with its huge gamble on quadric surfaces, which didn't fit at all well into Direct3D. There's an excellent writeup of it out there, along with a gallery of the hardware, that's worth looking up.

Lucky for them, they learned from their mistakes and lived to fight another day.

Fair points. I was trying to point out that Nvidia wasn't the first consumer 3D accelerator and that such cards didn't cost $50K in the mid '90s. Even the GLINT 300SX, which came out around 1993, wasn't $50K, as the other poster had suggested.
 
  • Like
Reactions: bit_user

bit_user

Polypheme
Ambassador
So, I was wondering how the annualized inflation rate of GPUs compared with that of CPUs. The methodology is pretty rudimentary, but here's what I found.

Nvidia GPUs:

Model | Launch Date | Launch Price
GTX 1080 Ti | 2017-03-05 | $699
RTX 4090 | 2022-10-12 | $1,599
Diff (years) / price ratio | 5.60 | 2.29
Inflation, annualized | 15.91%


AMD GPUs:

Model | Launch Date | Launch Price
RX Vega 64 | 2017-08-14 | $499
RX 7900 XTX | 2022-12-13 | $999
Diff (years) / price ratio | 5.33 | 2.00
Inflation, annualized | 13.91%


Intel CPUs:

Model | Launch Date | Launch Price
Core i7-7700K | 2017-01-03 | $350
Core i9-13900K | 2022-09-27 | $589
Diff (years) / price ratio | 5.73 | 1.68
Inflation, annualized | 9.51%


AMD CPUs:

Model | Launch Date | Launch Price
Ryzen 7 1800X | 2017-03-07 | $499
Ryzen 9 7950X | 2022-07-27 | $699
Diff (years) / price ratio | 5.39 | 1.40
Inflation, annualized | 6.45%
 
Last edited:
  • Like
Reactions: MetalScythe
Around the time Nvidia was founded, what did workstation graphics cards cost? A Sun Ultra 1 Creator3D Model 170E was listed for $28K in 1995 (around $50K in today's money), though that's for the whole system rather than just the graphics hardware (which doesn't appear to have been listed separately). Then there's the SGI Onyx line, which was in the quarter-million-dollar range (in 1995 money!).
So yeah, even today we're definitely not paying workstation-card prices.

Yeah, but in '99 we got the GeForce 256, the first accelerator marketed as a "GPU." I wonder how it would have performed against the Sun Ultra? It brought with it hardware T&L and used AGP to move graphics performance forward.
 

korekan

Commendable
Jan 15, 2021
86
8
1,535
They can't get more profit, since used cards have also flooded the market. The gap between new and used prices is too wide. They might end this year with under 10% net profit.
 

bit_user

Polypheme
Ambassador
Yeah, but in '99 we got the Geforce 256. The first accelerator known as a GPU. I wonder how it would perform against the Sun Ultra?
It's like any other tech of that time, be it CPUs or whatever. Moore's Law was very much alive and well, meaning roughly a 10x performance improvement over a period of 5 years. So, no comparison really.
 
Last edited:
  • Like
Reactions: MetalScythe
So, I was wondering how the annualized inflation rate of GPUs compared with that of CPUs. The methodology is pretty rudimentary, but here's what I found.

Did you use....

new price = old price * x^n, where x is 1 + the annual inflation rate and n is the number of years between releases? For example, between gens:

Original-gen 980 Ti was ~$650
1080 Ti was $700

700/650 = x^2 (2 = number of years between releases; if 3 years between releases, use ^3; for 18 months, use ^1.5)
log(700/650) = 2 log x
(log(700/650)) / 2 = log x

x = inverse log((log(700/650)) / 2)

Inflation % per year = (x - 1) * 100
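
For reference, that log-based arithmetic works out like this in a quick Python sketch (the ~$650/$700 prices and the two-year gap are the example figures from this post, not exact launch data):

```python
import math

# Log-based form of the calculation above, using the example figures
# from this post (980 Ti at ~$650, 1080 Ti at $700, assumed 2 years apart).
old_price, new_price = 650.0, 700.0
years = 2  # assumed gap between releases

# log(new/old) = years * log(x)  =>  x = 10 ** (log10(new/old) / years)
x = 10 ** (math.log10(new_price / old_price) / years)

inflation_pct_per_year = (x - 1) * 100
print(f"{inflation_pct_per_year:.2f}% per year")  # roughly 3.8% per year
```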
 
Last edited:
  • Like
Reactions: MetalScythe

bit_user

Polypheme
Ambassador
Did you use....

new price = old price * x^n, where x is 1 + the annual inflation rate and n is the number of years between releases? For example, between gens:

Original-gen 980 Ti was ~$650
1080 Ti was $700

700/650 = x^2 (2 = number of years between releases; if 3 years between releases, use ^3; for 18 months, use ^1.5)
log(700/650) = 2 log x
(log(700/650)) / 2 = log x

x = inverse log((log(700/650)) / 2)

Inflation % per year = (x - 1) * 100
I computed the nth root of the price ratio, where n was the number of years between the launch dates. Then, subtracted 1 and converted to %. I suppose that's probably not the way a finance person would've done it.
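
For anyone who wants to reproduce the figures, here's a rough Python sketch of that approach (the helper name, the 365.25-day year, and the choice of example row are my own assumptions):

```python
from datetime import date

def annualized_inflation(old_price, new_price, old_date, new_date):
    """Nth root of the price ratio, where n = years between launches, minus 1."""
    years = (new_date - old_date).days / 365.25  # assumed average year length
    return (new_price / old_price) ** (1 / years) - 1

# GTX 1080 Ti -> RTX 4090, using the launch dates and prices from the table above
rate = annualized_inflation(699, 1599, date(2017, 3, 5), date(2022, 10, 12))
print(f"{rate:.2%}")  # roughly 15.9%, matching the table
```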

BTW, you can copy & paste directly from Excel into the text box and it pastes as a table! Most of the other formatting is lost, however.
 
Last edited:
  • Like
Reactions: MetalScythe

casa_is_cool

Commendable
Oct 20, 2020
8
5
1,515
I love this video. I've watched it at least 3 times over the years... even last year.

Not really "resurfaced." You could use it as a business case study.
 

casa_is_cool

Commendable
Oct 20, 2020
8
5
1,515
No. Nowhere near. I think you're confusing gross margins with net margins. But chips don't design themselves, you know. Until around late 2016 (when GPUs began to be so popular for AI tasks), NVidia had net margins in the 10-15% range, sometimes even negative. They're now at 21% margins, but this year is, by all accounts, going to be far worse than last.

That's right. Net income was only 12% of Net Revenue in Q3 2022

https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-third-quarter-fiscal-2023
 
  • Like
Reactions: Endymio

Endymio

Reputable
BANNED
Aug 3, 2020
725
264
5,270
No, I didn't contradict myself. Different time frames and quarters. When crypto collapsed, along with the economy, sales collapsed.
So one of NVidia's primary markets collapsed, along with the wider economy. Despite these headwinds, NVidia set record sales that year, and record profits ... and you consider this evidence to support your claim that their CEO "got stupid" and the company suffered?


There were something like 20 other graphics hardware manufacturers at the time. If Nvidia and 3dfx hadn't existed, ATi, S3, SGI, 3Dlabs, ArtX, Real3D, etc. would have picked up the slack eventually.
Sure. And if NASA hadn't walked a man on the moon, someone else would have picked up the slack eventually. We still give them the credit, though.
 
  • Like
Reactions: bit_user

JamesJones44

Reputable
Jan 22, 2021
664
596
5,760
So one of NVidia's primary markets collapsed, along with the wider economy. Despite these headwinds, NVidia set record sales that year, and record profits ... and you consider this evidence to support your claim that their CEO "got stupid" and the company suffered?


Sure. And if NASA hadn't walked a man on the moon, someone else would have picked up the slack eventually. We still give them the credit, though.

The GLINT 300SX was out 2 years before the Nvidia NV1, and the NV1 flopped. The 3dfx Voodoo came out a year after the NV1, and 3dfx was founded in 1994. The "eventually" was basically 12 months, if you consider the NV1 as Nvidia's coming-out party (which it wasn't).
 

Endymio

Reputable
BANNED
Aug 3, 2020
725
264
5,270
The GLINT 300SX was out 2 years before the Nvidia NV1, and the NV1 flopped.
So did the GLINT. More to the point, that chip was aimed solely at the workstation-class graphics market, not games. The only card I remember it going into was the Gloria I (?), which was over $1,000 (an enormous sum for the time) and had exactly zero game support. 3Dlabs didn't really hit the gaming market until they came out with the Permedia, four years later.

Still, if you wish to characterize that chip as Russia's Sputnik to NVidia's Riva TNT moon landing, I have no problem with that.
 
  • Like
Reactions: bit_user

Bamda

Distinguished
Apr 17, 2017
102
36
18,610
I guess after he started seeing those bonus checks, he thought to himself: FT, I like getting these checks. Plus, I can afford a new leather jacket each year. LOL
 
  • Like
Reactions: bit_user

Dean0919

Honorable
Oct 25, 2017
269
40
10,740
To anyone who justifies Nvidia's high and horrible card prices: I hope you end up buying your favorite Nvidia cards for double or triple the price in the future. You totally deserve to be milked by a company like Nvidia.
 

bit_user

Polypheme
Ambassador
To anyone who justifies Nvidia's high and horrible card prices: I hope you end up buying your favorite Nvidia cards for double or triple the price in the future. You totally deserve to be milked by a company like Nvidia.
We can try to look at whether they can afford to cut prices on the current generation, separately from whether they misjudged the market and designed GPUs that overshot the sweet spot. I do think it's telling that they seem to have no trouble selling every RTX 4090 they can make.

Don't presume everyone who simply tries to understand their pricing is a fan of it. For instance, I doubt there's a profit-maximizing way for them to cut 4000-series prices much from where they are, but I won't be buying a 4000-series card myself.
 

InvalidError

Titan
Moderator
For instance, I doubt there's a profit-maximizing way for them to cut 4000-series prices much from where they are, but I won't be buying a 4000-series card myself.
The 4060 looks headed towards the $600 price point... I'm pretty sure there is plenty of room for Nvidia to lower that price tag for a ~200 mm² die if it wants to. It doesn't want to, since it has no reason to. At least not until the market rejects the price points, shareholders start sweating over dwindling sales, and someone gets forced to admit sales are tanking because prices are way too damn high.
 

bit_user

Polypheme
Ambassador
The 4060 looks headed towards the $600 price point... I'm pretty sure there is plenty of room for Nvidia to lower that price tag for a ~200 mm² die
My intent was to say across-the-board cuts. I would definitely agree that outliers like the RTX 4080 should have room for price cuts.

For cuts further down the range... perhaps we'll have to wait until more of the 3000-series inventory burns off. Until then, the 4000-series will probably maintain a premium.

It doesn't want to, since it has no reason to. At least not until the market rejects the price points, shareholders start sweating over dwindling sales, and someone gets forced to admit sales are tanking because prices are way too damn high.
The tricky part is that you'd have to see enough demand sensitivity that the added volume from a price cut would more than offset the reduced margins. During recessionary times there's certainly a lot of price sensitivity, but maybe not enough to drive sufficient additional sales at the high end.
 
  • Like
Reactions: Endymio

Endymio

Reputable
BANNED
Aug 3, 2020
725
264
5,270
My intent was to say across-the-board cuts. ...For cuts further down the range... perhaps we'll have to wait until more of the 3000-series inventory burns off. Until then, the 4000-series will probably maintain a premium.
While prices will of course drop somewhat eventually, I don't believe we'll ever again see the rapid drops of the past. We're seeing a sea change in the economics of cutting-edge-node chip manufacturing, affecting not just NVidia but everyone. N3E is the first node in history where the majority of its configurations actually have a higher per-transistor cost than the predecessor node. It's too early to know whether N2 will be any different, but that seems doubtful. Expect both AMD's and NVidia's next-gen boards to debut at even higher price points than the current series.
 
  • Like
Reactions: bit_user

bit_user

Polypheme
Ambassador
Expect both AMD and NVidia's next-gen boards to debut at even higher price points than current series.
Now this is an interesting quandary. I was thinking about it myself, as I've been postulating that perhaps AMD and Nvidia opted for larger, higher-priced solutions than the current market is willing to support (having said that, the relative scarcity of the RTX 4090 would argue otherwise).

So, do they focus on trying to optimize their architectures to get better performance per transistor, maybe even cutting back on transistor count in the process? They could still offer performance improvements via clock speed increases and any "IPC" gains they can achieve. Or, maybe they hold at N4, somewhat like what we saw with multiple generations shipping on 28 nm?
 

InvalidError

Titan
Moderator
The tricky part is that you'd have to see enough demand sensitivity that the added volume from a price cut would more than offset the reduced margins. During recessionary times there's certainly a lot of price sensitivity, but maybe not enough to drive sufficient additional sales at the high end.
They may worry about "price sensitivity" now, but what will it be 3-5 years down the line, when game developers look at what GPUs their target audience is using and 60+% of it is on RTX 2060s or worse? By pricing the entry level out of 50+% of the market's budget today, Nvidia and AMD are forcing future game developers to either forfeit half of their potential audience or keep targeting 5-10 year old hardware.

This is going to be fun when AMD and Nvidia announce their next round of driver-support cuts for still widely used GPUs like the RX 480 and the GTX 10-series, which remain competitive against the hot garbage AMD and Nvidia still try to shovel down people's throats at the low end today for $150+. It may not go well if they do so without providing definitive, legitimate successors for the budget crowd.
 
  • Like
Reactions: JamesJones44