I don't think you understand how computer prices work. They don't go up with time - they go down.
You have this thing about trends...
Trends are fundamentally descriptive, not prescriptive. Did you ever hear the standard investment disclaimer: "past performance is no guarantee of future results"? That applies quite broadly.
The proper way to use trends is to understand the underlying dynamics of a system. Once you've modeled those dynamics, you can make projections. The improper way to use trends is to expect them to continue forever. Simple extrapolation is a bad idea, because virtually every trend eventually breaks down. Pretty much the only trends immune to that are cosmological, like the expansion of the universe - and even that holds only in aggregate, not locally.
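To make that concrete, here's a toy sketch with made-up numbers (nothing to do with actual chip data): fit a simple growth rate to the early years of an S-shaped process, extrapolate it forever, and watch the error explode once the underlying dynamics saturate.

```python
import math

# Toy illustration only: a logistic ("S-curve") process that looks
# exponential in its early years, then saturates.
def true_value(year):
    return 1000 / (1 + math.exp(-(year - 15) / 3))

# A naive trend-follower fits a constant annual growth rate to the
# first five years and extrapolates it indefinitely.
y0, y5 = true_value(0), true_value(5)
annual_growth = (y5 / y0) ** (1 / 5)

def extrapolated(year):
    return y0 * annual_growth ** year

for year in (5, 10, 20, 30):
    actual, naive = true_value(year), extrapolated(year)
    print(f"year {year:2d}: actual {actual:8.1f}   extrapolated {naive:10.1f}   "
          f"error {100 * (naive - actual) / actual:+8.1f}%")
```

The extrapolation is nearly perfect at year 10 and off by orders of magnitude by year 30, which is the whole point: the fit describes the past, it doesn't prescribe the future.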
As for the specifics of historical tech pricing trends, they come down to things like:
- manufacturing nodes increasing in density faster than wafer price
- an uncharacteristically long period of low inflation
- a more competitive semiconductor manufacturing sector
Factors that no longer apply to the same degree, if at all.
Memory and storage have done nothing but drop in price as the years passed.
As a long-term trend, yes. In the shorter term, on the scale of a year or so, we've seen DRAM prices increase by nearly 2x from one year to the next. I'm not sure NAND has ever increased by that much, but it hasn't been the monotonically-decreasing trend you paint it as.
It also needs to be pointed out that NAND has been benefiting from two additional factors that don't apply to any other semiconductor product currently in production:
- multi-bit cells
- 3D cell structure
These two techniques have allowed NAND bit density to increase significantly faster than any other aspect of semiconductor technology over the same period. Therefore, it's wrong to use NAND as a standard by which to judge other chips.
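To show how much those two factors compound, here's some back-of-the-envelope arithmetic. The layer count, bits-per-cell, and lithographic-scaling numbers are round, illustrative values, not any vendor's actual specs.

```python
# Back-of-the-envelope only: round, illustrative numbers, not vendor specs.
# NAND bit density compounds three ways; logic chips (CPUs/GPUs) only get
# the lithographic term.

planar_slc = {"bits_per_cell": 1, "layers": 1,   "litho_scaling": 1.0}
modern_tlc = {"bits_per_cell": 3, "layers": 176, "litho_scaling": 2.0}

def relative_bit_density(cfg, baseline):
    return (cfg["bits_per_cell"] / baseline["bits_per_cell"]
            * cfg["layers"] / baseline["layers"]
            * cfg["litho_scaling"] / baseline["litho_scaling"])

gain = relative_bit_density(modern_tlc, planar_slc)
print(f"NAND bit-density gain vs. planar SLC: ~{gain:.0f}x")
print(f"...of which only {modern_tlc['litho_scaling']:.0f}x comes from the "
      f"kind of scaling that CPUs and GPUs also get")
```

Even with these rough numbers, the bulk of NAND's density gain comes from cell tricks that simply have no equivalent for CPUs or GPUs.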
Boards have tracked with inflation and CPUs have gone up a little bit, but nothing close to inflation. GPUs are the exception and most of their pricing has been due to shortages, crypto demand, and anti-competitive behavior (price fixing).
First, CPUs didn't have to respond to the crypto craze in the same way. You could host many GPUs running crypto from a single CPU, and even a fairly low-end one, at that.
Furthermore, with GPUs, I think Nvidia and AMD both saw what prices the market seemed willing & able to support. This informed their decision about how big to make their next generation.
Second, GPUs rely on die area much more than CPUs do for their performance. A $600 Raptor Lake i9-13900K is just 257 mm^2 on the Intel 7 node, whereas an RTX 4070 Ti's die is 294.5 mm^2 on a more expensive node. On top of that, the GPU has to include DRAM, a PCB, and a 285 W thermal solution, and there's a board maker applying their own markup between Nvidia and the channel. The bigger die on the pricier node makes GPUs much more sensitive to wafer pricing than CPUs.
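To put some rough numbers on that sensitivity, here's a quick dies-per-wafer sketch. The die areas are the ones above; the $6,000 and $12,000 wafer prices are placeholder guesses, not actual Intel or TSMC figures, and yield is ignored.

```python
import math

# Crude 300 mm dies-per-wafer estimate using a standard edge-loss
# approximation; defect yield ignored. Wafer prices are placeholder
# assumptions, NOT actual foundry pricing.
WAFER_DIAMETER_MM = 300

def dies_per_wafer(die_area_mm2):
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

chips = {
    # name: (die area in mm^2, assumed wafer price in USD)
    "i9-13900K (Intel 7, 257 mm^2)":   (257.0, 6000),
    "RTX 4070 Ti (AD104, 294.5 mm^2)": (294.5, 12000),
}

for name, (area, wafer_price) in chips.items():
    n = dies_per_wafer(area)
    print(f"{name}: ~{n} dies/wafer, ~${wafer_price / n:.0f} of silicon per die")
    print(f"  a 10% wafer-price hike adds ~${0.10 * wafer_price / n:.2f} per die")
```

With these assumptions, the GPU die already costs a bit over twice as much silicon as the CPU die, so any given percentage increase in wafer prices hits the GPU's bill of materials harder in absolute dollars - before you even add the memory, board, cooler, and partner markup.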