It's strange that you don't understand this: when you work on a screen, if it's not a video signal, the picture should be 100% intact, with no loss anywhere. That's why DSC is such a deception, like DLSS and like processor caches: tricks for when there isn't enough performance or bandwidth. I don't want to see an approximation of something; I want to see 100% picture integrity, exactly as it was in the source. So forget about DSC when it comes to work.
I suppose someone looking at radiology images or doing professional graphics work would like some assurance they're never going to see compression artifacts introduced by their display pipeline. That said, I think even the best LCD panels can only do like 10-bit color, natively. The way most do 10-bit, and probably how all of them do 12-bit, is via FRC. So, that's a little bit of a cheat and introduces some image noise that isn't in the source. And don't even get started on array backlighting.
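For anyone unfamiliar with FRC (frame rate control): the panel approximates an in-between color level by rapidly alternating between the two nearest native levels across successive frames, so the time-average lands on the target. A minimal sketch of the idea, with illustrative numbers rather than any real panel's algorithm:

```python
# Sketch of FRC (temporal dithering): an 8-bit panel approximates a
# 10-bit level by alternating between adjacent 8-bit levels over time.
# The 4-frame cycle here is illustrative, not any panel's real firmware.

def frc_frames(level_10bit: int, num_frames: int = 4) -> list[int]:
    """Return the 8-bit level to show on each of `num_frames` frames so
    the time-average approximates the 10-bit target (level_10bit / 4)."""
    base = level_10bit // 4        # nearest lower 8-bit level
    remainder = level_10bit % 4    # distance toward the next level (0..3)
    # Show the higher level on `remainder` out of every 4 frames.
    return [min(base + 1, 255) if i < remainder else base
            for i in range(num_frames)]

frames = frc_frames(513)           # 10-bit level 513 = 8-bit 128.25
print(frames)                      # [129, 128, 128, 128]
print(sum(frames) / len(frames))   # 128.25 -- averages to the target
```

The flicker between adjacent levels is exactly the "image noise that isn't in the source" mentioned above.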
You yourself confirm that copper is a thing of the past. It's a pitiful 40 Gbit/s,
First, it's 80 Gbps. The 40 Gbps is when you're using it in full-duplex mode. I already pointed this out.
Second, that's at 30 m. For a display cable, you should be able to get even higher bitrates through it at the kinds of lengths they typically use.
when you need 400 Gbit/s here and now,
No, you don't. Your dream use case is only 143 Gbps.
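The bandwidth math is easy to check yourself. A back-of-envelope sketch; the specific figures (8K at 120 Hz, 10 bits per channel, ~20% blanking allowance) are my assumptions for illustration, since the thread doesn't spell out the exact timing:

```python
# Back-of-envelope display bandwidth: pixels/sec * bits/pixel * overhead.
# Assumed use case: 8K (7680x4320) at 120 Hz, 10 bits per color channel,
# with ~20% added for blanking intervals -- illustrative numbers only.

width, height = 7680, 4320
refresh_hz = 120
bits_per_pixel = 3 * 10            # RGB, 10 bits per channel
blanking_overhead = 1.2            # rough reduced-blanking allowance

raw_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
total_gbps = raw_gbps * blanking_overhead
print(f"{raw_gbps:.1f} Gbps raw, {total_gbps:.1f} Gbps with blanking")
# -> 119.4 Gbps raw, 143.3 Gbps with blanking
```

Which is how you land in the ~143 Gbps ballpark: far above 80 Gbps, but nowhere near 400 Gbps or 1 Tbps.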
or better yet 1 Tbit/s for everything at once, as a single interface with the periphery. Copper can't provide such speeds for economic reasons.
You should look at what 800 Gbps datacenter networking gear costs, some time. That's not economical, either.
Optics can easily. Why they're dragging their feet with it - I don't understand.
This is exactly the problem. You don't know anything about how fiber optics is implemented. To you, it's just an abstraction that you look at no differently than any other technology. Try doing a deep dive and learning what's involved in hitting such high data rates, and then you might understand why it's so expensive.
Again, no: with a NAS built on zRAID6 (with data integrity control, which HDD/SSD firmware does not have and never really has had) and an array of PCIe SSDs (whose prices are gradually falling, which makes them increasingly economical for household use). And six or more modern HDDs can deliver well over 10 Gbit/s.
By definition, the speed at which a NAS needs to access its media is limited by the network it's on. If the front end of your NAS is 10 Gigabits, then your data path to the storage only needs to be that fast. However, the entire premise of this point is ridiculous, because the thing about a NAS is that you can just put the entire thing wherever you want. There's no reason to disaggregate the storage and the frontend.
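On the "data integrity control" point: what a ZFS-style array adds over plain disk firmware is an end-to-end checksum stored separately from each data block and verified on every read, with a fall-back to a redundant copy when verification fails. A minimal sketch of that idea only; this is not ZFS's actual on-disk format:

```python
# End-to-end block checksumming, ZFS-style in spirit only: keep a hash
# for each block separate from the data, verify it on read, and fall
# back to a redundant copy if the primary fails verification.
import hashlib

class ChecksummedStore:
    def __init__(self):
        self.blocks = {}       # block_id -> [primary copy, mirror copy]
        self.checksums = {}    # block_id -> sha256 digest

    def write(self, block_id: int, data: bytes) -> None:
        self.blocks[block_id] = [data, data]
        self.checksums[block_id] = hashlib.sha256(data).digest()

    def read(self, block_id: int) -> bytes:
        expected = self.checksums[block_id]
        for copy in self.blocks[block_id]:
            if hashlib.sha256(copy).digest() == expected:
                return copy                    # first copy that verifies
        raise IOError(f"block {block_id}: all copies failed checksum")

store = ChecksummedStore()
store.write(7, b"important data")
store.blocks[7][0] = b"bit-rotted data"        # simulate silent corruption
print(store.read(7))                           # -> b'important data' (mirror)
```

A bare drive would have returned the rotted bytes without complaint; the filesystem-level checksum is what catches it.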
Again, you forget: this is a single channel from the system unit to the periphery. Perhaps more than one channel...
You're trying to argue that every consumer device needs to have some ridiculous link, just so you can use it to support these 0.01% niche use cases that fall squarely in the realm of what enterprise hardware already does? Just use the enterprise solution then. There's no reason to make everyone else pay extra for an overdesigned solution far beyond anything they actually need.
I could logically develop the theory of intentional restrictions on individuals, but those thoughts are too voluminous and more empirical.
If you don't have evidence, then I'm not interested in hearing it.
The US, with the connivance of antitrust authorities, has created a number of monsters. These monsters are convenient for the government in geopolitical interests. Strong competition eliminates these effective levers, as well as the possibility of secret influence on oligopolies/monopolies. But this is a completely different topic. Large and complex.
To say the government wants big corporations is utter nonsense. About the only part I'd agree with is that anti-trust authority has been underused, although they successfully used it to prevent the Nvidia/ARM acquisition, as well as a few other large corporate mergers that were attempted recently. Not only that, but there was recently a ruling that hit Google pretty hard.
No, it's not a mass market of hundreds of millions or billions of units. Size matters in terms of cost and target audience.
Well, I think the onus is on you to show how and why it would get cheaper, when scaling from millions to billions.
I am sure that this will happen soon enough,
If you'd asked me 20 years ago when I thought 10GBase-T would become a mass market commodity, I sure wouldn't have guessed as long as 20 years. And we're still a pretty long way off from every high-end motherboard even including it.
It's just that someone, as Jobs did with smartphones, has to go first, distilling all the existing technologies into a single new trend.
Steve Jobs/Apple did not invent the smartphone. He just took stuff everyone was already doing and made it simpler, better, and marketed the heck out of it.
Oh, and by the way: not cheaper!
There are very few brave people among MBAs and top management; they are increasingly concerned with immediate profits rather than with technology and progress. Even to the detriment of progress...
There are plenty of good engineers who know how to pair the right solution with the problem. Good engineers know how to balance tradeoffs and start with the problem and work backwards, rather than starting with a solution and trying to justify it.
The real issue is that you have nothing but extreme, niche use cases. By definition, you're never going to find mass market solutions which cater to those.