Hacker successfully tests Toslink at unprecedented distances of up to 143 kilometers — separate test shows transmission speeds of about 1.47 Mb/s

No, they use optical because they have buildings large enough and data rates high enough to justify it. Perhaps also concerns like wanting electrical isolation?
Yes, I have to disagree. Major companies, or smaller companies with a larger-scale project, CAN AND DO negotiate discounts. Not to the level of buying from the manufacturer, but the parts are cheaper than retail for consumers like you or me.
We're not talking about a local PC shop, though. The prices I cited were the best online prices I could find, and that's from a store that probably most small & medium businesses also use.
My point is and was scale. The vendors you found would be less expensive than a special order, and buying direct from the manufacturer is even less expensive per unit.
Right, so they get better pricing on everything, whether it's DDR5 DIMMs or copper ethernet cables or optical transceiver modules. Probably similar price breaks, across the board.
Agreed. You are agreeing with the message you quoted, thanks.
Even in 10k quantities, you can usually only save a few %. As you say, the way to get bigger savings than that is to negotiate directly with the manufacturer.
Depends on the size of the profit margin the vendor wants to take. There is a floor price: cost of parts + cost of staff + cost of premises + tax, plus a percentage for profit. They can't sell at a loss. As stated, a Google-level company will go to the manufacturer and cut out the middleman.
Depends on which datacenter cards you're talking about. The low-end ones (e.g. L4, L40) use the exact same GPUs as consumer graphics cards, but have de-rated specs for 24/7 operation, more RAM, and a passive heatsink. They cost only about 5x as much as the consumer graphics cards (list price) that use the same silicon.

The training/HPC GPUs use the 100-series dies, which are completely different from consumer GPUs. They're the ones that cost $30k and up. They differ in lots of ways, not least of which are NVLink connectivity and the use of HBM-class memory. On die, they have different compute resources as well. The dies are also larger, but not nearly enough to justify the price difference.


Only for Nvidia and only when their products are in high demand.
 
Actually, considering how good UHD blu-rays look, I don't really get why you're so concerned about DSC. Such a modest compression ratio shouldn't be noticeable at such a resolution.
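To put rough numbers on "modest" (a back-of-the-envelope sketch in Python; the bitrates and the DSC ratio are typical figures, not exact values for any particular title or link configuration):

```python
# Rough, illustrative comparison of compression ratios (assumed typical figures).
# UHD Blu-ray: 3840x2160 @ 60 fps, 10-bit 4:2:0 source, HEVC peak around 128 Mb/s.
uncompressed_bps = 3840 * 2160 * 60 * 15      # 10-bit 4:2:0 is ~15 bits/pixel
bluray_peak_bps = 128e6
print(f"UHD Blu-ray (HEVC): roughly {uncompressed_bps / bluray_peak_bps:.0f}:1, and it's lossy")

# VESA DSC on a display link typically runs around 3:1 (e.g. 30 bpp -> ~10 bpp).
print("DSC: roughly 3:1, designed to be visually lossless")
```

So the disc you praise is compressed something like 20x harder than what DSC does on the cable.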
It's strange that you don't understand this. When you work on a screen, if it's not a video signal, the picture should be 100% lossless, with no loss anywhere; that's why DSC is such a deception, like DLSS and like processor caches, used when there is not enough performance or bandwidth. I don't want to see an approximation of something. I want to see 100% picture integrity, exactly as it was in the source. So forget about DSC when it comes to work. For entertainment, I don't mind; accuracy isn't important there, although no one would refuse accuracy there either.

In the past, I would've agreed. But Cat 8 doing 40 Gbps (full-duplex) at 30m is pretty compelling evidence to the contrary.
You yourself confirm that copper is a thing of the past. It's a pitiful 40 Gbit/s, when you need 400 Gbit/s here and now, or better yet 1 Tbit/s for everything at once, as a single interface with the periphery. Copper can't provide such speeds for economic reasons. Optics easily can. Why they're dragging their feet on it, I don't understand.

You don't need 1 Tb/s. PCIe 4.0 x16 is only 256 Gbps.
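That number falls directly out of the per-lane rate; a quick sketch (usable throughput is slightly lower once 128b/130b encoding overhead is counted):

```python
# PCIe 4.0: 16 GT/s per lane, 128b/130b encoding, x16 link.
lanes, gt_per_s = 16, 16
raw_gbps = lanes * gt_per_s                      # 256 Gb/s on the wire
usable_gbps = raw_gbps * 128 / 130               # ~252 Gb/s after encoding overhead
print(f"raw: {raw_gbps} Gb/s, usable: ~{usable_gbps:.0f} Gb/s (~{usable_gbps / 8:.1f} GB/s)")
```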
Again, read what is written above and follow the context of the narrative.

For storage, 10 Gbps is more than enough.
Again, no: with a NAS built on zRAID6 (with data integrity control, which HDD/SSD firmware does not have and never really has had) and an array of PCIe SSDs (whose prices are gradually falling, which increases their economic sense for home use), 10 Gbps is not enough. And modern HDDs, with six or more drives, can deliver much more than 10 Gbit/s.
Again, you forget: this is a single channel from the system unit to the peripherals. Perhaps more than one channel...

If you're making accusations of collusion on pricing, you should provide some evidence.
I could logically develop a theory of intentional restrictions on individuals, but those thoughts are too voluminous and rather empirical.

If mass production were a magic bullet, then GPUs should cost far less.
The devil is in the details, as always. In this case, Nvidia has become a de facto monopolist in the market and can dictate prices. And after the mining boom, they have already artificially raised prices several times. They are simply taking advantage of the moment. Prices will really fall only once healthy market competition is restored, with at least 6-10 major players; if not, prices will be dictated to you from above. And you will not escape, because entering this niche is very expensive.

The US, with the connivance of antitrust authorities, has created a number of monsters. These monsters are convenient for the government in geopolitical interests. Strong competition eliminates these effective levers, as well as the possibility of secret influence on oligopolies/monopolies. But this is a completely different topic. Large and complex.
There's plenty of volume, already.
No, it's not a mass market of hundreds of millions and billions of copies. Size matters in terms of cost and target audience.

Someday, we might indeed transition over to optical cables for these sorts of applications, but it looks to me like it's at least a ways off, yet.
I am sure that this will happen soon enough, because copper at such speeds and required distances loses all economic sense even at the everyday level, with the tasks already set in everyday life at the moment. It is like comparing a railway and an airplane. The airplane won. It is just that someone, as Jobs did with smartphones, has to start first, distilling all the technologies into a single new trend. It is a pity that he is no longer with us; I am sure he would have seen the new emerging trend. There are very few brave people among MBAs and top management; they are increasingly concerned with immediate profits, not with technology and progress. Even to the detriment of progress...
 
It's strange that you don't understand this. When you work on a screen, if it's not a video signal, the picture should be 100% lossless, with no loss anywhere; that's why DSC is such a deception, like DLSS and like processor caches, used when there is not enough performance or bandwidth. I don't want to see an approximation of something. I want to see 100% picture integrity, exactly as it was in the source. So forget about DSC when it comes to work.
I suppose someone looking at radiology images or doing professional graphics work would like some assurance they're never going to see compression artifacts introduced by their display pipeline. That said, I think even the best LCD panels can only do about 10-bit color natively. The way most do 10-bit, and probably how all of them do 12-bit, is via FRC. So that's a little bit of a cheat and introduces some image noise that isn't in the source. And don't even get me started on array backlighting.
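As a toy sketch of how FRC works (the function name and 4-frame cycle here are made up purely for illustration; real panels combine spatial and temporal dither patterns):

```python
# Toy illustration of FRC (frame rate control): an 8-bit panel approximates a
# 10-bit level by alternating between the two nearest 8-bit values across
# successive frames, so the temporal average lands near the target.
def frc_frames(level_10bit, n_frames=4):
    target = level_10bit / 4              # 10-bit value expressed on the 8-bit scale
    low = int(target)
    n_high = round((target - low) * n_frames)
    return [low + 1] * n_high + [low] * (n_frames - n_high)

frames = frc_frames(513)                  # 10-bit level 513 -> 128.25 on the 8-bit scale
print(frames, sum(frames) / len(frames))  # [129, 128, 128, 128] 128.25
```

The average matches the 10-bit source, but each individual frame is off by up to a full 8-bit step, which is the "noise" in question.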

You yourself confirm that copper is a thing of the past. It's a pitiful 40 Gbit/s,
First, it's 80 Gbps. The 40 Gbps figure is per direction, when you're using it in full-duplex mode. I already pointed this out.

Secondly, that's at 30m. For a display cable, you should be able to get even higher bitrates through it, at the kinds of lengths they typically use.

when you need 400 Gbit/s here and now,
No, you don't. Your dream use case is only 143 Gbps.
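(A sketch of where a figure like that comes from; I'm assuming it refers to the raw pixel data for a mode along the lines of 8K at 120 Hz with 12-bit RGB, ignoring blanking and protocol overhead.)

```python
# Raw (uncompressed) pixel bandwidth for a display mode, ignoring blanking intervals.
def raw_video_gbps(width, height, refresh_hz, bits_per_channel, channels=3):
    return width * height * refresh_hz * bits_per_channel * channels / 1e9

# Assumed example: 8K (7680x4320) at 120 Hz, 12-bit RGB.
print(f"~{raw_video_gbps(7680, 4320, 120, 12):.0f} Gb/s")   # ~143 Gb/s
```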

or better yet 1 Tbit/s for everything at once, as a single interface with the periphery. Copper can't provide such speeds for economic reasons.
You should look at what 800 Gbps datacenter networking gear costs, some time. That's not economical, either.

Optics easily can. Why they're dragging their feet on it, I don't understand.
This is exactly the problem. You don't know anything about how fiber optics is implemented. To you, it's just an abstraction that you look at no differently than any other technology. Try doing a deep dive and learning what's involved in hitting such high data rates, and then you might understand why it's so expensive.

Again, no: with a NAS built on zRAID6 (with data integrity control, which HDD/SSD firmware does not have and never really has had) and an array of PCIe SSDs (whose prices are gradually falling, which increases their economic sense for home use), 10 Gbps is not enough. And modern HDDs, with six or more drives, can deliver much more than 10 Gbit/s.
By definition, the speed at which a NAS needs to access its media is limited by the network it's on. If the front end of your NAS is 10 Gigabits, then your data path to the storage only needs to be that fast. However, the entire premise of this point is ridiculous, because the thing about a NAS is that you can just put the entire thing wherever you want. There's no reason to disaggregate the storage and the frontend.
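To put rough numbers on both halves of this (a sketch with an assumed per-drive throughput; real drives and RAID overhead vary):

```python
# Assumed figure: a modern 7200 rpm HDD does roughly 260 MB/s sequential.
drives, hdd_mb_s = 6, 260
array_gbit = drives * hdd_mb_s * 8 / 1000        # aggregate sequential, ~12.5 Gb/s
ten_gbe_gbit = 10                                # the NAS front-end link
print(f"6-HDD array: ~{array_gbit:.1f} Gb/s sequential vs. front end: {ten_gbe_gbit} Gb/s")
```

Yes, six drives streaming sequentially can modestly outrun a 10 GbE link, which is exactly why the network front end, not the internal data path, is what sets the ceiling.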

Again, you forget: this is a single channel from the system unit to the peripherals. Perhaps more than one channel...
You're trying to argue that every consumer device needs to have some ridiculous link, just so you can use it to support these 0.01% niche use cases that fall squarely in the realm of what enterprise hardware already does? Just use the enterprise solution then. There's no reason to make everyone else pay extra for an overdesigned solution far beyond anything they actually need.

I could logically develop a theory of intentional restrictions on individuals, but those thoughts are too voluminous and rather empirical.
If you don't have evidence, then I'm not interested in hearing it.

The US, with the connivance of antitrust authorities, has created a number of monsters. These monsters are convenient for the government in geopolitical interests. Strong competition eliminates these effective levers, as well as the possibility of secret influence on oligopolies/monopolies. But this is a completely different topic. Large and complex.
To say the government wants big corporations is utter nonsense. About the only part I'd agree with is that antitrust authority has been underused, although they successfully used it to prevent the Nvidia/ARM acquisition, as well as a few other large corporate mergers attempted recently. Not only that, but there was also a recent ruling that hit Google pretty hard.

No, it's not a mass market of hundreds of millions and billions of copies. Size matters in terms of cost and target audience.
Well, I think the onus is on you to show how and why it would get cheaper, when scaling from millions to billions.

I am sure that this will happen soon enough,
If you'd asked me 20 years ago when I thought 10GBase-T would become a mass-market commodity, I sure wouldn't have guessed as long as 20 years. And we're still a pretty long way off from even having every high-end motherboard include it.

It is just that someone, as Jobs did with smartphones, has to start first, distilling all the technologies into a single new trend.
Steve Jobs/Apple did not invent the smartphone. He just took stuff everyone was already doing and made it simpler, better, and marketed the heck out of it.

Oh, and by the way, not cheaper!

There are very few brave people among MBAs and top management; they are increasingly concerned with immediate profits, not with technology and progress. Even to the detriment of progress...
There are plenty of good engineers who know how to pair the right solution with the problem. Good engineers know how to balance tradeoffs and start with the problem and work backwards, rather than starting with a solution and trying to justify it.

The real issue is that you have nothing but extreme, niche use cases. By definition, you're never going to find mass market solutions which cater to those.
 