News: Hacker successfully tests Toslink at unprecedented distances of up to 143 kilometers — separate test shows transmission speeds of about 1.47 Mb/s

Toslink is one of those technologies from the 80's that's still good at what it does. The main drawback is that they haven't extended it to support the latest audio formats, instead preferring to keep them tied to HDMI and its content protection mechanisms. However, you can use regular Toslink for 24-bit, 96 kHz PCM stereo, which is enough for any hi-fi applications.

BTW, 24-bit 96 kHz stereo is 4.6 Mbps. Toslink definitely supports that, even if the signal he used for testing was just 16-bit 44.1 kHz stereo.
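For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope sketch (payload only - S/PDIF framing and biphase-mark coding add overhead on top of these figures):

```python
# Rough PCM payload bitrate: bits per sample x sample rate x channels.
def pcm_bitrate_mbps(bit_depth, sample_rate_hz, channels=2):
    return bit_depth * sample_rate_hz * channels / 1e6

print(pcm_bitrate_mbps(16, 44_100))  # CD-style test signal: ~1.41 Mbps
print(pcm_bitrate_mbps(24, 96_000))  # 24-bit/96 kHz stereo: ~4.61 Mbps
```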
 
Toslink is one of those technologies from the 80's that's still good at what it does. The main drawback is that they haven't extended it to support the latest audio formats, instead preferring to keep them tied to HDMI and its content protection mechanisms. However, you can use regular Toslink for 24-bit, 96 kHz PCM stereo, which is enough for any hi-fi applications.

BTW, 24-bit 96 kHz stereo is 4.6 Mbps. Toslink definitely supports that, even if the signal he used for testing was just 16-bit 44.1 kHz stereo.
I was using Toslink with Dolby Digital or DTS in 5.1 at 24-bit/192 kHz (with Dolby Digital Live; the next version after what S/PDIF (Toslink) carried, Dolby Digital Plus, accepts 16 channels and 6144 kb/s). They removed that support in the move to HDMI (which forces a phantom screen onto an audio-only amplifier), with monthly subscriptions to get full functionality, and it can have issues.
With Toslink you just connect it and it works, but nowadays it's limited to PCM.
It's false to think that Toslink was abandoned because of its limited specs; it was abandoned by Dolby and DTS because they joined HDMI and forced the revocation of all licences on S/PDIF.
Most Toslink users were using it for surround sound, not for stereo.
 
Most Toslink users were using it for surround sound, not for stereo.
Yeah, it seems mostly a home theater technology, based on when it was introduced and how most people used it. Audiophiles also had the option of fancy S/T fiberoptic, which is only better in that it's less limited in length than Toslink, and you can buy commodity networking cables for a lot cheaper than the audio-marketed ones.

I've always used Toslink in my computer audio setup. It provides complete electrical isolation, interfaces with my outboard DAC, and I have a nice crossbar switch for rerouting different sources and destinations. It was sad to see Toslink ports disappearing from the rear connectors of PC motherboards, but you can buy USB -> Toslink converters quite cheaply and the one I have works perfectly (bit-perfect), even with a Raspberry Pi.
 
We need TOSLINK 2.0.

Toshiba made these transceivers more than 20 years ago.

https://www.global.toshiba/ww/news/corporate/2003/02/pr2001.html

It's capable of 150 Mbps over TOSLINK optical cables.

Time to bring it to the larger audio world and out of car-audio-land exclusivity.

I'm sure audiophiles would have a field day with 150 Mbps to play with.

For Reference:
Base TOSlink 1.0 = 384 kbps
ARC = 1 Mbps
eARC = 38 Mbps
TOSlink 2.0 = 150 Mbps using these old Totu133 transceivers.
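Taking those figures at face value, a rough way to compare them is how many channels of 24-bit/192 kHz PCM each link could carry (illustrative only - real framing adds overhead):

```python
# Channels of 24-bit/192 kHz PCM per link, ignoring framing overhead.
CHANNEL_BPS = 24 * 192_000  # one 24-bit/192 kHz PCM channel = 4.608 Mbps

links_bps = {
    "TOSLINK 1.0 (figure quoted above)": 384_000,
    "ARC": 1_000_000,
    "eARC": 38_000_000,
    "TOSLINK 2.0 (150 Mbps parts)": 150_000_000,
}

for name, bps in links_bps.items():
    print(f"{name}: ~{bps / CHANNEL_BPS:.1f} channels")
```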
 
The key difference between optical Toslink and the copper (electrical) versions of S/PDIF and HDMI, as well as other options such as USB transmission, is galvanic isolation between the power-supply circuits of the audio source and the DAC + amplifier. That immediately improves the purity of the sound at the output, especially for audio circuits that are poorly protected against interference and ground loops: motherboard circuits in PCs, laptops and other audio-output devices are full of high-frequency noise (including low-frequency harmonics), run from switch-mode power supplies, and often have no real earth ground (which is absent in many parts of the world).

High-quality audio circuits are a whole science and technology of their own. The only way to avoid such interference entirely is galvanic isolation, i.e. transmitting the audio data over optics, which also has safety benefits when connecting such cables to ungrounded devices.

Plus, optics, as shown above, easily transmits a signal over tens of meters where copper cable becomes monstrously expensive or heavy. An example is the whole five years it took to bring the DisplayPort 2.0 (UHBR20) version into service, from 2019 to 2024, which became a disgrace to the entire IT industry. For 80 Gbps transmission, cheap copper cables have ALREADY become a problem. And in order to get 8K@120Hz (simple smooth scrolling of 2D text at high ppi (220-230+) for really high-quality text even on 27-32" office monitors), you need a bandwidth of about 160 Gbps, which is an impossible task for cheap mass-market copper cables. That is proven by the shameful "newest" HDMI 2.2 (which will only appear in hardware in 2025): its bandwidth is practically no different from the DisplayPort 2.0 of 2019 (96 Gbps vs 80 Gbps raw, and once service traffic is taken into account both options deliver nothing beyond 60-75 Hz at 8K in lossless mode, without lossy DSC compression), so it is already obsolete before it has even been implemented in hardware. Few people today need 8K monitors, or TVs, with a frame rate of only 60 Hz.
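For what it's worth, the raw-pixel arithmetic behind that 160 Gbps figure looks roughly like this (blanking and link overhead push the required rate above the bare payload):

```python
# Uncompressed video payload for 8K @ 120 Hz at 36 bits per pixel (12-bit RGB).
width, height, fps, bpp = 7680, 4320, 120, 36
payload_gbps = width * height * fps * bpp / 1e9
print(f"~{payload_gbps:.0f} Gbps payload")  # ~143 Gbps before blanking/encoding overhead
```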

I personally have been using Toslink for many years - it is extremely convenient and unpretentious in use.
 
I use Toslink for the connections to a now positively ancient Yamaha DSP A-5. It connects a Sky box and a Samsung S95B digital out; Blu-ray is connected by an RCA-style connector cable.

Electrically I prefer Toslink because it gives electrical isolation between components - less electrical noise. (Yes, I know it's digital and the processing is essentially identical, but I mean noise transfer into the system that can affect power stability, causing hum in the amp stages. I had a cheap CD player that was horrible for this.) Got to remember that 0V is a relative value within a circuit: a sensor suite installed across a field, with both ends grounded, had a 17V offset in the measured 0V level.

Toslink does still work with Dolby Digital and DTS, just not the later iterations. The content providers, seeing HDMI, have tied their stuff to the harder-to-copy/break-out standard. DD and DTS are transferred through the RCA-style digital out from my Sony Blu-ray.

Toslink was designed for a purpose, and it fulfils that purpose (lots of punny adjectives come to mind - I'll use) transparently. Clean cabling with a simple connector that has a positive latch - it is positively idiot-proof.

For its designed purpose I've found no drawbacks over 30 years (the old kit was a Technics midi separates system; I'm still using the speakers).
 
The key difference between optical Toslink and the copper (electrical) versions of S/PDIF and HDMI, as well as other options such as USB transmission, is galvanic isolation between the power-supply circuits of the audio source and the DAC + amplifier. That immediately improves the purity of the sound at the output, especially for audio circuits that are poorly protected against interference and ground loops: motherboard circuits in PCs, laptops and other audio-output devices are full of high-frequency noise (including low-frequency harmonics), run from switch-mode power supplies, and often have no real earth ground (which is absent in many parts of the world).
I don't know about HDMI, but hi fi equipment typically uses isolation transformers on S/P-DIF inputs to avoid ground loops. These cables tend to be shielded, with the shield connected only at one end, to avoid creating a path between both chassis. Hi fi equipment also tends to segregate the digital vs. analog sections via different boards or at least sectioning them off into different areas with their own power domains.

High-quality audio circuits are a whole science and technology of their own. The only way to avoid such interference entirely is galvanic isolation, i.e. transmitting the audio data over optics, which also has safety benefits when connecting such cables to ungrounded devices.
I'm not going to argue against fiber optics, since I use a lot of toslink in my computer audio setup (largely with an eye towards isolating power surges between my audio rack and computers), but it does seem to me that between R/C circuits and isolation transformers, you can do a lot to block both high frequency and low frequency interference from the signal sources.

Plus, optics, as shown above, easily transmits a signal over tens of meters where copper cable becomes monstrously expensive or heavy.
Again, the range advantages of optical are obvious, but not at the kinds of data rates audio uses - copper is completely fine, there. 5GBase-T supports simple Cat 5e cabling at up to 100m.

An example is the whole five years it took to bring the DisplayPort 2.0 (UHBR20) version into service, from 2019 to 2024, which became a disgrace to the entire IT industry. For 80 Gbps transmission, cheap copper cables have ALREADY become a problem.
How do those cables compare with Cat 8? 40GBase-T supports Cat 8 cables at up to 30m. If you just use all pairs in the cable for transmit, it could hit 80 Gb/s.

My guess is that VESA was probably trying to cheap out on the transceivers, which is why they needed such a big cable with severe length limits. However, I don't know if you've ever looked at pricing on SFP+ optical transceiver modules for 10 gigabits or above, but those things aren't exactly cheap, either.

And in order to get 8K@120Hz (simple smooth scrolling of 2D text at high ppi (220-230+) for really high-quality text even on 27-32" office monitors), you need a bandwidth of about 160 Gbps, ...

I personally have been using Toslink for many years - it is extremely convenient and unpretentious in use.
I'm pretty sure Toslink isn't going to get you 160 Gbps, or anywhere remotely close. So, this whole display connectivity aspect is pretty irrelevant to any discussion of Toslink or audio connectivity. Also, glass fiber optics aren't terribly consumer-friendly, which is probably why the industry has balked at their adoption.
 
How do those cables compare with Cat 8? 40GBase-T supports Cat 8 cables at up to 30m. If you just use all pairs in the cable for transmit, it could hit 80 Gb/s.

My guess is that VESA was probably trying to cheap out on the transceivers, which is why they needed such a big cable with severe length limits. However, I don't know if you've ever looked at pricing on SFP+ optical transceiver modules for 10 gigabits or above, but those things aren't exactly cheap, either.
You answered the question yourself - a copper solution for such speeds becomes economically impractical for the mass market. A single-core optical cable, even at 30-50 m, costs pennies and can easily carry 200 Gbit/s if desired, and transmitter capabilities are only growing.

What can I say - even drones in wars are already flown with control over the thinnest optical fiber (like fishing line), on a reel smaller than a smartphone holding up to 15 km of cable, while transmitting a video signal.

So it's time for us to completely abandon copper in everyday life, except for power cables. And careful handling of household optical ports and cables is not a real problem - anyone who treats them like an outright slob will quickly be disciplined by the replacement cost. All that is needed is the will of the corporations. But it is still not there, although it is high time...
 
You answered the question yourself - a copper solution for such speeds becomes economically impractical for the mass market. A single-core optical cable, even at 30-50 m, costs pennies and can easily carry 200 Gbit/s if desired, and transmitter capabilities are only growing.
But the optical transceiver cost is not trivial. I also wonder at what point fiber optics becomes an eye hazard.

What can I say - even drones in wars are already flown with control over the thinnest optical fiber (like fishing line), on a reel smaller than a smartphone holding up to 15 km of cable, while transmitting a video signal.
Seems like it would have substantial caveats. Do you have any sources on that?

So it's time for us to completely abandon copper in everyday life, except for power cables.
There are reasons why it hasn't happened. If optical were truly a better all-around solution, we'd already be using it.
 
How do those cables compare with Cat 8? 40GBase-T supports Cat 8 cables at up to 30m. If you just use all pairs in the cable for transmit, it could hit 80 Gb/s.
https://www.qsfptek.com/qt-news/eve...k;jsessionid=5FF5A07AAEC791E9A9CD9C24663731E6

You answered your own questions.

CAT8 is recommended in the linked page for runs less than 30 m. That recommendation is justified by the increased costs associated with transmitting and receiving the fibre signals.

CAT 8 is limited to 30 m, so fibre takes over at that point. The engineer in me would use 20 m CAT 8 cables - got to have a tolerance derating in there somewhere - so equipment would need to be sited appropriately.
 
https://www.qsfptek.com/qt-news/eve...k;jsessionid=5FF5A07AAEC791E9A9CD9C24663731E6

You answered your own questions.

CAT8 is recommended in the linked page for runs less than 30 m. That recommendation is justified by the increased costs associated with transmitting and receiving the fibre signals.
When I asked how it compares, that wasn't what I meant. I had in mind how the cables compare on the parameters which make UHBR20 undesirable, such as cost and usability factors (e.g. bending radius). I don't include length, because I already pointed out that Cat 8 has the advantage there.

CAT 8 is limited to 30 m, so fibre takes over at that point. The engineer in me would use 20 m CAT 8 cables - got to have a tolerance derating in there somewhere - so equipment would need to be sited appropriately.
If we're talking about display cables, then we only need a passive solution capable of like 3m for probably 97% of the market. That's good enough, as the rest (mostly professional installations) can probably afford the trouble and expense of a fiber-based solution, like many of them probably do today.

BTW, the best price I can find on a 100 Gbps SFP+ fiber optic transceiver module is about $60. So, that's a pretty big argument why switching to optical isn't an easy solution for display cabling.

FWIW, QSFP28 (OM4) seems limited to 100m.
 
You pick the best solution for your problem.
The factors to investigate:

Cost
Installation limitations
Physics - LC interactions vs frequency for copper, reflections and coherency in fibre.

At 2m CAT8 is a no brainer. Over 30m but less than 100m the cheaper multimode fibre is suitable and over 100m you are looking at single mode fibre.
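A toy sketch of that rule of thumb (the thresholds are just the ones quoted in this thread, not a standards citation):

```python
# Illustrative media choice by run length, per the rule of thumb above.
def pick_medium(run_length_m: float) -> str:
    if run_length_m <= 30:
        return "copper (Cat 8)"
    if run_length_m <= 100:
        return "multimode fibre"
    return "single-mode fibre"

for d in (2, 25, 80, 300):
    print(f"{d} m -> {pick_medium(d)}")
```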

For a new install you design in the bend radius requirements and cost appropriately (both space and financial).

If a job can be done within what the customer wants to pay then it gets done. It just comes down to how.
As data rates increase, copper becomes less suitable over distance (plain physics) relative to fibre. Attenuation due to the cable capacitance (primarily) is the problem. No doubt some genius way above my pay grade will eke out the last few MHz, but fibre doesn't exhibit those losses.

Multimode fibre has problems with bends and coherency with multiplexed signals. Different frequencies react differently within the glass tunnel. The differences in reflection and refraction cause the demultiplexed signals to arrive at the receiver at slightly different times. Once that "limit" is reached, the data is corrupted.
Single-mode fibre, as I remember from a long time ago, used TDM to send data quickly in a single stream, so coherency is no longer a factor.

There is no best solution, there is what works best for your budget (and within the laws of physics) right now.

It would be wasteful to use fibre over 2m, CAT8 is good enough.
With current techniques it would be wrong, wrt reliability, to use CAT8 at distances close to its 30m limit.

As data rates increase fibre will be the dominant connection method, the extra costs of terminations and transceivers will be a necessary price to pay if you want the speed. They already are if you want to go over 30m.

An aside, one application I look after uses HDMI 1.4 over 10m, the cables are active in that they have repeaters within the cable assemblies (and are unidirectional because of this). Without the repeaters 10m is outside the limit of the cables. The solution was chosen based on cost. There may be better available but I went with the active cables as they work reliably.
 
But the optical transceiver cost is not trivial. I also wonder at what point fiber optics becomes an eye hazard.
The price will also fall quickly as these solutions go mass-market, as with everything else. For eye safety, you can use "foolproof" protection as in the case of USB-C - do not turn on full power until there is confirmation from the other side, and monitor it constantly: the slightest loss of the control signal and the powerful laser shuts off instantly. Besides, I do not think that with modern technology powerful lasers are needed for fiber runs of 50-100 m.

Seems like it would have substantial caveats. Do you have any sources on that?
Forbes and RadioFreeEurope


Anyone who has a projector at home with a lot of equipment connected to it, including computers in other rooms, already knows how difficult it is to get high-quality HDMI cables, especially for 4K@60fps in 30-bit color over the 5-15 m runs between switches in the wiring. With optics, everything is solved easily and simply, except at points with very tight bends, where copper is much more convenient.

In general, it is obvious that the age of copper as a signal cable is coming to an end; for several years now the industry has needed to start accustoming consumers to the idea of switching to optics, and to develop cheap mass-market solutions for 300-500 Gbit/s for starters.

Another important thing - if you have a 1 Tb/s channel, you can take the GPU out of the computer case (hello to everyone with a 5090 heater at 575 W) into a separate room (for example, a utility room with ventilation), ensuring complete silence (processors can already be cooled to a completely silent level even when overclocked). Or even move the entire system unit out. And also connect disk arrays in NAS via this channel at breakneck speed for backup and everything else, moving all of that out to the utility room too (HDDs have become so delicate that you shouldn't even walk near them, which is especially risky if there are large animals or children running around in the house). An SSD NAS would only benefit from this.

Only the necessary peripherals with a monitor remain on the table. Complete silence in operation, even for a powerful gaming or computing system...

Now eGPUs, for example, suffer greatly from an extremely narrow TB4 channel (PCIe 3.0 x4) and only 80 cm of cable (at most 1 m on high-quality ones), while a connection via optics would be easy and problem-free, with minimal risks.

Wherever you look, there are only advantages. And prices - they are deliberately kept high for individuals, especially since this is now far from a mass case. In mass production they will fall 10 times at once and 2 transmitters for $20 in hardware for 500Gbps-1Tbps will clearly not be a financial problem, especially against the backdrop of increasingly insane prices for GPUs and everything else...

I would now like to see the $2000 RTX 5090 come with a 1 Tbps optical port, capable of receiving and transmitting data over at least 50 m. Then the "leather jacket" would be able to justify such a price for his new "room heater" in my eyes... Someone has to be the pioneer and have the courage to start...
 
The price will also fall quickly as these solutions go mass-market, as with everything else.
Optical networking transceivers are a mass product, I'm sure with many millions sold, annually!

Thanks for the links. Nowhere does it say what bandwidth they support, but I'd guess it's probably not very high. The video signal they're transmitting over it is definitely compressed. For instance, UHD blu-ray (4k) has a max sustained bitrate of only 144 Mbps (18 MB/s), which is like 0.1% of what you'd need for the 8k 120 Hz @ 36 bpp.
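Rough numbers behind that 0.1% figure:

```python
# UHD Blu-ray max sustained bitrate vs. uncompressed 8K @ 120 Hz @ 36 bpp.
bluray_bps = 144e6
raw_8k120_bps = 7680 * 4320 * 120 * 36
print(f"{bluray_bps / raw_8k120_bps:.1%}")  # ~0.1%
```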

Actually, considering how good UHD blu-rays look, I don't really get why you're so concerned about DSC. Such a modest compression ratio shouldn't be noticeable at such a resolution.

In general, it is obvious that the age of copper as a signal cable is coming to an end and the industry has been needing to massively accustom consumers to the idea of switching to optics for several years now and develop cheap mass solutions for 300-500 Gbit/s for starters.
In the past, I would've agreed. But then Cat 8 doing 40 Gbps (full-duplex) at 30m is pretty compelling evidence to the contrary.

Another important thing - if you have a 1 Tb/s channel, you can take the GPU out of the computer case (hello to everyone with a 5090 heater at 575 W) into a separate room (for example, a utility room with ventilation)
You don't need 1 Tb/s. PCIe 4.0 x16 is only 256 Gbps and already more than fast enough for even the top-end gaming cards. However, the solution you're proposing sounds very expensive and niche. I don't foresee a large enough market to make it happen.

And also connect disk arrays in NAS via this channel
Well, a NAS is networked, so just put the whole thing wherever you want. However, if you really want to disaggregate the physical storage, you can currently use Fiber Channel (which is the high-end, expensive solution) or iSCSI (which can run over commodity Ethernet). For storage, 10 Gbps is more than enough. Cat 6A cable can support such speeds at up to 100m.
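To put "more than enough" in rough numbers (sustained HDD throughput varies by model, so this is only a ballpark):

```python
# 10 GbE link capacity vs. a rough sustained rate for one spinning disk.
link_mbytes_per_s = 10e9 / 8 / 1e6   # ~1250 MB/s of raw link capacity
hdd_mbytes_per_s = 250               # ballpark sequential rate for a modern HDD
print(f"~{link_mbytes_per_s / hdd_mbytes_per_s:.0f} HDDs' worth of sequential throughput")
```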

prices - they are deliberately kept high for individuals,
If you're making accusations of collusion on pricing, you should provide some evidence.

especially since this is now far from a mass case. In mass production they will fall 10 times at once and 2 transmitters for $20 in hardware for 500Gbps-1Tbps will clearly not be a financial problem, especially against the backdrop of increasingly insane prices for GPUs and everything else...
You're contradicting yourself, here. If mass production were a magic bullet, then GPUs should cost far less.

Optical networking already is a commodity, but for commercial, industrial, and professional markets. There's plenty of volume, already. And when big customers like Google buy millions of something, you'd better believe they negotiate the price down to within a hair of what it costs to produce in volume. We may not have the same pricing power, but we benefit from the production efficiencies they drive.

There could even be a contrary phenomenon, where increasing volume could actually increase prices, if these high-bandwidth transceivers turn out to use significant amounts of rare earth metals that are in limited supply. I won't rule out the possibility that we'll see breakthroughs in this area, particularly as servers begin to embrace optical interconnects for connectivity within the chassis, but it's just not obvious to me that driving down prices is simply a matter of more volume.

Someday, we might indeed transition over to optical cables for these sorts of applications, but it looks to me like it's at least a ways off, yet.
 
Optical networking transceivers are a mass product, I'm sure with many millions sold, annually!


Thanks for the links. Nowhere does it say what bandwidth they support, but I'd guess it's probably not very high. The video signal they're transmitting over it is definitely compressed. For instance, UHD blu-ray (4k) has a max sustained bitrate of only 144 Mbps (18 MB/s), which is like 0.1% of what you'd need for the 8k 120 Hz @ 36 bpp.

Actually, considering how good UHD blu-rays look, I don't really get why you're so concerned about DSC. Such a modest compression ratio shouldn't be noticeable at such a resolution.


In the past, I would've agreed. But then Cat 8 doing 40 Gbps (full-duplex) at 30m is pretty compelling evidence to the contrary.


You don't need 1 Tb/s. PCIe 4.0 x16 is only 256 Gbps and already more than fast enough for even the top-end gaming cards. However, the solution you're proposing sounds very expensive and niche. I don't foresee a large enough market to make it happen.


Well, a NAS is networked, so just put the whole thing wherever you want. However, if you really want to disaggregate the physical storage, you can currently use Fiber Channel (which is the high-end, expensive solution) or iSCSI (which can run over commodity Ethernet). For storage, 10 Gbps is more than enough. Cat 6A cable can support such speeds at up to 100m.


If you're making accusations of collusion on pricing, you should provide some evidence.


You're contradicting yourself, here. If mass production were a magic bullet, then GPUs should cost far less.

Optical networking already is a commodity, but for commercial, industrial, and professional markets. There's plenty of volume, already. And when big customers like Google buy millions of something, you'd better believe they negotiate the price down to within a hair of what it costs to produce in volume. We may not have the same pricing power, but we benefit from the production efficiencies they drive.

There could even be a contrary phenomenon, where increasing volume could actually increase prices, if these high-bandwidth transceivers turn out to use significant amounts of rare earth metals that are in limited supply. I won't rule out the possibility that we'll see breakthroughs in this area, particularly as servers begin to embrace optical interconnects for connectivity within the chassis, but it's just not obvious to me that driving down prices is simply a matter of more volume.

Someday, we might indeed transition over to optical cables for these sorts of applications, but it looks to me like it's at least a ways off, yet.
Optical connections haven’t been and still aren't necessary in a domestic environment. Servicing a small market makes the cost of parts from the (albeit niche) retailers somewhat expensive.
Industrial and commercial buyers have the luxury of bulk purchase pricing. They have the ability to squeeze the price which domestic purchasers simply can’t.

Cat 8 at 40Gb over 30m: Ethernet has normally worked over 100m per link, so the reductions are real. As frequencies increase, distances will reduce because... physics. At some point (my cables from router to PC are 3m, so I won't see the limit) Ethernet will become too limited by distance. Fibre to the premises will be necessary; fibre to the cabinet is already somewhat limited (I have a mere 60Mb/s FTTC, while gigabit is already common on FTTP).

PCIe 4 is 256Gb/s, 5 is 512. Assuming the same progression, 1Tb/s is only an iteration away. It won't be niche.
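The doubling per generation is just the per-lane transfer rate doubling, times 16 lanes (raw rate, before encoding overhead):

```python
# Raw x16 link bandwidth per PCIe generation: 16 lanes x per-lane GT/s.
LANES = 16
gt_per_s = {"3.0": 8, "4.0": 16, "5.0": 32, "6.0": 64}
for gen, gt in gt_per_s.items():
    print(f"PCIe {gen} x16: {gt * LANES} Gb/s raw")  # 4.0 -> 256, 5.0 -> 512, 6.0 -> 1024
```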

Prices kept high for individuals: the volume sold to individuals is low. Your friendly PC store isn't going to handle enough of the parts to keep them in stock, so a special order is going to be needed. Depending on the company, you may or may not be permitted to buy from them - been there, and it's a real pain. The Googles, Amazons etc. will buy direct from the manufacturer. Smaller commercial and domestic buyers will buy from increasingly expensive distributors. The more you buy, the less you pay per unit.

GPU and CPU costs are at the level the market will bear. Don't buy, and the price will come down - to a point. They also have a short lifetime over which design costs need to be recovered. The vast profits shown by Nvidia show that mass production and high prices are not mutually exclusive. I'd be interested to see the amount of commonality between RTX and data center cards.

(Opinion) GPU manufacturing costs are low relative to the retail price. The dominant vendor operates like a monopoly, and the competition, while pricing close to competitively, is unable to significantly penetrate the market. The dominant vendor can charge what they want. People will buy their kit.

Domestically there is currently little to no compelling reason to switch to fibre; commercially, applications exist which will seriously benefit from the bandwidth; and at an industrial scale (Google at the extreme) fibre is a necessity.
 
PCIe 4 is 256Gb/s, 5 is 512. Assuming the same progression, 1Tb/s is only an iteration away. It won't be niche.
If PCIe 6.0 hits client devices anywhere near release. The problem is that nothing uses the bandwidth already available. Testing indicates the RTX 4090 loses at worst single digit percentages going from PCIe 4.0 x16 to PCIe 3.0 x16. That's a pretty clear indicator it doesn't need anywhere near the maximum bandwidth 16 lanes of PCIe 4.0 is capable of delivering, but does need a bit more than PCIe 3.0. I would be surprised if the RTX 5090 loses anything going from PCIe 5.0 to 4.0.

So if not video cards then you're looking at storage being the biggest user of bandwidth. However for client applications this is extremely limited by what data is being accessed, the drive types and where the data is going to.

Right now there simply isn't the necessity for client devices having any connections which mandate anything beyond copper.
 
Another important thing - if you have a 1 Tb/s channel, you can take the GPU out of the computer case (hello to everyone with a 5090 heater at 575 W) into a separate room (for example, a utility room with ventilation), ensuring complete silence (processors can already be cooled to a completely silent level even when overclocked). Or even move the entire system unit out. And also connect disk arrays in NAS via this channel at breakneck speed for backup and everything else, moving all of that out to the utility room too (HDDs have become so delicate that you shouldn't even walk near them, which is especially risky if there are large animals or children running around in the house). An SSD NAS would only benefit from this.
Active cabling and using networking already solves this entire thing outside of extreme edge cases. Edge cases do not make mass market happen. Just look at 10Gb networking for home users. By now it really should have basically been standard on everything. The last 2-3 generations of Marvell controllers have been plenty low enough power and Realtek has cheap switch options. However anything in consumer segments that has 10Gb commands a premium price.

I certainly agree that optical is better, especially for video output as resolutions and refresh rates have gone up. The market just doesn't really exist for it though. VESA dropping the ball on UHBR20 cables should tell everyone how little the industry considers practicality.

Ever since the first 4K/120Hz monitors came out, DSC has been used and improved upon. Realistically it isn't going anywhere, because it's cheaper to use that than to switch to optical everything. If it weren't for problematic implementations and compatibility issues (sup nvidia), most people would never even notice DSC. Given that it's the only mandatory part of DP2.1, that should really say everything with regard to the emphasis.
 
If PCIe 6.0 hits client devices anywhere near release. The problem is that nothing uses the bandwidth already available. Testing indicates the RTX 4090 loses at worst single digit percentages going from PCIe 4.0 x16 to PCIe 3.0 x16. That's a pretty clear indicator it doesn't need anywhere near the maximum bandwidth 16 lanes of PCIe 4.0 is capable of delivering, but does need a bit more than PCIe 3.0. I would be surprised if the RTX 5090 loses anything going from PCIe 5.0 to 4.0.

So if not video cards then you're looking at storage being the biggest user of bandwidth. However for client applications this is extremely limited by what data is being accessed, the drive types and where the data is going to.

Right now there simply isn't the necessity for client devices having any connections which mandate anything beyond copper.
I agree, the main use is going to be for infrastructure. Perhaps uncompressed video, where compression would degrade the image beyond the required use (very niche). The time will come when people want the speeds, and then fibre will become a consumer-level commodity.

Wrt PCIe 4, 5 and 6 and GPUs people said the same about PCIe 3, again it’s a matter of time.
 
Testing indicates the RTX 4090 loses at worst single digit percentages going from PCIe 4.0 x16 to PCIe 3.0 x16. That's a pretty clear indicator it doesn't need anywhere near the maximum bandwidth 16 lanes of PCIe 4.0 is capable of delivering, but does need a bit more than PCIe 3.0. I would be surprised if the RTX 5090 loses anything going from PCIe 5.0 to 4.0.
With Nvidia now officially acknowledging that RTX 5090 is only 30% faster than RTX 4090, it's clear that PCIe 5.0 support is only there for AI - not games.

I still have yet to see a solid argument why client PCs need anything above PCIe 4.0. Okay, maybe the fastest PCIe 5.0 NVMe drives shave a few % off the worst game loading times, but that's still a pretty tenuous argument.

I'd be pretty surprised if client PCs embrace PCIe 6.0, in the next few years. The added signal integrity is going to add build cost to these boards, and there were lots of complaints about costs when they went to PCIe 5.0 and DDR5. I trust manufacturers are going to remember that.
 
Wrt PCIe 4, 5 and 6 and GPUs people said the same about PCIe 3, again it’s a matter of time.
PCIe 3.0 came to clients in 2012. It took 7 years for PCIe 4.0 to follow, by which point it was still not necessary either for GPUs of the day or NVMe drives. Seriously, it took until 2020 before we had NVMe drives that could actually use more bandwidth than PCIe 3.0 x4 could provide. Still, if we go by the 7 year incubation period of PCIe 3.0 -> 4.0, then PCIe 5.0 should have arrived in 2026, not 2021.

What's compounding the situation is that the pace of GPU improvements is actually slowing, because the main way GPUs got faster was by riding the cost per transistor curve, in order to deliver more and more "cores" every generation. As that cost curve is flattening, the only way GPUs can keep offering more "cores" is by increasing in price, to the point where top end models are irrelevant to all but a tiny % of gamers.

So, I don't actually see faster GPUs driving the need for PCIe scaling, the way they used to. We don't need PCIe 5.0 today. That won't change much by 2026, and we certainly won't need PCIe 6.0.
 
Industrial and commercial buyers have the luxury of bulk purchase pricing. They have the ability to squeeze the price which domestic purchasers simply can’t.
No, they use optical because they have buildings large enough and data rates high enough to justify it. Perhaps also concerns like wanting electrical isolation?

Prices kept high for individuals: the volume sold to individuals is low. Your friendly PC store isn't going to handle enough of the parts to keep them in stock, so a special order is going to be needed.
We're not talking about a local PC shop, though. The prices I cited were the best online prices I could find, and that's from a store that probably most small & medium businesses also use.

Depending on the company, you may or may not be permitted to buy from them - been there, and it's a real pain. The Googles, Amazons etc. will buy direct from the manufacturer.
Right, so they get better pricing on everything, whether it's DDR5 DIMMs or copper ethernet cables or optical transceiver modules. Probably similar price breaks, across the board.

Smaller commercial and domestic buyers will buy from increasingly expensive distributors. The more you buy, the less you pay per unit.
Even in 10k quantities, you can usually only save a few %. As you say, the way to get bigger savings than that is to negotiate directly with the manufacturer.

I’d be interested to see the amount of commonality between RTX and data center cards.
Depends on which datacenter cards you're talking about. The low-end ones (e.g. L4, L40) use the exact same GPUs as consumer graphics cards, but have de-rated specs for 24/7 operation, more RAM, and a passive heatsink. They cost only about 5x as much as the consumer graphics cards (list price) that use the same silicon.

The training/HPC GPUs use the 100-series dies, which are completely different than consumer GPUs. They're the ones that cost $30k and up. They differ in lots of ways, not least of which is NVLink connectivity and using HBM-class memory. On die, they have different compute resources, as well. The dies are also larger, but not nearly enough to justify the price difference.

(Opinion) GPU manufacturing costs are low relative to the retail price. The dominant vendor operates like a monopoly, and the competition, while pricing close to competitively, is unable to significantly penetrate the market. The dominant vendor can charge what they want. People will buy their kit.
Only for Nvidia and only when their products are in high demand.