Well, some people have reported that there is in fact a difference. The question is only where this difference comes from. Of course you are correct that it does not come from altering the digital data itself. Some ideas:
a.) Inside the 2500 USD switch you could have a bunch of DSP logic to recognize audio streams, unpack the network packets, modify the audio, and repack it. Maybe there is some typical fault in streamed audio that can be fixed algorithmically. For example:
- Increasing the sampling frequency by 2x using bandlimited ("perfect") interpolation. Many audio receivers up-sample to 96 kHz or more before playback. (A sketch of this follows below the list.)
- Using a harmonizer to introduce higher harmonics derived from the existing audio into the new, higher frequency range (above 16 kHz). (Also sketched below.)
- Any number of other algorithms could take known MP3 and AAC artifacts and try to mitigate them and improve something.
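To make the first idea concrete, here is a minimal sketch of 2x up-sampling with bandlimited (polyphase) interpolation. It only illustrates the kind of DSP such a box *could* run; the function name and test signal are my own, and nothing here is based on what any particular switch actually does.

```python
# Minimal sketch: 2x up-sampling via bandlimited (polyphase) interpolation.
import numpy as np
from scipy.signal import resample_poly

def upsample_2x(pcm: np.ndarray, fs: int) -> tuple[np.ndarray, int]:
    """Up-sample a mono PCM signal by 2x (e.g. 48 kHz -> 96 kHz).

    resample_poly inserts zeros and applies a Kaiser-windowed low-pass
    filter (an approximation of ideal bandlimited interpolation), so no
    new content appears above the original Nyquist frequency.
    """
    return resample_poly(pcm, up=2, down=1), fs * 2

if __name__ == "__main__":
    fs = 48_000
    t = np.arange(fs) / fs
    tone = 0.5 * np.sin(2 * np.pi * 1_000 * t)   # 1 kHz test tone
    hi, fs_hi = upsample_2x(tone, fs)
    print(len(tone), "samples at", fs, "Hz ->", len(hi), "samples at", fs_hi, "Hz")
```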
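And a crude version of the second idea: a harmonizer/exciter that derives content above 16 kHz from the existing upper band by passing it through a soft nonlinearity. Again purely illustrative and of my own invention; real products would use far more sophisticated methods.

```python
# Crude exciter sketch: synthesize harmonics above ~16 kHz from existing audio.
import numpy as np
from scipy.signal import butter, sosfilt

def excite_highs(pcm: np.ndarray, fs: int, corner_hz: float = 8_000.0,
                 amount: float = 0.05) -> np.ndarray:
    """Add a small amount of synthetic harmonics derived from the top band.

    A high-pass isolates the upper band, a soft nonlinearity (tanh)
    generates odd harmonics of it, and a small fraction is mixed back in.
    Assumes fs is already high (e.g. 96 kHz after up-sampling), so the
    new harmonics land below Nyquist instead of aliasing badly.
    """
    sos = butter(4, corner_hz, btype="highpass", fs=fs, output="sos")
    band = sosfilt(sos, pcm)
    harmonics = np.tanh(4.0 * band)          # nonlinearity -> odd harmonics
    return pcm + amount * harmonics

if __name__ == "__main__":
    fs = 96_000
    t = np.arange(fs) / fs
    sig = 0.5 * np.sin(2 * np.pi * 10_000 * t)   # 10 kHz tone
    out = excite_highs(sig, fs)
    spectrum = np.abs(np.fft.rfft(out))
    freqs = np.fft.rfftfreq(len(out), 1 / fs)
    print("energy above 16 kHz:", spectrum[freqs > 16_000].sum())
```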
Why put that into a switch rather than a special driver? Maybe it is not possible to do on all devices.
b.) There is also this thing called a "power supply". You can affect the performance of any device on the power grid by plugging in one more device. Each new device, and even every power plant that joins or leaves the grid, influences the power supply and the frequency spectrum of the 50/60 Hz signal. You can tell which power plants are online and which devices in your household are running just by looking at the mains voltage signal. It is possible that some of these power supply disturbances (1 or 2 bits' worth) could leak into the D/A converter (the sound card) of the computer that is playing back the audio. The Ethernet connection can also be a source of such interference.
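To put the "1 or 2 bits" figure in perspective, here is a quick back-of-the-envelope calculation (my own arithmetic, not a measurement of any switch) of how quiet such leakage would be for a 16-bit D/A converter:

```python
# What "1 or 2 bits" of leakage means for a 16-bit DAC: the level of the
# least significant bits relative to digital full scale.
import math

full_scale = 2 ** 15                      # peak code of a 16-bit signed sample
for n_lsb in (1, 2):
    level_db = 20 * math.log10(n_lsb / full_scale)
    print(f"{n_lsb} LSB of disturbance ~ {level_db:6.1f} dBFS")
# 1 LSB ~ -90.3 dBFS, 2 LSB ~ -84.3 dBFS: very quiet, but in the range where
# mains-related hum is sometimes seen at the analog output of a sound card.
```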
This does not mean that the 2500 USD switch does any of this. Given how much false information the internet is flooded with, you could have hustlers selling anything.
Atmapuri