Quantum Communications Demoed Across Subsea Fiber Optics

Status
Not open for further replies.
I think a somewhat critical detail was omitted from this article, @Francisco Alexandre Pires . Your earlier article explains that:

"The research finally opens the door to long distance Quantum Key Distribution (QKD). QKD is essentially a distribution protocol for encryption keys, albeit based on quantum physics - and is being hailed as the final frontier in encryption schemas. This "final frontier of security" is being touted on the basis of quantum physics, and the behavior of qubits, themselves: after data has been encrypted with a secure QKD key, it can then be sent over an insecure connection (such as the internet), where only the holders of the decryption key can access its contents."

This way, entanglement merely has to last long enough for the encryption key to be sent. I assume that's still the idea?
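To make the quoted idea concrete, here is a minimal sketch of what happens *after* QKD has done its job: both parties hold an identical secret key, and the data itself travels over an ordinary insecure channel. The key generation here is simulated with `secrets` purely for illustration; real QKD (e.g. BB84) derives the key from quantum measurements.

```python
import secrets

# Illustration only: pretend QKD has already given both parties this shared key.
shared_key = secrets.token_bytes(32)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # One-time-pad-style XOR. This is only secure if the key is as long as the
    # data and never reused -- which is exactly the kind of fresh key material
    # that QKD is meant to distribute.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"meet at dawn"
ciphertext = xor_cipher(message, shared_key)   # safe to send over the internet
recovered = xor_cipher(ciphertext, shared_key) # only key holders get this back
assert recovered == message
```

The point being: the quantum link only has to survive long enough to agree on the key; the bulk data never touches it.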
 
From what I've read, you can't use quantum entanglement for faster-than-light communications:

Perhaps wormholes are the best chance to break the speed of light - for communication, at least?
True, but most of the 90 ms latency between the EU and US is not a result of speed-of-light limitations; it's a result of additional hops and inefficient infrastructure in between, which quantum entanglement could cut out. The distance-independence of entanglement is hugely beneficial even if the link effectively runs at just half the speed of light. Also, dual or quad links can be used to multiply throughput even if each link is limited by the speed of light.
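A back-of-the-envelope check of how much of that 90 ms the fiber itself accounts for (the distance and refractive index below are rough assumptions, not measurements):

```python
# Rough transatlantic propagation-delay estimate (assumed figures).
distance_km = 6_000        # approx. great-circle EU <-> US east coast
c_vacuum_km_s = 299_792    # speed of light in vacuum
n_fiber = 1.47             # typical refractive index of silica fiber
v_fiber_km_s = c_vacuum_km_s / n_fiber   # ~204,000 km/s in glass

one_way_ms = distance_km / v_fiber_km_s * 1000
rtt_ms = 2 * one_way_ms
print(f"one-way ~{one_way_ms:.0f} ms, round trip ~{rtt_ms:.0f} ms")
```

Under those assumptions the round trip in fiber alone is on the order of 60 ms, so the remainder is what routing hops and equipment add on top.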
 
True, but most of the 90 ms latency between the EU and US is not a result of speed-of-light limitations; it's a result of additional hops and inefficient infrastructure in between
If you think about it, infrastructure really doesn't want to add latency, because that requires some form of RAM to buffer the packets. When you're talking about Terabit/s links, adding 1 ms of latency means a Gigabit of buffering, but you need to simultaneously read and write the buffer memory at terabit speeds (125 GB/s, duplex). That means it either needs to be on-die or something like HBM, either of which gets expensive.
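The arithmetic in that paragraph checks out; a quick sketch (the 1 Tbit/s link rate is an assumption for the example):

```python
# Sanity-check the buffering arithmetic for a fast backbone link.
link_bits_per_s = 1e12   # assumed 1 Terabit/s link
latency_s = 1e-3         # 1 ms of added latency

buffer_bits = link_bits_per_s * latency_s    # bits in flight during that 1 ms
buffer_gigabits = buffer_bits / 1e9          # -> 1 Gigabit of buffer needed
throughput_GB_s = link_bits_per_s / 8 / 1e9  # -> 125 GB/s, each direction
print(buffer_gigabits, throughput_GB_s)      # 1.0 125.0
```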

To minimize the amount of memory needed, they have an incentive to do as little buffering as possible. I'd wager most or all of the backbone equipment operates in cut-through mode, rather than store-and-forward. It's not only lower-cost and lower-latency, but also more energy-efficient.
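A toy comparison of why cut-through is so much cheaper per hop: store-and-forward must receive the entire frame before transmitting, while cut-through only waits for enough of the header to make a forwarding decision. All sizes and rates below are assumptions for illustration.

```python
# Toy per-hop forwarding-delay comparison (assumed sizes/rates).
link_bps = 1e12        # 1 Tbit/s link
frame_bits = 9000 * 8  # a jumbo frame
header_bits = 64 * 8   # roughly enough bytes for a forwarding decision

store_and_forward_s = frame_bits / link_bps   # wait for the whole frame
cut_through_s = header_bits / link_bps        # forward once the header is read

print(f"store-and-forward: {store_and_forward_s * 1e9:.1f} ns/hop")
print(f"cut-through:       {cut_through_s * 1e9:.1f} ns/hop")
```

Multiply that per-hop difference by a dozen hops across a backbone and the choice is obvious, quite apart from the memory cost.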
 