Thanks for the news.
Yes, traditionally the view has been that useful quantum computation can’t be done without fault tolerance. However, IBM’s paper provides an important data point demonstrating that current quantum computers can deliver value much sooner than expected by using error mitigation (a total of 2,880 CNOT gates, if I remember correctly?).
Glad they focused on the quantum Ising model and tensor-network methods, and on the Pauli–Lindblad noise model for noise shaping in zero-noise extrapolation (ZNE). Shifting from Clifford to non-Clifford gates also helped provide a comparison between the quantum solution and the classical solution. But there are other techniques as well.
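For intuition only, here's a minimal sketch of the ZNE idea in plain Python/NumPy: measure the same observable at several amplified noise levels, fit a simple model, and extrapolate back to the zero-noise limit. The numbers below are made up, and the straight-line fit is a simplification; as I recall, the paper amplifies noise using the learned Pauli–Lindblad model and extrapolates with an exponential fit rather than a line.

```python
import numpy as np

# Toy zero-noise extrapolation (ZNE). The expectation values below are
# made-up stand-ins for the same observable measured at amplified noise
# levels (noise-scale factors of 1x, 1.5x, 2x, 3x).
noise_scales = np.array([1.0, 1.5, 2.0, 3.0])
measured_exp = np.array([0.72, 0.61, 0.52, 0.38])

# Fit a simple model (linear here) and evaluate it at zero noise.
coeffs = np.polyfit(noise_scales, measured_exp, deg=1)
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(f"extrapolated zero-noise value ≈ {zero_noise_estimate:.2f}")  # ~0.87 for these numbers
```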
Actually, once QEC is attained, building fault-tolerant quantum machines running millions of qubits in a quantum-centric supercomputing environment wouldn't be a far cry. It's about time we reached the 'utility-scale' threshold as an industry.
Unfortunately, most claims of quantum advantage are based on either random circuit sampling or Gaussian boson sampling, neither of which is considered a particularly useful application. There have been no useful applications demonstrating quantum advantage yet, since quantum computers are still too error-prone and too small.
Noise leads to errors, and uncorrected errors limit the number of qubits and gates we can incorporate in a circuit, which in turn limits the complexity of the algorithms we can run. Clearly, error control is important.
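As a rough back-of-the-envelope illustration (the per-gate error rates below are assumptions, not measured figures): if each two-qubit gate fails independently with probability p, the chance of a 2,880-CNOT circuit running error-free is roughly (1 − p)^2880.

```python
# Back-of-the-envelope: probability that a circuit with n two-qubit gates
# runs with zero errors, assuming each gate fails independently with
# probability p (the p values below are assumptions for illustration).
n_gates = 2880               # CNOT count quoted for IBM's experiment
for p in (1e-3, 1e-2):       # assumed per-gate error rates of 0.1% and 1%
    p_no_error = (1 - p) ** n_gates
    print(f"p = {p:.0e}: P(no error across {n_gates} gates) ≈ {p_no_error:.1e}")
# Roughly 5.6e-02 at p = 0.1% and 2.7e-13 at p = 1% -- which is why
# mitigation (and eventually correction) is unavoidable at this depth.
```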
We can also agree that realizing the full potential of quantum computers, such as running Shor’s algorithm to factor large numbers into primes, will surely require error correction.
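To make that concrete, here's a toy sketch where the order-finding step of Shor's algorithm, the part a fault-tolerant quantum computer would perform, is replaced by classical brute force; only the classical post-processing around it is the real thing.

```python
from math import gcd

def shor_classical_postprocess(N: int, a: int):
    """Toy version of Shor's algorithm for small N.
    The order r of a modulo N is found by brute force here; in the real
    algorithm this order-finding step is what the quantum computer does,
    and it is the step that demands deep, error-corrected circuits."""
    assert gcd(a, N) == 1, "pick a coprime to N"
    r = 1
    while pow(a, r, N) != 1:      # smallest r with a^r = 1 (mod N)
        r += 1
    if r % 2 == 1:
        return None               # unlucky choice of a; retry with another
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None               # another unlucky case; retry
    p, q = gcd(x - 1, N), gcd(x + 1, N)
    return (p, q) if 1 < p < N else None

print(shor_classical_postprocess(15, 7))  # -> (3, 5)
```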
IBM has actually done more error-mitigation research than most of its competitors. Its current roadmap shows a detailed focus on error mitigation beginning in 2024, leading toward fault tolerance thereafter.
By the way, IBM isn’t claiming that any specific calculation tested on the Eagle processor exceeded the abilities of classical computers. Other classical methods may soon return more accurate answers for the very calculation IBM was testing.