I think back to two things.
One, we've already done this. IBM bet big on RISC in the 1990s and lost miserably. RISC was the second coming of computing, the messiah; it was going to save us all!
Two, for quite some time the top supercomputer in the world was built by Fujitsu using ARM (RISC) chips. Averaged for compute, it was actually slightly less efficient than its x86 competitors.
The success of RISC, and therefore ARM, over the last couple of decades has been based solely on "good enough and uses less power." The M1 and its successors were possible only because we finally had enough overhead in processing power to make emulation at least less painful. It's "good enough and uses less power." (And different enough from x86 that benchmarks between the two are murky at best.)
That said, the power optimization of x86 is an entirely new development. It seems you can build a 20-hour laptop with an x86 chip if you put your mind to it.
So, I'm not really ready to believe this just yet.