I think you are under the assumption that new would be cheaper? Not at first. Establishing new standards takes capital and the expectation of losses.
Logistical needs and fulfillment take a long time to establish and optimize. There is enough momentum behind silicon-based lithography and TSMC/Samsung/Intel that you will be seeing this stuff for decades.
ARM is simply a different way to design a CPU, and ARM chips are not necessarily significantly cheaper than cheap x86 offerings. They are cheap because they are mass produced to an extreme degree. The top-end ARM processors are certainly larger and more expensive, but older ARM cores are still being made, and as process nodes shrink, each chip gets cheaper, so they can be sold at very low prices. In effect, a combination of old design and new manufacturing is the most cost effective.
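To make that concrete, here is a rough back-of-the-envelope sketch in Python using the standard dies-per-wafer approximation. The wafer cost and die sizes are made-up illustrative numbers, not real foundry pricing:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic approximation: usable dies on a circular wafer,
    with a correction term for dies lost at the edge."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Illustrative, made-up numbers: a small older ARM core vs a large x86 die,
# both on a 300 mm wafer that we pretend costs $6,000 to process.
WAFER_COST = 6000.0
for name, area in [("small ARM core", 10.0), ("large x86 die", 180.0)]:
    n = dies_per_wafer(300, area)
    print(f"{name}: ~{n} dies/wafer, ~${WAFER_COST / n:.2f} per die")
```

With those toy numbers the small die comes out under a dollar per chip while the large one costs roughly twenty times more. Amortize an old design over a mature, high-yield node and the per-chip cost falls off a cliff, which is exactly the niche those older ARM cores fill.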
Switching to something like optical computing would mean a wholesale change in the chemical and physical construction process: new production lines, new sources of supply, and you have to convince everyone it is worth it over existing solutions. High-speed optical computing is still a ways off; there have been promising lab demos for a while, but nothing at the scale of mass production. The most recent work shows a way to build logic into crystal structures that attenuate lasers in different ways, with sensors on the other side recording the results. So the concept is sound. Nanoscale lasers were invented about a decade ago. It is all coming together, but it is still going to be a while.
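As a purely conceptual toy (a sketch of the idea, not a model of any real device, and every number in it is an assumption), intensity-threshold logic looks something like this: input beams pass through an attenuating structure, and a detector on the far side decides the output bit by comparing the light it receives against a threshold.

```python
def optical_and(beam_a: float, beam_b: float,
                attenuation: float = 0.5, threshold: float = 0.8) -> bool:
    """Toy intensity-threshold gate: both input beams are attenuated
    by the structure, and the detector reports 1 only if the combined
    intensity it receives clears the threshold.
    Attenuation and threshold are made-up illustrative values."""
    received = (beam_a + beam_b) * attenuation
    return received >= threshold

# With beam intensity normalized to 1.0 = on, 0.0 = off,
# this reproduces an AND truth table:
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", optical_and(float(a), float(b)))
```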
Lasers are already integrated into (and essential to) computing in the form of fiber optics. More directly, opto-isolators can be found in circuits all the time; they let you keep electrical circuits separate while still passing data, commands, or sensor readings between them.
Quantum computing is still energy intensive and non-portable, so it is not a good solution here. Quantum will likely make its way into major chunks of the internet before it makes it into personal devices, and that is still decades off. Even then it would mostly be for security rather than actual end-user computation: there are very few tasks a quantum computer performs well that would translate to end-user performance in software (breaking or hardening cryptography matters for the network, not for running your apps faster).
Carbon nanotubes / graphene are also promising, with P- and N-type materials possible just by different twists in the nanotube structure, by doping graphene sheets with specific elements, or (the coolest one) by laying two graphene sheets on top of each other at different angles. Mass production is the main problem. Right now it is all about inducing nanotubes to form, locating them with an electron microscope, analyzing their properties, harvesting them, applying them to an experiment, and testing; graphene sheets have to be harvested and arranged the same way. They can't just lay out whatever structures they want the way lithography allows, so anything functional made with nanotubes or graphene is basically a one-off. That becoming common is many decades away, because the tools that allow such construction have to be built first. Most nanotube use you see today exploits their bulk physical properties instead.
RISC (ARM, PowerPC, RISC-V, etc.) and CISC (x86-64, x86) are basically the only options right now. The software exists for them, so transitioning to an easier-to-build design would take a lot of effort. There are some off-the-wall ideas out there like zero instruction set computing, which should eliminate the need for machine code, but that is more down the path of machine learning, where the CPU basically tries until it gets the correct answer, optimizes, and then locks that in. Useful for certain tasks and applications, but not so good for general computing. Most of the rest are just variations, or the answer is again RISC or CISC in a slightly different variant.
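For the flavor of the distinction, here is a toy Python sketch (not any real ISA; the opcodes, addresses, and values are all made up): a load/store RISC machine only touches memory through explicit loads and stores, while a CISC machine folds the memory access into the arithmetic instruction itself.

```python
# Toy sketch: increment a value in memory two ways, to illustrate the
# load/store (RISC) vs memory-operand (CISC) split.
mem = {0x10: 41, 0x14: 41}
regs = [0] * 4

# RISC style: memory is only touched by explicit loads and stores;
# the arithmetic works register-to-register.
for op, reg, arg in [
    ("LOAD",  0, 0x10),   # r0 <- mem[0x10]
    ("ADDI",  0, 1),      # r0 <- r0 + 1
    ("STORE", 0, 0x10),   # mem[0x10] <- r0
]:
    if op == "LOAD":
        regs[reg] = mem[arg]
    elif op == "ADDI":
        regs[reg] += arg
    elif op == "STORE":
        mem[arg] = regs[reg]

# CISC style: a single instruction performs the whole read-modify-write,
# like x86's `add dword [addr], 1`.
mem[0x14] += 1

print(mem[0x10], mem[0x14])  # both 42
```

The RISC version takes three simple instructions where the CISC version takes one complex one; that trade-off (simpler hardware and decoding vs denser code) is the heart of the debate, and it is largely orthogonal to which design is cheaper to manufacture.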
The way things are going, tighter integration is the path forward for now. Chiplets, universal substrates, and die stacking should make for extremely dense portable electronics.