I think neuromorphic chips with spiking neural network architectures will adapt extremely well to 3D, since they don't require full activity at all times and intrinsically have low power consumption. And what do you know... real brains are also 3D. I don't know about orders of magnitude, though; I'm no expert. It seems to me like power density and fabrication costs are going to be very big challenges.
New materials could be a game changer, though. That's where I'm pinning most of my hopes. Although, eh... I don't know if I really care that much whether computers get any faster or not.
But new materials may help classical computers make that transition, and there will be obvious optimizations, such as further shrinking the distance between logic and memory. Based on projections of carbon nanotube-based 3DSoC, a 100-1000x improvement in performance is plausible.
On what to do with such performance, we may see previously unfathomable new applications emerge. But if you don't care about raising the bar, you'll probably like lowering the bar, e.g. putting the performance of today's top-of-the-line workstations inside a Pi Zero-like chip.
I get the sentiment about not caring whether computers get faster, but it applies better to our current situation. A Ryzen 9 9950X is about 90% faster than a Core i7-6700K in single-threaded performance, and about 7.4x faster in multi-threaded (per PassMark). That's a good improvement, but not life-changing, especially in anything that isn't multi-threaded, and you can easily use that old processor for almost everything people do today. Having said that, we probably won't see sudden huge gains out of nowhere without fierce competition, and we probably shouldn't expect monolithic 3D consumer chips before 2040.
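For concreteness, those speedup figures are just benchmark-score ratios. Here's a minimal sketch of the arithmetic; the PassMark scores below are approximate numbers I've plugged in as assumptions for illustration, not quoted from a specific benchmark run, so check current figures before relying on them:

```python
# Speedup = new_score / old_score; "90% faster" means a ratio of ~1.9x.

def speedup(new_score: float, old_score: float) -> float:
    """How many times faster the new score is than the old one."""
    return new_score / old_score

# Approximate PassMark scores (assumed for illustration).
i7_6700k_st, i7_6700k_mt = 2320, 8970    # Core i7-6700K (2015)
r9_9950x_st, r9_9950x_mt = 4400, 66400   # Ryzen 9 9950X (2024)

st_gain = speedup(r9_9950x_st, i7_6700k_st)  # ~1.9x single-threaded
mt_gain = speedup(r9_9950x_mt, i7_6700k_mt)  # ~7.4x multi-threaded

print(f"ST: {st_gain:.1f}x, MT: {mt_gain:.1f}x")
```

Nine years between those two chips, so roughly a 1.9x single-threaded gain per decade; against that baseline, a hypothetical 100-1000x jump from 3D integration really would be a different regime.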