Regarding "AI", things changed in 2017 with the introduction of the transformer architecture. That was the turning point that started the current revolution.
There's a great Sora demo that visually shows the effect of more compute. It's a video of a puppy that isn't coherent - hardly recognizable - but as they increase the compute available to the network, the puppy becomes more realistic, until at 8x it's easy to mistake it for reality. It's a great way to understand why people believe that increasing compute will increase usefulness.
AI today can't reason very well and can't plan its responses. Matthew Berman has some great tests that show this. Ask any current LLM how many words will be in its reply to your question - it can't plan the response well enough to predict that. Ask it to produce 10 sentences that each end with the word "apple" - it can't do it. Heck, even try mathematics that requires multiple steps and many models will falter, something any cell phone calculator app can do with ease.
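If you want to score that "apple" test yourself rather than eyeballing it, here's a throwaway sketch in plain Python - the reply is just pasted in as a string, no particular model or API assumed, and the sentence splitting is deliberately naive:

```python
import re

def score_apple_test(reply: str) -> tuple[int, int]:
    """Count how many sentences in a model's reply end with the word 'apple'.

    Returns (sentences_ending_in_apple, total_sentences).
    """
    # naive split on ., ! or ? is good enough for this toy check
    sentences = [s.strip() for s in re.split(r"[.!?]", reply) if s.strip()]
    hits = sum(1 for s in sentences if s.lower().endswith("apple"))
    return hits, len(sentences)

# paste a real model reply here to test it
reply = "I bought an apple. The pie needs one more apple. Oranges are nice."
print(score_apple_test(reply))  # (2, 3) - two of three sentences end with "apple"
```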
Modern models are quantized - their weights rounded down to as few as 2 bits per value - and even the biggest models running on massive clusters rarely go beyond 16 bits of precision because of the lack of memory and compute. If we could run those models at 64-bit precision (easy to envision), quite a lot would change.
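To get a feel for what that rounding costs, here's a minimal sketch of simple uniform (min-max) quantization on some made-up weights - real model quantization schemes are more sophisticated, so treat this only as an illustration of how error grows as the bit width shrinks:

```python
import numpy as np

def quantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Snap each weight to one of 2**bits evenly spaced levels, then map back to floats."""
    levels = 2 ** bits
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / (levels - 1)
    q = np.round((weights - w_min) / scale)  # index of the nearest level
    return q * scale + w_min

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.02, size=10_000)  # toy stand-in for one layer's weights

for bits in (2, 4, 8, 16):
    err = np.abs(weights - quantize(weights, bits)).mean()
    print(f"{bits:2d}-bit: mean absolute error {err:.6f}")
```

Each extra bit roughly halves the spacing between representable values, so the error drops off quickly - which is also why the practical gap between 16-bit and 64-bit is much smaller than the gap between 2-bit and 16-bit.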
Smarter than a human at logical reasoning? Easy to get there. Better than everything that makes us human? Not likely.