More like late 2000s; there's a reason Intel went to servers first, as they would benefit the most from an implicitly parallel architecture. Intel would then ramp clock speed, get MSFT's server OSes to work out the architectural kinks, then eventually release to general consumers, probably around the time Windows 7 would have come around.
x86 and its descendant architectures have been tapped out for a very long time; why do you think Intel keeps coming up with dedicated instructions to do specific operations faster? And now that die shrinks are basically coming to an end, there really aren't any more ways to squeeze performance out of it. Adding more cores is only efficient up to a point; never mind that you're still limited by how fast an individual core is capable of going.
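To be concrete about what "dedicated instructions" means, here's a minimal sketch (my own illustrative example, not from anything Intel ships): counting set bits in a word the generic way versus with the POPCNT instruction Intel added alongside SSE4.2. The generic loop takes dozens of instructions per word; the dedicated one does it in a single instruction.

```c
/* Sketch: generic bit counting vs. the dedicated POPCNT instruction.
 * Compile with something like: gcc -O2 -mpopcnt popcnt_demo.c
 * Function names are just illustrative. */
#include <stdint.h>
#include <stdio.h>
#include <nmmintrin.h>   /* _mm_popcnt_u64 (SSE4.2 / POPCNT) */

/* Generic C: loop over every bit, works on any architecture. */
static uint64_t popcount_generic(uint64_t x) {
    uint64_t count = 0;
    while (x) {
        count += x & 1;
        x >>= 1;
    }
    return count;
}

/* Hardware path: one POPCNT instruction does the whole thing. */
static uint64_t popcount_hw(uint64_t x) {
    return (uint64_t)_mm_popcnt_u64(x);
}

int main(void) {
    uint64_t v = 0xDEADBEEFCAFEBABEULL;
    printf("generic: %llu\n", (unsigned long long)popcount_generic(v));
    printf("popcnt:  %llu\n", (unsigned long long)popcount_hw(v));
    return 0;
}
```

Same story with AES-NI, CRC32, the various SSE/AVX extensions, and so on: when you can't make the core itself faster, you bolt on special-purpose hardware for the hot operations.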
Of course, now that x86-64 is entrenched and has legacy SW support behind it, we're stuck with it until some other architecture can emulate x86 and x86-64 in hardware, which is damn near impossible. We're likely stuck with it for a few decades to come.