I can never figure out from the press coverage just what is what. Exactly which applications work best on these high core count servers? Intel *could* always have matched core counts, but there was no good reason to: too many cores just cause contention, blocking, cache pressure, and I/O queues, not to mention per-core licensing issues.
But AMD went there more for marketing hype than any real benefit, and now Intel has been dragged into it too. Or at least that's how it looks to me.
Now, in SQL Server you might benefit from a bunch of cores (if you can afford the license, or the Azure tier), but the way it works is that lots of small queries only need one core each, while some big queries can run a lot faster if they're free to grab 4, 8, or 32 cores for a few seconds or minutes. So the optimal setup is a bunch of cores that sit idle 50-80% of the time and only get used for the occasional big (and usually sloppy) query. But Microsoft's licensing used to charge for those cores linearly, as if they were going to be pegged at 100% all the time. So Microsoft suppressed demand for high core counts on servers from about Y2K until I'm not sure when. Are they still doing that, or have they reintroduced a per-processor license that caps out at 10 or 16 cores or something?
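To make the parallelism point concrete, here's a minimal T-SQL sketch. The specific numbers (8-core server cap, cost threshold of 50, a 32-core hint) and the table name are just placeholders, but the knobs themselves are real: the server-wide MAXDOP and cost threshold settings keep small queries single-threaded, and a per-query hint lets one big query fan out across cores for a few seconds.

-- Server-wide defaults: let the optimizer parallelize only expensive plans,
-- and cap any one query at 8 cores (placeholder number).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
EXEC sp_configure 'cost threshold for parallelism', 50;  -- cheap queries stay on one core
RECONFIGURE;

-- Per-query override: this one big reporting query may grab up to 32 cores.
-- (dbo.Sales and its columns are hypothetical.)
SELECT CustomerId, SUM(Amount) AS Total
FROM dbo.Sales
GROUP BY CustomerId
OPTION (MAXDOP 32);

That's exactly the usage pattern described above: most of the time those extra cores sit idle, and the license question is whether you pay for them as if they didn't.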
SMH