The chips are intended for workloads that benefit more from extra cores than faster cores. Those usually have tight data locality and don't depend much on memory performance
I think quite the opposite. When you decrease single-thread performance (either by clockspeed, IPC, or both), it has the effect of reducing software-visible latencies! The same nanoseconds of DRAM latency simply cost a slower core fewer cycles of forgone work.
That's actually one reason why lower-clocked, lower-IPC cores should scale
better than their big brothers. But, it only gets you so far. Maybe a factor of 2, at the outside.
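To put rough numbers on that scaling (the ~90 ns DRAM latency and the two clock speeds below are illustrative assumptions, not Sierra Forest figures):

# Back-of-the-envelope: the same DRAM round trip costs a slower core fewer
# *core cycles*, so each cache miss wastes less of the work it could have done.
# Latency and clock figures are assumptions chosen purely for illustration.

DRAM_LATENCY_NS = 90.0              # assumed loaded round-trip latency to DRAM

for clock_ghz in (4.0, 2.0):        # illustrative "big core" vs. E-core clocks
    cycles_lost = DRAM_LATENCY_NS * clock_ghz   # ns * GHz = core cycles stalled
    print(f"{clock_ghz:.1f} GHz core: one miss ~ {cycles_lost:.0f} cycles of lost work")

# 4.0 GHz core: one miss ~ 360 cycles of lost work
# 2.0 GHz core: one miss ~ 180 cycles of lost work

Halve the clock and each miss costs half as many cycles of forgone work; that's the sense in which the latency becomes less visible, and also roughly where the benefit tops out.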
such as most crypto algorithms, which have working data sets only a few kiB in size apart from the data being (de)crypted or hashed, so multiples easily fit in L2$.
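(For scale, the per-stream state of common ciphers and hashes really is tiny next to an E-core's L2. A quick sketch; the 4 MiB of shared L2 per module is my assumption, not a quoted Sierra Forest spec:)

# How many independent crypto contexts fit in one (assumed) 4 MiB L2 slice.
# The context sizes are the fixed state sizes of each algorithm.

L2_BYTES = 4 * 1024 * 1024

contexts = {
    "AES-256 expanded key schedule": 240,       # 15 round keys x 16 bytes
    "ChaCha20 state":                 64,       # 16 x 32-bit words
    "SHA-256 state + block buffer":   32 + 64,  # 8 x 32-bit words + one block
}

for name, size in contexts.items():
    print(f"{name}: {size} B -> ~{L2_BYTES // size:,} contexts per 4 MiB of L2")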
Name a crypto currency which fits that profile and isn't already being dominated by ASICs.
AFAIK, people generally don't use CPUs for crypto, period. And especially not pricey server CPUs.
I'm sure there are AI models that would scale well with this too.
No. AI is massively bandwidth-intensive and nearly all models in popular use are too big to reside in L2 or even L3 cache. That's the whole reason you see Nvidia, AMD, and Intel deploying HBM in their top server-oriented accelerators/"GPUs". Furthermore, AI gains a decent amount of benefit from AVX-512 and AMX. Have a close look at Intel's marketing materials for the HBM-equipped Xeon Max, which seems heavily targeted at AI.
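Rough numbers make the point (the model size, cache size, and token rate below are illustrative assumptions, not benchmarks):

# Even a modest model's weights dwarf on-die SRAM, and batch-1 token
# generation wants to stream essentially all of them for every token.
# All figures below are assumptions picked purely for illustration.

params        = 7e9            # assumed 7B-parameter model
bytes_per_w   = 2              # FP16/BF16 weights
weights_bytes = params * bytes_per_w

l3_bytes = 320 * 1024**2       # assumed ~320 MiB of L3 on a big server CPU

print(f"weights: {weights_bytes / 1024**3:.1f} GiB vs. L3: {l3_bytes / 1024**3:.2f} GiB "
      f"({weights_bytes / l3_bytes:.0f}x larger)")

tokens_per_s = 20              # assumed generation rate for one user
print(f"~{weights_bytes * tokens_per_s / 1e9:.0f} GB/s of weight traffic at {tokens_per_s} tok/s")

That kind of weight traffic is a bandwidth problem, not a cache problem, which is exactly what the HBM is there for.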
The workloads Sierra Forest seems to target are traditional transaction-oriented server apps, like databases, web servers, media streaming, etc.