If the predicting benchmark isn't sufficiently general then how is it supposed to make any useful predictions?
Whether intentionally or not, you're playing a shell game with words: making a claim unrelated to mine while making it sound like we're talking about the same thing.
A benchmark exists to predict how a system will perform on similar tasks. What you're describing is the
compiler making predictions, except the compiler isn't even doing that much, since the benchmark is deterministic. What you're spinning as a virtue is actually no such thing.
It's like this: a student who successfully studies for an exam will have predicted what the test covers and reviewed the relevant material. That would be a legitimate use of the term. What Intel did was essentially have the compiler
memorize the correct answers, so it does well on the test without understanding the material or being able to work through the problems. Present it with a slightly different test covering similar material and it wouldn't do nearly as well, because it doesn't actually know how to solve the problems.
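To make that concrete, here is a toy sketch in Python (all names are invented for illustration; this isn't what Intel's compiler literally does) of the difference between honestly doing the work and pattern-matching a known benchmark input:

```python
# Toy illustration of "memorizing the answers" vs. doing the work.
# All names and values here are invented for the example.

def honest_sum_of_squares(values):
    """Actually computes the result, so it generalizes to any input."""
    return sum(v * v for v in values)

# The exact input a hypothetical benchmark always feeds in.
KNOWN_BENCHMARK_INPUT = tuple(range(1000))
MEMORIZED_ANSWER = 332833500  # precomputed offline, once

def cheating_sum_of_squares(values):
    """Looks impressively fast on the benchmark, but only because it
    recognizes the benchmark's fixed input and returns a canned answer."""
    if tuple(values) == KNOWN_BENCHMARK_INPUT:
        return MEMORIZED_ANSWER      # no real work done
    return sum(v * v for v in values)  # any other input: no advantage
```

Because the benchmark is deterministic, the "cheating" version scores perfectly on it; change the input even slightly (the "slightly different test") and its advantage evaporates, which is exactly why the score stops being predictive.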
In other words if it is playable/cheatable then it's a bad benchmark.
This is a false and arbitrary standard. Plenty of real-world workloads are cheatable, and making a benchmark un-cheatable would change its fundamental nature. It would be like dealing with athletes taking steroids by removing athletics from the competition entirely and making them compete at chess instead.
The solution isn't to bend over backwards making the benchmark hard to cheat, compromising its integrity and predictive power in the process; it's to catch & punish cheaters.
Anyone who doesn't regularly follow these forums should be aware of your
100% perfect record of supporting & defending Intel while bashing their competitors.