Well you should read better. I didn't say CPU instruction sets, I said instruction sets. I said this specifically because I was addressing a complaint that Intel CPUs were hard to code for because the instruction set was a mess.
This is an article about x86, though. Trying to drag the subject of iGPUs into the discussion is not only irrelevant but also wrong.
It's wrong because Intel changed their iGPU instruction set between Gen 9 and Gen 11, and then again with Gen 12. It's also irrelevant, because games don't generally contain native GPU code, but instead contain Direct3D HLSL shader code that gets compiled at runtime.
You should really understand a conversation and its context before opening your mouth and shoving your foot in it.
You saying that to me = 🤣
Ahh! Why are people so dumb! You can specify in programming what encryption is used, and different CPUs have different availability of encryption sets based on whether a security processor is present and what instruction sets it uses. And this is different across market segments!
This is an article about x86 ISA differences. The security processor has nothing to do with that. What's "dumb" is to muddle a straightforward discussion of x86 ISA with a discussion of all the other hardware engines a CPU might or might not have, because they don't interact with programs in the same (or even a meaningful) way.
What @NinoPino seemed concerned with was application developers having to support their programs on different CPUs, and the issues posed by Intel's x86 ISA shenanigans. These userspace programs don't have any direct interactions with the security processor - that's all handled by the OS, drivers, and BIOS. Therefore, it's not relevant to the discussion.
If you think someone else is dumb for not agreeing with you, maybe the problem is that you're just missing something they're not.
*headdesk* Why? Why would you say something this dumb and untrue! The different cores have different AVX instruction sets. That's just a fact.
The key word is "support". They implemented, but don't support, AVX-512 on P-cores in Gen 12+ CPUs. They also said they didn't validate it, which means there could be bugs affecting those who have early CPU revs and motherboards which let them enable it.
BTW, I think you need to learn the distinction between "dumb" and "ignorant". See also: arrogant.
AVX on Zen 1 is a different instruction set than AVX on Zen 2. AVX2 on Zen 1 is a completely different instruction set from AVX2 on Zen 2. And so on and so on. It's the same on the Intel side, because the AVX series of instruction sets doesn't do optional features! Any additional changes create a whole new instruction set, just like the different SSE version numbers were different instruction sets.
You're confused. Note that I say "confused" and not "dumb".
AVX introduced a set of instructions which are present on every CPU advertising the feature in its CPUID flags. Likewise, AVX2 did the same. If you take a piece of software which uses AVX or AVX2, then it'll run on any CPU which advertises AVX or AVX2 support, respectively. It's really that simple.
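For anyone following along, that check is only a few lines of code. Here's a minimal sketch using GCC/Clang's <cpuid.h> (a production version should also verify OSXSAVE and the XCR0 bits, i.e. that the OS actually saves YMM state, which I'm glossing over here):

```cpp
// Minimal sketch (GCC/Clang): read the CPUID feature flags for AVX/AVX2.
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned eax, ebx, ecx, edx;

    // CPUID.(EAX=1):ECX bit 28 advertises AVX
    bool avx = __get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 28));

    // CPUID.(EAX=7,ECX=0):EBX bit 5 advertises AVX2
    bool avx2 = __get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) && (ebx & (1u << 5));

    std::printf("AVX: %d  AVX2: %d\n", avx, avx2);
}
```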
And Zen4 supports a very specific set of AVX-512 instructions but not others!
Zen4 implemented all AVX-512 subsets supported by Ice Lake, and then added Cooper Lake's BF16 instructions. Compared with Gen 12, the main things it's lacking are VP2INTERSECT and FP16. The former is a single instruction that you can easily emulate with a handful of others, while the latter is of limited value, given they already have BF16. Intel's Gen 12 P-core was the first to support either.
In other words, no. Zen4 isn't terribly lacking in the completeness of its AVX-512 support.
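For context on what VP2INTERSECT actually computes, here's a scalar sketch of the 32-bit variant's semantics (intersect16 is a hypothetical helper, not a real API; the actual intrinsic is _mm512_2intersect_epi32). It's an all-pairs equality test producing two masks, which is why it's emulatable with ordinary compares and shuffles:

```cpp
// Scalar sketch of vp2intersectd semantics on 16x 32-bit elements.
#include <array>
#include <cstdint>

struct Masks { uint16_t k1, k2; };  // one bit per element of a and b

Masks intersect16(const std::array<uint32_t, 16>& a,
                  const std::array<uint32_t, 16>& b) {
    Masks m{0, 0};
    for (int i = 0; i < 16; ++i)
        for (int j = 0; j < 16; ++j)
            if (a[i] == b[j]) {             // all-pairs equality test
                m.k1 |= uint16_t(1u << i);  // a[i] occurs somewhere in b
                m.k2 |= uint16_t(1u << j);  // b[j] occurs somewhere in a
            }
    return m;
}
```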
not all AVX2-supporting CPUs run all of the AVX2 instructions!
Since you're repeating this claim, I'm sure you'll be able to provide specific examples, should you decide to double-down on it.
This is even more true with AVX-512, and yes much worse on Intel's side because they have had AVX-512 longer.
What makes AVX-512 different is that they created distinct CPUID flags for each of the subsets. So, while we can talk about AVX-512 like it's one thing, a program has to actually check what subsets the CPU supports, in order to ensure binary compatibility.
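In code, that means gating an AVX-512 path on the specific subsets it uses. A minimal sketch with GCC/Clang's __builtin_cpu_supports (the subset names here are just examples; check whichever ones your kernel actually needs):

```cpp
#include <cstdio>

// Gate on the exact subsets the kernel uses, not "AVX-512" as one thing.
bool can_run_my_avx512_kernel() {
    return __builtin_cpu_supports("avx512f")    // foundation
        && __builtin_cpu_supports("avx512bw")   // byte/word ops
        && __builtin_cpu_supports("avx512vl");  // 128/256-bit forms
}

int main() {
    std::puts(can_run_my_avx512_kernel() ? "AVX-512 path" : "fallback path");
}
```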
because of this, developers have just tended to stick with AVX-series instructions supported across the whole spectrum, with some going so far as to do different compiles for different CPU generations.
Until Gen 11, there really wasn't much benefit to supporting AVX-512 in non-server/workstation programs, because none of the client CPUs had it.
The other big problem facing early adopters of AVX-512 was how inefficiently it ran on 14 nm CPUs. This manifested as clock-throttling in server CPUs and very high power usage in client CPUs. With the latest generation of CPUs to implement it, these downsides are nearly gone.
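On the "different compiles per generation" point: GCC (and newer Clang, on ELF targets) can automate that within one binary via function multi-versioning. A sketch - the compiler emits one clone per listed target and the loader picks the best match via CPUID:

```cpp
// Each clone is compiled for its target; dispatch happens at load time.
__attribute__((target_clones("default", "avx2", "avx512f")))
void scale(float* x, float s, int n) {
    for (int i = 0; i < n; ++i)  // auto-vectorized per clone
        x[i] *= s;
}
```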
What AVX10 does is unify the whole AVX, AVX2, and AVX-512 instruction set
Eh, not more than AVX-512 already did. It could already operate on 128-bit and 256-bit operands, as well as 512-bit ones, so AVX10 really isn't changing anything, there. Nor is it deprecating AVX or AVX2. Essentially, all it's doing is saying:
- prior AVX-512 subsets are now subsumed into a single feature.
- 512-bit operand support is now optional.
Beyond that, there are a couple of minor changes it makes in flag-handling and (IIRC) load/store instructions vs. AVX-512.
The first AVX10 specification is aimed at getting the individual AVX types to unify going forward
AVX10.1 is virtually just a CPUID-level change. Really minor stuff.
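For the curious, here's roughly what the new enumeration looks like, per my reading of the spec linked below (double-check the bit positions against the spec before relying on them):

```cpp
// Sketch: AVX10 enumeration via CPUID (GCC/Clang intrinsics).
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned eax, ebx, ecx, edx;

    // CPUID.(EAX=7,ECX=1):EDX bit 19 - is AVX10 supported at all?
    if (!__get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx) || !(edx & (1u << 19))) {
        std::puts("no AVX10");
        return 0;
    }

    // Leaf 0x24, sub-leaf 0: converged version number and vector widths.
    __get_cpuid_count(0x24, 0, &eax, &ebx, &ecx, &edx);
    unsigned version = ebx & 0xffu;  // EBX[7:0] - AVX10 version number
    bool zmm = ebx & (1u << 18);     // EBX[18]  - 512-bit vectors supported
    std::printf("AVX10.%u, 512-bit: %d\n", version, zmm);
}
```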
a dev doesn't have to care at all about what AVX series is available on the processor because the CPU will sort it out for them by making sure that AVX-512 instructions go only to the 512 units and AVX2 instructions go to AVX2 compatible units.
Intel has stated that hybrid client CPUs will not support different widths of AVX10 on different cores. AVX10/512 will be limited to P-core only CPUs.
Here's the quote:
A “converged” version of Intel AVX10 with maximum vector lengths of 256 bits and 32-bit opmask registers will be supported across all Intel processors, while 512-bit vector registers and 64-bit opmasks will continue to be supported on some P-core processors.
Source: https://cdrdv2-public.intel.com/784267/355989-intel-avx10-spec.pdf
BTW, it's not possible for the CPU to steer instructions to one core or another - that would need to be handled by the OS. All the CPU can do is raise an invalid-opcode exception (#UD) when an unsupported instruction is executed on a given core. And any scheme you come up with for having the OS steer threads to different cores, based on their use of unsupported instructions, has pitfalls and downsides that make it a problematic solution, at best.
The compilers and the CPUs will also have fallback instructions built in if the code calls for a feature that's not available. And in theory this is all supposed to be transparent to both the Dev and the End User.
You can't have a fallback for using 512-bit operands, if the CPU doesn't implement 512-bit registers.
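The fallback has to be separate code that the developer (or compiler) supplies and selects at runtime; a CPU without ZMM registers simply faults (#UD) on a 512-bit instruction. A sketch of that hand-rolled dispatch (dot_avx512/dot_avx2 are illustrative stand-ins for real kernels):

```cpp
#include <cstddef>

float dot_avx512(const float*, const float*, size_t);  // built with -mavx512f
float dot_avx2(const float*, const float*, size_t);    // the fallback path

float dot(const float* a, const float* b, size_t n) {
    // Resolved once; __builtin_cpu_supports reads CPUID (GCC/Clang).
    static const auto impl =
        __builtin_cpu_supports("avx512f") ? dot_avx512 : dot_avx2;
    return impl(a, b, n);
}
```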