AMD's upcoming Zen 5-based processors are set to triple microcode size.
AMD Set to Substantially Increase Microcode Size of Future CPUs : Read more
Plenty of room for future malware to be installed?
> Plenty of room for future malware to be installed?
If a program or hacker managed to update the microcode in your CPU, it's game over. Even with the current size limits.
> How exactly?
Maybe if somebody found a way to alter the C:\Windows\System32\mcupdate_AuthenticAMD.dll file that rewrites the microcode your system is using at Windows boot?
> Maybe if somebody found a way to alter the C:\Windows\System32\mcupdate_AuthenticAMD.dll file that rewrites the microcode your system is using at Windows boot?
Yeah, but if a hacker gets root, you're hosed.
> Yeah, but if a hacker gets root, you're hosed.
You are right that the updates aren't persistent across reboots and don't stick with the CPU in any way other than the current Windows session.
AFAIK, microcode updates aren't persisted in the CPU across boots, so it's not a place where someone could hide a rootkit. Therefore, it's no more special than tons of other exploits someone could do with admin privileges.
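You can see the non-persistence for yourself: on Linux, the microcode revision the kernel loaded for the current boot shows up in /proc/cpuinfo, and it reverts to whatever the firmware ships after a reboot without the update package. A small Python sketch (the sample text below is made up, but it follows the x86 cpuinfo format):

```python
# Toy helper: pull the distinct "microcode" revision values out of
# /proc/cpuinfo-style text. On a real Linux box you'd pass the contents
# of /proc/cpuinfo; the sample string here is synthetic.

def parse_microcode_revisions(cpuinfo_text: str) -> set[str]:
    """Collect the distinct 'microcode' revision values from cpuinfo text."""
    revisions = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("microcode"):
            # lines look like: "microcode\t: 0xa201016"
            revisions.add(line.split(":", 1)[1].strip())
    return revisions

# Synthetic snippet in the format /proc/cpuinfo uses on x86:
sample = (
    "processor\t: 0\n"
    "vendor_id\t: AuthenticAMD\n"
    "microcode\t: 0xa201016\n"
    "processor\t: 1\n"
    "microcode\t: 0xa201016\n"
)
print(parse_microcode_revisions(sample))  # {'0xa201016'}
```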
> The thing that would strike me as really bad is if the OS blindly trusts any and all microcode updates. That is, if there's no digital signature on the microcode update itself that either the OS or CPU could check, then I'm going to have to facepalm really hard.
I assume the loader does an integrity check on the payload, but that's probably more to protect against unintended corruption.
> that zen4c look interesting... when ?
Yeah, the embargo lifted on AMD's Bergamo (128-core server CPU) Wednesday morning. Other sites already have benchmarks up.
> I assume the loader does an integrity check on the payload, but that's probably more to protect against unintended corruption.
I don't think you'd need to encrypt it; it just needs to be signed. Then the motherboard FW or OS (depending on where the microcode is being loaded from) can verify the signature. It could be signed by the CPU maker, mobo maker, OS maker, or some combination thereof.
They could encrypt it with a key that only the CPU manufacturer knows, and then have the CPU decrypt the microcode as it's loaded. But then if the manufacturer ever went out of business (or just flat-out refused to issue microcode fixes for old CPUs, as I think Intel did with some of the side-channel vulnerabilities), that would prevent anyone else from offering their own microcode fixes.
> Yeah, the embargo lifted on AMD's Bergamo (128-core server CPU) Wednesday morning. Other sites already have benchmarks up.
Not only that, but Genoa + 3D V-Cache benchmarks are also out. It's interesting to see which workloads perform better with fewer cores + more L3 cache versus more cores with less cache. Definitely some surprises for me.
Probably unrelated to the article, but one configuration you forgot to enumerate was an iGPU tile. I was pretty sure AMD would go down that route, especially when they "solved" the bandwidth problem via Infinity Cache.
Wendell from Level1Techs brings up a good point: what if AMD decided to swap one of those CCDs on normal desktop Ryzen to counter Intel's "E-cores" strategy?
Then most of the power budget can go to the non-C-core CCD to boost frequency.
It would offer an interesting combo.
For Dual CCD Asymmetric Pairs you can create interesting combos:
1) X3D (Cache Optimized) + Regular CCD (Frequency Optimized) <- Optimized for Gaming & Dev Work
2) X3D (Cache Optimized) + C-cores CCD (Parallel Work Optimized) <- Optimized for Heavy Multi-Threading or Multi-Threading that takes up a lot of Cache
3) Regular CCD (Frequency Optimized) + C-cores CCD (Parallel Work Optimized) <- Optimized for Heavy Parallel Workloads or Parallel Workloads that can benefit from High Frequency but limited work time.
If you create a 4x CCD Ryzen-based platform (let's call it Ryzen FX) that's targeted at the Prosumer/Light Workstation market, you can create a unique asymmetric setup for developers who need to test different CCD configs:
1) Regular CCD (Frequency Optimized)
2) X3D CCD (Cache Optimized) CCD
3) C-cores CCD (Parallel Work Optimized)
4a) L4$ SRAM CCD (just dump in a large CCD that is pure SRAM, designed as a victim cache for the L3$); this would help feed the other 3x CCDs.
4b) Put in a large FPGA CCD so developers can test whether their code path would work better with an FPGA.
> Probably unrelated to the article, but one configuration you forgot to enumerate was an iGPU tile. I was pretty sure AMD would go down that route, especially when they "solved" the bandwidth problem via Infinity Cache.
So far, AMD wants to leave the iGPU in the cIOD; they don't want to put it into the CCD.
> Anyway, AMD's VP of Client Computing says AMD isn't planning on doing hybrid-core CPUs for the desktop. I guess he doesn't consider the half X3D CPUs hybrids.
If you don't consider X3D CPUs a "Hybrid".
> If you don't consider X3D CPUs a "Hybrid".
Well, although I believe the throughput and latency of instructions is the same number of clock cycles between Zen 4 and 4C, I'm pretty sure the density optimizations they did in the 4C mean its peak clock speed is even lower than the CCDs with the 3D V-Cache.
Then Zen #C cores aren't a "Hybrid" either; it's just a different L3$ configuration.
It's not like a truly different architecture, like what Intel is doing with P/E cores.
> Well, although I believe the throughput and latency of instructions is the same number of clock cycles between Zen 4 and 4C, I'm pretty sure the density optimizations they did in the 4C mean its peak clock speed is even lower than the CCDs with the 3D V-Cache.
But Intel defines their "hybrid-ness" by having actually different architectures with different instruction set support. Whether that is good or bad is a question that's out of the scope of this thread.
So, in that sense, there's a slightly stronger case to call them "hybrid", because they neither clock as high nor have extra L3 cache. In every way (other than efficiency), they're worse than the regular Zen 4 cores. They truly are "efficiency" cores, in that the main reason they'd be mixed with regular Zen 4 cores would be to improve efficiency.
Even so, the performance difference is clearly less than what we see between Intel's P and E cores, or else 128-core Bergamo wouldn't be wiping the floor with 96-core Genoa on so many benchmarks.
> What is the architecture that runs this microcode? x86? arm? MIPS?
No, it's not published. The micro-ops in modern Intel CPUs are said to be RISC-like, but I've seen people claim that some of them are far too complex to qualify as true RISC. They certainly wouldn't use a licensed ISA, partly due to IP reasons, but also because the mapping from x86 would be inefficient in some respects. Plus, we know that high-performance ARM cores use micro-ops as well. So, even that "RISC" ISA is apparently non-optimal for a CPU to natively execute.
> What is the architecture that runs this microcode? x86? arm? MIPS?
Microcode is micro-architecture specific. It can't even equate to an ISA, because the software doesn't know it exists. So microcode from, say, an i7-13700K won't work on an i7-12700K, or vice versa. Or if it does, horrible things will likely happen.
> Microcode is micro-architecture specific. It can't even equate to an ISA, because the software doesn't know it exists. So microcode from, say, an i7-13700K won't work on an i7-12700K, or vice versa. Or if it does, horrible things will likely happen.
That probably wasn't a great example, given how little changed between those generations. Plus, you have to be more specific now that those CPUs contain different core types.
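The "won't work on a different CPU" part is actually enforced by the update format itself. Intel's microcode update header (documented in the SDM) carries a processor-signature field that the loader matches against the CPU's own CPUID signature before applying anything. A rough Python sketch of just that match, using a synthetic 48-byte header (the field layout follows the documented header; the values are made up):

```python
import struct

# Documented Intel update header layout (first 48 bytes): nine little-endian
# dwords -- header version, update revision, date, processor signature,
# checksum, loader revision, processor flags, data size, total size --
# followed by 12 reserved bytes.
HEADER = struct.Struct("<9I12x")

def matches_cpu(update: bytes, cpuid_signature: int) -> bool:
    """Return True if the update's processor-signature field matches the
    CPU's CPUID signature (real loaders also consult extended signature
    tables and platform flags, which this sketch ignores)."""
    (_ver, _rev, _date, proc_sig,
     _csum, _ldr, _flags, _dsize, _tsize) = HEADER.unpack_from(update)
    return proc_sig == cpuid_signature

# Fake header claiming to target CPUID signature 0x906EA (values synthetic):
fake = HEADER.pack(1, 0xB4, 0x04122022, 0x000906EA, 0, 1, 1, 2000, 2048)
print(matches_cpu(fake, 0x000906EA))  # True  (signatures match)
print(matches_cpu(fake, 0x000A0671))  # False (different micro-architecture)
```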