Unless they both happen to have the same compiler, I think there's certainly room for performance differences. Even though it's called "asm", it still uses an abstract machine model and virtual registers. It's not really that much lower-level than C which, after all, was designed to be a portable substitute for assembly language. Therefore, plenty of room exists for WASM backends to do optimizations.
Pretty sure it is the same compiler, inside browsers and everywhere else, because everybody is sharing the same code base.
I'd prefer data points over a fight, so if you could just do both?
The N6005 has Gen11, whereas Alder Lake-N has a second gen Xe iGPU. Xe includes AV1 decode hardware, Gen11 doesn't.
On Tiger Lake, Xe pulled off enormous improvements vs. the previous HD iGPUs. The GT3e iGPU on the i7-8559U in my NUC8, with 48 EU and 128MB of eDRAM, only delivered about a 50% improvement over the "normal" 24 EU iGPU of the i7-10710U in my NUC10, despite using twice the resources, which shows just how hard it was to speed up graphics constrained by DRAM bandwidth, even with eDRAM to help!
Yet the Tiger Lake NUC11 (Tiger Lake and Jasper Lake were both NUC11 with Intel...), with 96 EU and no eDRAM to help, delivered a linear performance improvement: 4x the NUC10 iGPU, which had seemed pretty near impossible!
The solution is something that has also been used in recent dGPU iterations: a relatively large last-level SRAM cache dedicated to the iGPU, inspired by the eDRAM, but with much smaller capacity yet far higher bandwidth and lower latency.
The 10nm process shrink was able to pay for that on the big and expensive Core SoCs.
But I doubt that the AD Atom iGPUs replicate that, because SRAM eats 6-8 very hard to shrink transistors per bit, something Intel might accept in a full SoC with P-cores, but it doesn't really fit the Atom use case, which is built entirely around small, cheap dies. With single-channel DRAM and only 24 vs. 96 EU, they may well have gone easy on the SRAM.
Media encoding IP blocks aren't big by today's standards and shrink well. They're also ultra important for the critical electronic signage use cases, where Atoms are popular.
I'm not going to fight over Jasper Lake vs. AD Atom GPU performance, I'll just say that both the GCN and the RDNA3 AMD APUs are in Tiger Lake 96EU territory.
...because that's important for your job, or why exactly?
Mostly personal, but somewhat reasonable, I'd say.
My principal desktop setup is two 42" 4k screens, with a cascade of KVMs to connect everything not mobile. Switching resolutions as well as screens creates a mess far worse than just switching screens, so 4k at 60Hz is a strong baseline; 144Hz and HDR are what I use on the main VR panel, 60Hz without HDR on the secondary IPS, which I use mostly when I measure or want to monitor a secondary system. I'd say it's a single monitor setup with an easy second option.
If I game, I game on gaming rigs, and I've got plenty of those, so I don't have to squeeze that into every notebook or NUC. But competent 4k at a 60Hz minimum includes video at the same resolution, fluid scrolling across large PDFs and web sites, and just enough 3D to satisfy the info-worker.
And Google's 3D maps are about the best optimized 3D graphics presentation there is. It is so thoroughly optimized there is just no excuse not to do it, because it runs really well even at 4k on something as lowly as the Orange PI5 and a Jasper Lake Atom. Certainly every mobile phone from the last few years could do it, even mid-range ones, if only because it even runs on my Jetson Nano and that's a crippled Switch 1.
At FHD or 1440p I'm pretty sure it even runs on a Sandy Bridge HD3000.
And then I just use it a lot, because I manage things spatially in my head.
I'm just asking, because I used a Sandybridge iGPU on a 1440p screen for many years and it was totally fine for everything non-3D I wanted to do. And that's like stone age tech, compared to modern iGPUs and memory bandwidth.
I think it depends on the settings you use. My N97 is disproportionately faster on my 1440p screen than my i5-1250P is on a 4k screen, even though the latter has 4x the shaders and double the memory bandwidth. Probably a more significant difference is the OS and graphics backend: the N97 runs Linux and either OpenGL or Vulkan, whereas the i5 is my work machine and runs Windows, so it's most likely using a DX12 backend with different features and such.
The jump from 1440p to 4k more than doubles the pixels. It's a cut-off point for quite a lot of graphics hardware, which does pretty well with the former yet fails with the latter: there is a reason 1440p is one of the most popular gaming resolutions today.
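For reference, a quick back-of-the-envelope check (assuming the usual 2560x1440 for 1440p and 3840x2160 for 4k):

    # Pixel counts, assuming QHD = 2560x1440 and UHD 4k = 3840x2160
    qhd = 2560 * 1440   # 3,686,400 pixels
    uhd = 3840 * 2160   # 8,294,400 pixels
    print(f"4k has {uhd / qhd:.2f}x the pixels of 1440p")   # -> 2.25x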
If you ran both on the same screen, the i5 should do a lot better, not even Windows could slow it down.
But then the overhead of corporate watchdog software is hard to overestimate; I still have one of those as well, and it's not only a dog, it's also still just a 10th gen quad core.
Nonsense. My 32GB DIMM is already dual-rank and these CPUs don't support quad-rank. In fact, I'm quite sure quad-rank DDR5 UDIMMs aren't even a thing. It will use JEDEC timings, just like my 32 GB DIMM does, because these SoCs don't support XMP or overclocking.
I would have said the same, especially since quad rank isn't even officially supported by Zen.
But I believe this came from Wendel, who's pushing for 256GB on the desktop, so whatever the issue, I'd advise caution and testing.
What enables 64 GB DIMMs is the density of the available DRAM chips. When DDR5 launched, the only thing on the market were 16 Gb chips. That meant you could get 16 GB DIMMs with 8 chips (i.e. single-rank DIMMs) or 32 GB DIMMs with 16 chips (i.e. dual-rank DIMMs). Next came 24 Gb chips, leading to either 24 GB or 48 GB DIMMs. Now, 32 Gb chips are starting to hit the market.
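If it helps, here's that arithmetic spelled out (assuming a standard non-ECC UDIMM: a 64-bit data bus built from x8 chips, i.e. 8 chips per rank):

    # DDR5 UDIMM capacity from DRAM chip density (assumes x8 chips, 8 per rank)
    CHIPS_PER_RANK = 8          # 64-bit bus / 8-bit-wide chips, non-ECC

    def dimm_capacity_gb(chip_density_gbit, ranks):
        return chip_density_gbit * CHIPS_PER_RANK * ranks // 8   # 8 bits per byte

    for density in (16, 24, 32):        # Gbit per DRAM chip
        for ranks in (1, 2):            # single- vs. dual-rank
            print(f"{density} Gb chips, {ranks} rank(s): "
                  f"{dimm_capacity_gb(density, ranks)} GB")

Which gives exactly the 16/32, 24/48 and 32/64 GB steps above.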
Let's test and see, I never mind extra RAM.
You mean "value"? IMO, the path of virtue should mean they do support ECC, which they don't.
Missing an "f" there: I meant to say that AMD went
off the path of virtue
Well, the Pro APUs can do it. So, it's not a hardware limitation, but rather artificial market segmentation.
I've not been able to find that out for a long time. These APUs come in different BGA form factors and there seems to be a correlation between those and ECC support. And of course the mainboard needs to route the extra traces from the SO-DIMM sockets, which might not be there when leaving them out saves half a cent.
And yes, I've hated it for its market segmentation nature, when AMD was very loudly advertising how they "didn't do that" with the original Zen launch.
Same with per-VM encryption and other potentially interesting things present in the hardware but only available to Pro and EPYC customers.
And of course my Thinkpad has a Pro APU, but soldered RAM without the extra chips for parity...
DRAM degrades with age and use. The longer it stays in service, the greater the likelihood of errors. And by the time you actually notice, it could be way too late. If you value your data, the least you could do is periodically run at least two passes of memtest.
That's been very much impressed on me by my former Bull colleagues who developed their HPC hardware, too.
At my job, I've seen lots of instances of machines that have been in service for several years developing instability (if they're not using ECC DRAM) or starting to log ECC errors (for those which do have ECC DRAM). We even had one server, in service for about a decade, with 3 out of 4 DIMMs reporting ECC errors and one of them even reporting multi-bit (i.e. uncorrectable) errors! Since it was not a machine subject to regular monitoring, the only way it came to my attention was the instability: apps which triggered a multi-bit ECC error would fail with SIGBUS.
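If you're on Linux with ECC DRAM, the kernel's EDAC counters make that kind of monitoring cheap. A minimal sketch, assuming the EDAC driver for your memory controller is loaded (the sysfs paths are the standard EDAC interface):

    #!/usr/bin/env python3
    # Read the kernel's corrected/uncorrected ECC error counters so a
    # failing DIMM shows up before applications start dying with SIGBUS.
    from pathlib import Path

    EDAC_ROOT = Path("/sys/devices/system/edac/mc")

    def report_ecc_counts():
        controllers = sorted(EDAC_ROOT.glob("mc[0-9]*"))
        if not controllers:
            print("No EDAC memory controllers found (no ECC, or driver not loaded).")
            return
        for mc in controllers:
            ce = int((mc / "ce_count").read_text())   # corrected (single-bit) errors
            ue = int((mc / "ue_count").read_text())   # uncorrected (multi-bit) errors
            print(f"{mc.name}: corrected={ce} uncorrected={ue}")

    if __name__ == "__main__":
        report_ecc_counts()

Anything non-zero in ue_count is the kind of thing you want an alert for, not a surprise.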
I know, I know and I try to contain the risk, and live with what's left.
But I also drive way too fast when the Autobahn is empty (150 mph). Good thing that's rare...
They enabled it, but the user community had to ask for it. They were simply unaware of this capability, but subsequently enabled it without any reluctance, other than trying to figure out how to test it. They gave no indication that it cost them anything.
I've seen some BIOSes so bad, I'm pretty sure they were obtained without a proper support contract...
And the Intel we know and love doesn't give away for free anything they can charge money for.
Who can you trust?
A simple test that some people do is to overclock their memory until they see ECC errors getting logged. Unfortunately, Alder Lake-N doesn't support overclocking. Another option is to use a known-bad DIMM.
There are ways to test and validate that without keeping a faulty DIMM around; ECC can be switched off and on. It's just that the kind of information you need to configure these things isn't available without an NDA, which I guess is why we don't see this in open source.
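One firmware-assisted route, where it exists, is the ACPI EINJ error-injection interface that Linux exposes under debugfs. A hedged sketch, assuming root, a mounted debugfs, CONFIG_ACPI_APEI_EINJ, and firmware that actually ships an EINJ table (which consumer boards like Alder Lake-N usually don't):

    #!/usr/bin/env python3
    # Inject a correctable memory error via ACPI EINJ (see the kernel's
    # Documentation/firmware-guide/acpi/apei/einj.rst), then watch the
    # EDAC counters or the kernel log to confirm ECC reporting works.
    from pathlib import Path

    EINJ = Path("/sys/kernel/debug/apei/einj")
    MEMORY_CORRECTABLE = 0x00000008   # error type listed in available_error_type

    def inject_correctable_memory_error():
        if not EINJ.exists():
            raise SystemExit("No EINJ support exposed by firmware/kernel.")
        print((EINJ / "available_error_type").read_text())
        (EINJ / "error_type").write_text(f"{MEMORY_CORRECTABLE:#x}\n")
        (EINJ / "error_inject").write_text("1\n")   # trigger the injection
        print("Injected; check EDAC ce_count or dmesg for the corrected error.")

    if __name__ == "__main__":
        inject_correctable_memory_error()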
And you could try Row Hammer to see if it triggers ECC. Because the guys who "invented" Row Hammer told me that not even ECC DIMMs protect against modern variants of it; it just takes a little longer...
I think I've seen the PassMark version of Memtest86 mention ECC error insertion, but not all chipsets are supported. My version is a little out-of-date.
That's what led me to say what I said above: if you have the documentation and matching hardware, it can be done in software.
And then you have to trust PassMark to do it properly...