None of that proves it's not suffering from being single-channel. I'll grant that with a 6 W TDP, it's going to matter less than the higher-TDP and 8-core models.
I've been running an N5005 with 32GB in dual channel (Intel "permits" 8GB) and an N6005 with 64GB (Intel "permits" 16GB) in dual channel, both without issue, using Kingston ValueRAM DDR4 SO-DIMMs on an ASRock Mini-ITX board (N5005) and an Intel NUC11 (N6005).
I've also had various earlier Atoms, down to a J1900 on DDR3, running with 16GB when Intel "permitted" a max of 4GB.
While everything from Braswell to Jasper Lake supported dual channel, it made next to no difference on all of them except Jasper Lake: JL was the only SoC to show measurably almost twice the RAM bandwidth with both channels populated, but even that was still half of what the same RAM sticks would deliver on a Tiger Lake NUC.
Very roughly, with Kingston ValueRAM DDR4-3200 SO-DIMMs, I got 10GB/s on JL single-channel, 20GB/s on JL dual-channel and on Tiger Lake single-channel, and 40GB/s on Tiger Lake (i7-1165G7) dual-channel.
On the Goldmont Plus N5005, the difference between single- and dual-channel DDR4 was more like 10 vs. 12GB/s.
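For anyone who wants to reproduce this kind of comparison, a crude probe needs nothing beyond coreutils. This is only a sketch, not a STREAM-class benchmark: `dd` streaming zeros mostly measures memset/write bandwidth, so absolute numbers won't match the figures above.

```shell
# Pump 4 GiB from /dev/zero to /dev/null in 1 MiB blocks: each read
# zero-fills the buffer and each write discards it, so the rate dd
# reports is dominated by CPU-side memory writes.
dd if=/dev/zero of=/dev/null bs=1M count=4096
```

For serious single- vs. dual-channel numbers, something like `sysbench memory run` or the STREAM benchmark, repeated once per DIMM configuration, is the better choice.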
I didn't do extensive iGPU tests on the JL, but I do remember it doing quite OK at 4K. For me, the main benchmark to qualify a "desktop" is having Google Maps render the 3D globe view in a Chrome-based browser: if it allows smooth turning with the Ctrl key held while moving the mouse, it's OK in my book. There the N6005 felt like twice the speed of the N5005, and quite snappy.
It regularly puts Microsoft's Flight Simulator to shame, and the results on these recent Atoms are quite awe-inspiring.
And it's a test that just killed my Raspberry Pi 4 8GB, even at THD.
It's way better on my Jetson Nano with only 4GB, which has an iGPU to match, even if its CPU cores are worse.
I was tempted to buy a Raspberry Pi 5 just for 'completeness', but its iGPU can't be anywhere near acceptable with only a 300MHz speed bump.
Intel's Bay Trail J1900 was probably as bad as the Raspberry Pi 4 in terms of iGPU, perhaps even worse. I remember it being painful even in 2D at THD. But Intel has pushed the GPUs much harder than the CPUs in subsequent generations, at least until JL.
I have Gracemont E-cores in my Alder Lake i7 systems, and I've always wanted to bench them there using numactl at various PL1/PL2 settings, which fortunately can be set from the command line on Linux.
It wouldn't exactly replicate the N305, because the memory interface would still be dual-channel (OK, that could be 'fixed'), nor can I infer N305 iGPU performance from the 96EU Xe on the i7.
The way to know for sure would be to take an Alder Lake-S and set a custom frequency limit on the E-cores which matches what Alder Lake-N could achieve. Then, use processor affinity to run a benchmark on either 4 or 8 of the E-cores and repeat with one and two memory channels populated.
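On Linux that setup can be sketched with the powercap and cpufreq sysfs interfaces. Everything below is a placeholder to adapt: the RAPL domain index, the 15W/25W limits, the E-core CPU numbers 8-11, and the `./benchmark` binary; check `lscpu --extended` and the domain's `name` file on your own box first.

```shell
# Verify intel-rapl:0 really is the package domain before writing to it
cat /sys/class/powercap/intel-rapl:0/name

# Cap PL1/PL2 (powercap units are microwatts; constraint_0 = long term)
echo 15000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw
echo 25000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_1_power_limit_uw

# Optionally cap an E-core's max frequency to mimic Alder Lake-N clocks
# (cpufreq units are kHz; repeat per E-core)
echo 3400000 | sudo tee /sys/devices/system/cpu/cpu8/cpufreq/scaling_max_freq

# Pin the benchmark to four of the E-cores (assumed here to be CPUs 8-11)
numactl --physcpubind=8-11 ./benchmark   # or: taskset -c 8-11 ./benchmark
```

Repeating the run with 4 vs. 8 E-cores pinned and one vs. two DIMMs populated gives the permutation matrix described above.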
Yes, I've been planning a detailed benchmark list like that too, with lots of permutations of core counts and PLx settings, also to validate Intel's claim that at certain wattage points the E-cores deliver more performance than the P-cores.
Basically I'm trying to test optimizing the infra to a given transactional load.
Now I just need to find the time.
Although I don't think I'd need to set frequency limits, as I believe the Gracemont cores' max frequencies are the same or very close between the N305 and the i7. Just making sure via numactl that your benchmark only uses the E-cores should be enough.
You can't keep the kernel off the P-cores, nor can the P-cores be deactivated completely (one must stay active to boot; only E-cores can be turned off entirely), but a userland benchmark shouldn't be exercising the kernel much.
Of course, my Alder Lake i7s have the 96EU Xe iGPU, and that is likely to suffer badly from a single channel. But as I've mentioned above, Atoms so far haven't gotten anywhere near the bandwidth from the very same RAM sticks that Gen8-11 (P-)cores were able to extract from them. And I actually wonder if and how much memory bandwidth differs between E- and P-cores in hybrid CPUs.
Generally I'd say that for anything but very synthetic workloads (or gene sequencing), DRAM bandwidth won't be much of a problem for CPU work, since even Atoms have loads of cache these days.
For the GPU I am a lot less optimistic, because at least on Jasper Lake, GPU performance took a very noticeable beating with only a single stick.
I think the main reason Intel ditched the second channel was that they saw lots of customers implementing single-channel designs anyhow, and decided to cut the costs of having a second channel that mostly sits unused. It's not that the second channel wasn't beneficial, just that most customers use these SoCs for low-performance devices and care more about cost savings than the extra performance.
I'm much more inclined to think that Intel wanted to protect its market segmentation.
First of all: I cannot believe that Intel produces different dies for the N50-N305 range. The core section of these chips is tiny, and if E-cores failed at any significant rate, Intel could not afford to put them into its big i5-i9 SoCs, where a single broken Atom E-core could reduce an i9 with perfectly good P-cores to an i5.
Roughly four E-cores fit into the area of a single P-core on the bigger chips, so an individual E-core has roughly a quarter of a P-core's probability of containing a defect.
And since the Atom-only die is much smaller than the big hybrid one, the chances of a sub-optimal die are much lower still.
Intel isn't talking, but I actually believe that they are violating a court ruling from California, which prohibits them from selling intentionally castrated hardware.
Also, single channel doesn't mean single socket, as any desktop board will readily attest. Even though Intel only implemented single-channel memory on the Alder Lake Atoms, there is no reason vendors couldn't add a second socket to easily enable 64GB of RAM even with DDR4.
But evidently Intel told them in no uncertain terms not to do that, because it would eat into low-end server territory, where Intel wants to protect high margins. It's no accident that Intel has, for generations, quoted far less RAM capacity for Atoms than they could actually support. As far as Intel is concerned, nobody should be permitted to have 64GB of RAM and perhaps 10 VMs running on a €100 mainboard!
Which is exactly what I've been doing for years now.
This is not accurate. The N6005 (Jasper Lake) supports dual-channel (128-bit) DDR4-2933, which has a nominal bandwidth of 46.9 GB/s. Compare that to the 38.4 GB/s of Alder Lake-N's (64-bit) DDR5-4800 maximum and tell me which number is bigger. Plus, DDR5 has higher latency, which is a slight downside of going that route.
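Those nominal figures fall straight out of transfer rate × bus width × channel count; a quick shell sanity check (results in MB/s, decimal):

```shell
# Peak nominal bandwidth = MT/s * 8 bytes per 64-bit channel * channels
jl_ddr4=$((2933 * 8 * 2))    # Jasper Lake: dual-channel DDR4-2933
adn_ddr5=$((4800 * 8 * 1))   # Alder Lake-N: single-channel DDR5-4800
echo "JL  DDR4-2933 x2: ${jl_ddr4} MB/s"   # 46928 -> ~46.9 GB/s
echo "ADN DDR5-4800 x1: ${adn_ddr5} MB/s"  # 38400 -> ~38.4 GB/s
```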
Nominal isn't what you get: 22GB/s is what I measured on my N6005 with dual-channel DDR4-3200 running at 2933 (Kingston ValueRAM SO-DIMMs tend to have matching SPD settings for all speeds down to 2400). The Tiger Lakes do match that bandwidth at 3200.
I'm perhaps going out on a limb here, but:
Anything x86 sold near that €100 price point makes money for no one. Not for Intel, which uses these chips mostly to keep ARM out of that market, and not for the OEMs, which are forced by Intel to build these boards and systems if they also want the big chips, where at least some money is made.
Intel produces a single die at relatively low cost and near-perfect yields, but still sells it at a loss (or far from its usual margins), except perhaps for the ones sold as eight-cores, purely to splatter the 'competitive landscape' with products so that no small ARM home-server ecosystem can grow.
Firefly selling its Rockchip RK3588-based ITX-3588J 8K AI Mini-ITX board with 32GB of RAM at $800 is obviously a prohibitive price, owed to the need to make a profit at tiny scale. It shows that Intel's low-end sea wall is holding, even without AMD adding sandbags.
To consumers, Atoms obviously represent much better value than what ARM vendors can deliver at this price point of just above $100, because those vendors need to be profitable with what they sell.
Qualcomm, MediaTek and Samsung could change that by offering older or lesser mobile-phone SoCs as a µ-server base, but for them there is also no money and lots of effort in that market. They'd much rather clip your money bag selling those as 'robotics' chips, or put the extra chips into a landfill.
I'd just love to see die shots of N50 or N100 chips compared to the N305, because I am so sure they are in fact eight-core chips that have been fused into cripples that I'd bet a case of beer on it (but not the international delivery).
Even more, I'd love for someone to hack the firmware and turn N50 chips into fully fledged N305s, just in case Intel chose soft fusing to save money.
Even when it comes to dual-channel and [inline] ECC support, I'm not sure it's left out of the die; there is a much better chance it's simply not wired to the BGA pins. The extra logic required may be so small that just doing a cut-and-paste of the dual-channel IP block they use on hybrid SoCs or Grand Ridge may be cheaper than doing a separate single-channel design.