News Insane DDR4 price spikes mean last-gen RAM loses its value luster versus DDR5 — prices have nearly tripled in just two months

According to the article, no it doesn't.
"For the first time in history, last-gen's DRAM has soared to over double the price of the current generation.​
...​
Never before have we seen the outgoing generation of DDR memory chips surge this high on their way out."​
Yeah, that's nonsense... the doubling always happens within a year; it's just that goldfish memory has set in this time, fueled by clickbait. One day it's affordable and almost overnight you cannot justify the expenditure.

But I suppose it's slightly different between larger and smaller markets.
 
Well, I had just ordered another Ryzen 7 5825U based Mini-ITX board, after there had been another outbreak of hype about Intel N350 based systems and how those were supposed to be good value... Truth is, these "pure" Atoms haven't really been updated in a decade and bear little relation to current E-cores.

And a Mini-ITX with an 8-core Ryzen, 2x2.5 Gbit, 8 SATA, 2 NVMe, a fan and still 8 PCIe lanes left for expansion at 15 Watt TDP was selling below €200 on AliExpress: definitely the better value, and at a price that really didn't have much room to dip lower, while those Atoms are still listed for much more while offering far less by any meaningful metric.

Did I mention that the Ryzen has dual-channel DRAM with up to 64GB and, unlike any Atom I've ever measured, knows how to use that at full bandwidth even with decent graphics?

Anyhow, while that is slowly making its way through customs, I remembered that it uses DDR4...

I had been reading those alarms about DDR4 RAM prices for some time now, but any time I tried to validate them, 64GB of DDR4-3200 was still around €100 and thus cheaper than DDR5: many of these scares seem little more than marketing.

Anyhow, I decided to give it another go after ordering and found that indeed availability and prices had dramatically changed!

But one shop still had a Crucial set of 64GB DDR4-3200 at €100, and it just arrived a few minutes ago on my desk: that one was close!

It's really a shame, because there are tons of those Zens still out there at prices that are insanely low for what you get in return, especially when you compare it to what they charge for Strix Point. Even Phoenix boards, which are much closer to Strix, are selling below €400, and are safer because they use DDR5.

There is less correlation between price and performance than ever and you really need to keep your eyes open for the good deals.
 
Well, I had just ordered another Ryzen 7 5825U based Mini-ITX board, after there had been another outbreak of hype about Intel N350 based systems and how those were supposed to be good value... Truth is, these "pure" Atoms haven't really been updated in a decade and bear little relation to current E-cores.
"A decade"?? That's some serious time dilation, there! Twin Lake is Alder Lake-N with a minor clock bump. I have an Alder Lake N97 and it's quite good for the power it uses. Yes, it can use quite a lot of power under Turbo boosting and if you really hit the GPU hard, but I've done performance testing under strict power limits and it's still fast down to 6 W or so (self-reported package power).

So, it uses Gracemont cores that are just shy of 4 years old, at this point. Not a decade!!!

a Mini-ITX with an 8-core Ryzen, 2x2.5 Gbit, 8 SATA, 2 NVMe, a fan and still 8 PCIe lanes left for expansion at 15 Watt TDP
That's the SoC at 15W TDP, not the entire board.

The ODROID-H4 Ultra has 2.5 Gbps Ethernet, 4 SATA, and 1 NVMe. With all that enabled, it'll idle down to 6.2 W or 5.4 W, depending on whether the GUI is up. Enable ASPM and it goes down to 2.8 W.


Yeah, its 8 cores are still Gracemont, but there are a great many use cases where more powerful cores would just be overkill. IMO, Gracemont is still fast enough for a usable Linux desktop and still way faster than Raspberry Pi 5.

unlike any Atom I've ever measured knows how to use that at full bandwidth even with decent graphics?
Tell me what test you want me to run and I can let you know how much bandwidth it uses. My board is at DDR5-4800 (single DIMM, as you're aware).
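For what it's worth, a crude copy-bandwidth probe along these lines would already show whether both channels are being used; this is just an illustrative NumPy sketch of my own (STREAM or Intel MLC are the proper tools, and the numbers are only indicative):

```python
# Rough memory-copy bandwidth probe (illustrative only; use STREAM/MLC for rigorous numbers).
import time
import numpy as np

N = 256 * 1024 * 1024 // 8           # 256 MiB of float64, well past any cache
src = np.random.rand(N)
dst = np.empty_like(src)

best = 0.0
for _ in range(5):
    t0 = time.perf_counter()
    np.copyto(dst, src)               # one read stream + one write stream
    dt = time.perf_counter() - t0
    gbps = 2 * src.nbytes / dt / 1e9  # count both read and write traffic
    best = max(best, gbps)

print(f"approx. copy bandwidth: {best:.1f} GB/s")
```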
 
"A decade"?? That's some serious time dilation, there! Twin Lake is Alder Lake-N with a minor clock bump. I have an Alder Lake N97 and it's quite good for the power it uses. Yes, it can use quite a lot of power under Turbo boosting and if you really hit the GPU hard, but I've done performance testing under strict power limits and it's still fast down to 6 W or so (self-reported package power).
The last really significant change in Atom microarchitecture was the switch to an out-of-order design with Silvermont, which happened in 2013. Before that, the in-order designs were probably completely resilient to Spectre-type attacks, but they were also performance dogs.

I had/have N3700 and J1900 boards from that generation, and the notable improvements in the J5005 and N6005 I had next were really on the graphics side. Notably, the N6005 really just increased the power budget to achieve its better performance; it wasn't a true 10 Watt chip any more.

What hasn't made it into Atoms are the current E-cores; the current ones are stuck with what went into Alder Lake.

So from a high-level point of view, I'd stick to the decade, especially when you compare it to what you get for a similar price and power budget from AMD currently.
So, it uses Gracemont cores that are just shy of 4 years old, at this point. Not a decade!!!


That's the SoC at 15W TDP, not the entire board.
Rest assured, I'll give it a proper evaluation and see how it compares to a Jasper Lake, which is the closest I have to an N350.
The ODROID-H4 Ultra has 2.5 Gbps Ethernet, 4 SATA, and 1 NVMe. With all that enabled, it'll idle down to 6.2 W or 5.4 W, depending on whether the GUI is up. Enable ASPM and it goes down to 2.8 W.
Yeah, its 8 cores are still Gracemont, but there are a great many use cases where more powerful cores would just be overkill. IMO, Gracemont is still fast enough for a usable Linux desktop and still way faster than Raspberry Pi 5.​

Tell me what test you want me to run and I can let you know how much bandwidth it uses. My board is at DDR5-4800 (single DIMM, as you're aware).
Yes, Jasper Lake was the first Atom to actually gain bandwidth from its dual channel DDR4 RAM ability, and then Intel went ahead and cut the 2nd channel from its DDR5 successor, which was why I never got one of those.

That's why I am attracted to the Zen3 APUs now, they are the same price range, yet basically 8 P-cores in terms of performance.

Whether they buy that with significantly more power is what I want to test, idle, peak and perhaps some iso-performance peak (compare a peak Jasper Lake load to what a Zen 3 needs for the same performance).

16 vs 8/9 PCIe lanes make x4 NVMe storage and 10Gbit networking possible and that costs quite a bit more on Intel.

For me it's a way to potentially upgrade four Mini-ITX J5005 systems equipped with 32GB of DDR4 with just a board swap, and last but not least it allows me to retire the last crop of Intel i7 Kaby Lake systems I still nurse along for the extended family with something that is both faster and much more modest in terms of power consumption: at €200 you can't go wrong... if you could get DDR4 at reasonable prices.

Too bad those older systems are still on DDR3, typically with 32GB of DDR3-2400; DDR4 was way too expensive back then...

And while I have SO-DIMM to DIMM adapters, I'd now need the reverse...
 
The last really significant change in Atom microarchitecture was the switch to an out-of-order design with Silvermont, which happened in 2013.
Since I don't know how you define "really significant", I can't directly refute this point.

However, the IPC of Gracemont is several times that of Silvermont. Like Tremont, Gracemont has dual 3-wide decoders, which is not something I'm aware of Intel doing in any other line of CPUs. Unlike any of its predecessors, it has full AVX2 support. Its IPC is on par with Skylake, which is a massive leap.

In fact, perhaps Tremont was the most recent significant change in its lineage.

Here are some other interesting details about it, according to Chips & Cheese's analysis:
  • Similar branch predictor to Golden Cove; better than Skylake's and similar history length to Zen 2's. Better than Cortex-A78, as well.
  • Similar size branch target buffer to Zen 3.
  • Dual, 3-wide decode clusters can load-balance, effectively behaving like a 6-wide decoder! For loops that fit L1 cache, the decoders can sustain a throughput of 5 instructions per cycle.
  • Instruction reorder buffer equal to Zen 3's.
  • Larger integer register file than Zen 3.
  • More fp/vector registers than Zen 3.
  • Larger scheduler window than Zen 3.
  • First Intel E-core to feature L3 cache.
  • Faster L1D cache, in absolute terms, than Golden Cove's.
  • Similar sized L2 TLB to that of P-cores.

Finally, here's how they summarize it:

Conclusion​

Gracemont is a well balanced core with ample execution resources and reordering capacity. It doesn’t have any serious weaknesses beyond obvious tradeoffs that were made to keep power draw down, but even these sacrifices are well thought out and shouldn’t impact common integer loads. Just to summarize:

Gracemont’s strengths:
  • Well balanced out of order execution resources
  • Excellent branching performance (high reordering capacity, large zero bubble BTB, good prediction capabilities)
  • Large 64 KB L1 instruction cache
  • Low latency L1 data cache
  • Modest power consumption
Weaknesses:
  • L3 access suffers from high latency and low bandwidth
  • Lower vector throughput than other desktop x86 cores

Source: https://chipsandcheese.com/p/gracemont-revenge-of-the-atom-cores

So, equating Gracemont to Silvermont, just because both are OoO E-cores would really be a lot like equating Golden Cove to Sandybridge, just because both are P-cores with micro-op caches. Such facile analysis misses so much else!

notable improvements to the J5005 and N6005 I had next were really on the graphics side
And Alder Lake-N also improved quite a lot, there. My N97 has an iGPU 75% as big and just as capable as desktop Alder Lake. The iGPU in the N305 is the full thing! Alder Lake-N's media block supports hardware AV1 decoding!

notably the N6005 really just increased the power budget to achieve its better performance, it wasn't a true 10 Watt chip any more.
Because it turbo-boosts. However, you can set its PL1=PL2=10W and you'll probably find it performs nearly as well.

FWIW, what I've found with my N97 is that the PL1/PL2 settings apply mostly to the CPU cores and have no direct impact on the iGPU. Basically, if the iGPU + CPU cores are using more than the PL that's in effect, the CPUs get throttled but the GPU seems relatively unaffected. Maybe it's different, on Windows.
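In case it helps, this is roughly how those limits can be inspected from Linux; a minimal sketch assuming the usual intel-rapl powercap layout (not every board or BIOS exposes these files, or honors writes to them):

```python
# Read the package PL1/PL2 limits via the Linux powercap (intel-rapl) interface.
# Paths assume the common layout; availability and writability vary by board/BIOS.
from pathlib import Path

pkg = Path("/sys/class/powercap/intel-rapl:0")

def read_watts(name: str) -> float:
    return int((pkg / name).read_text()) / 1e6   # values are in microwatts

for c in range(2):  # constraint 0 = long term (PL1), constraint 1 = short term (PL2)
    label = (pkg / f"constraint_{c}_name").read_text().strip()
    print(f"{label}: {read_watts(f'constraint_{c}_power_limit_uw'):.1f} W")

# Writing works the same way (root required), e.g. to cap PL1 at 10 W:
# (pkg / "constraint_0_power_limit_uw").write_text(str(10_000_000))
```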

What hasn't made it into Atoms are the current E-cores; the current ones are stuck with what went into Alder Lake.
Yes, because Twin Lake is still made on Intel 7.

So from a high-level point of view, I'd stick to the decade,
That's still very misguided.

Yes, Jasper Lake was the first Atom to actually gain bandwidth from its dual channel DDR4 RAM ability, and then Intel went ahead and cut the 2nd channel from its DDR5 successor, which was why I never got one of those.
So what? From the N6005 to the N305, you go from 47.0 GB/s to 38.4 GB/s. What do you need that much for? The iGPU isn't good enough for gaming and that's plenty of bandwidth to keep 8 E-cores pretty well-fed. Plus, Alder Lake-N features a 6 MB L3 cache, in addition to 2MB of L2 cache for each quad-core cluster.
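Those two figures are just the theoretical peaks for the respective memory configurations; a quick back-of-the-envelope check (assuming DDR4-2933 dual channel for the N6005 and DDR5-4800 single channel for the N305, 8 bytes per 64-bit channel per transfer):

```python
# Peak DRAM bandwidth = MT/s * 8 bytes per 64-bit channel * number of channels.
def peak_gbs(mt_per_s: int, channels: int) -> float:
    return mt_per_s * 8 * channels / 1000   # GB/s (decimal)

print(peak_gbs(2933, 2))   # N6005: dual-channel DDR4-2933  -> ~46.9 GB/s
print(peak_gbs(4800, 1))   # N305:  single-channel DDR5-4800 -> 38.4 GB/s
```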

That's why I am attracted to the Zen3 APUs now, they are the same price range, yet basically 8 P-cores in terms of performance.
If you need the performance and don't mind burning more power, then sure.

Honestly, it doesn't bother me that you bought a Zen 3 board. What bothers me is your characterization of Gracemont, which is basically an offense to CPU geeks everywhere. It'd be like someone telling a car geek that direct-injection petrol engines are no different from carbureted ones.

Whether they buy that with significantly more power is what I want to test, idle, peak and perhaps some iso-performance peak (compare a peak Jasper Lake load to what a Zen 3 needs for the same performance).
That's a pretty silly comparison. Tremont lacks AVX, its dual 3-wide decoders can only parallelize across different branch targets, it lacks L3 cache, and I'm sure it's generally a lot smaller in all of its internal dimensions. Also, it's made on the same inferior 10 nm node as Ice Lake was.

You ought to just look at benchmarks of N305's. The Hardkernel link I posted has those, as well as some pretty good power data. They measure at the wall, using their in-house designed power meter, which you can also buy. I have one - it's a very slick little product!
 
What hasn't made it into Atoms are the current E-cores; the current ones are stuck with what went into Alder Lake.

So from a high-level point of view, I'd stick to the decade, especially when you compare it to what you get for a similar price and power budget from AMD currently.
You can certainly say "decade" but that would be false. ADL launched in Q4'21, which isn't even 4 years ago yet, and the first ADL-N CPU launched in Q1'23. The first notable new E-core architecture since Gracemont was Skymont, which first launched in LNL in Q3'24, so following that same rough timeline one shouldn't expect a new E-core Atom/N series until the end of this year at the earliest. I'm assuming Intel will not be using TSMC N3B for these, though, so I wouldn't expect to see one until they have Intel 3 capacity available. I highly doubt this would be the case, but they could also potentially use the Compute Tiles from CWF for these parts, but then I wouldn't expect to see those until late next year.

I'd be surprised if that AMD CPU in operation was anywhere near as low power as the ADL-N systems run, but it does seem like a good deal price wise. The comparison with those AMD parts would actually be the 8505 which should run cheaper, if it's in stock, has more PCIe lanes and an increased EU count over the ADL-N.
16 vs 8/9 PCIe lanes make x4 NVMe storage and 10Gbit networking possible and that costs quite a bit more on Intel.
8505 ITX board with 1x NVMe PCIe 4.0 x4, 5x NVMe PCIe 3.0 x1, 6x SATA, a PCIe 4.0 x4 slot, 2x 2.5Gb i226-V, with breakout cables for SATA and a board for NVMe as well as a CPU cooler, for ~$300 including shipping (excluding discounts).

Plenty of Intel options have had 10Gb built in both in ADL-N and ADL/RPL form, but tariffs did a number on stock so much like the above 8505 board things are out of stock. It seems like the vast majority of ITX boards I'd looked at previously are just gone now except for the N100/N150 ones.
 
My two AliExpress Ryzen Mini-ITX boards (8845HX and 5825U) are still stuck in customs (no tariff issues, but too little manpower).

But I have two Lenovo notebooks, a Slim 7 13ACN05 with a Ryzen 7 5800U and a Thinkpad X13G4 with a Ryzen 7 PRO 7840U, to represent Cezanne (Zen3) and Phoenix (Zen4) APUs against an Intel NUC11 with a Pentium Silver N6005 Jasper Lake Atom.

All three are configured for 15 W sustained PL1 and 25 Watts of PL2 from the factory and, unlike those boards, (hopefully!) cannot be changed.

(I have Alder Lakes with E-cores, but not as Atoms and unfortunately you can only disable the E-cores, not the P-cores. There are software tools to run benchmarks on E-cores only, but that only gives you performance not energy figures.)

I'm running them all on Windows 11 24H2 Enterprise IoT LTSC with the newest patches on a balanced profile.

I measure wall power via a digital power metering device, not exactly a lab technician's tool: something I bought for less than €100 on Amazon that has its display on a break-out cable away from the socket for easier observation. It doesn't offer quantization intervals; it really just gives a single reading.

And then I am using HWinfo for detailed analysis using board sensors, which often just use extrapolated vendor data to provide estimates, not measurements. Here I am graphing SoC power use to get better long-term readings. I am also using its remote display ability to catch screen-off values on another system. With that, HWinfo might also consume a bit of power all on its own.

I was trying to collect three main values:
1. Minimum idle power in Watts, basically a Windows desktop with nothing really running, except HWinfo.
2. Peak power consumption under sustained application load in Watts, running the CPU benchmark
3. Relative CPU benchmark performance

When it comes to CPU benchmarks, there are obviously tons of them. One I've come to really like, because it's easy to use and relatively meaningful, is Justin's WASM Web benchmark.

It uses your browser (any modern Firefox or Chrome, probably also Safari, will do) to run a WebAssembly program which in turn runs a relatively simple POV-Ray style render of a single scene as workload. It reports a single number: the millions of calls per second to the core rendering function the CPU can achieve. By default it will use all threads on your CPU, but you can have it use extra threads, e.g. if your browser fails to detect them properly.

Its main advantage is near universal availability under any OS and ISA and very good scalability across dozens of cores. It also generally runs long enough to heat-soak mobile setups and transition into PL1 or sustainable load, yet won't try your patience. In short, it won't replace SPEC, Geekbench, Prime95 or Cinebench, but it's quite handy and easy to use. And there is a good mix of results available to compare your system against various others. I've run it on pretty near every kit I own, from Raspberry and Orange Pis to my 7950X3D.

Windows idle is a bit of a challenge, as TH editors will attest, because Microsoft believes it's free to do whatever it wants on your PC, if you don't use it. So right after the screen turns off, its various agents will kick into action. They settle down eventually...

Anyhow, with that I got:

| | Geekbench v6 single | Geekbench v6 multi | Watts at outlet, screen off | Watts at outlet, screen on | HWinfo CPU idle, screen on | Watts at outlet, WASM | HWinfo CPU Watts, WASM | WASM score |
|---|---|---|---|---|---|---|---|---|
| Ryzen 5800U | 1879 | 6297 | 3.5-5 | 7.5-9 | 2.6 | 25-21 | 15-12 | 52.95 |
| Ryzen 7435HS | 2017 | 9868 | 5 | forgot | sorry | 26 | 13 | 53.57 |
| Pentium N6005 | 565 | 1661 | 7.5 | 10-11 | 4.5 | 29-26 | 17-15 | 18.26 |

Notes:
  1. I evidently forgot some measurements on the Thinkpad...
  2. The external powermeter fluctuated between 3.5 and 5 on the idle Zen3 notebook, no idea why
  3. Where there is a range on WASM benchmarks, clocks dropped a bit because of heat soak, all systems have fans
  4. The Ryzens are full notebooks while the Atom is a NUC with an external display
With that I'd say that those U-class APUs are very much in a similar range to at least the Jasper Lake Atom in terms of power usage at the wall plug. But the Atom delivers about 1/3 of the computing performance at that power.

I'd love to have N100/300/350 data to compare!

Current personal conclusion:

When those Ryzens were in the €500-800 range and often couldn't even be had as Mini-ITX boards, the Atoms had their place in the range of options, because they could be had at around €100, at least initially (J1900); prices went a little higher and systems became rather scarce with Covid-19 and after Goldmont.

I used 3-4 passive N5005 based Mini-ITX boards because I wanted to run them as 24x7 virtualization clusters in my home-lab, where most of the testing is functional rather than performance testing, so heat and noise were the bigger concern. They originally ran oVirt and Gluster, now Proxmox and Ceph. They needed lots of RAM, because they were running virtual machines, sometimes even nested, and the N5005 supported up to 32GB with two SO-DIMM sockets, even if Intel didn't want you to use more than 8GB. The Jasper Lake NUC even worked with 64GB of DDR4 in two SO-DIMMs, which was crazy in terms of RAM prices, but useful for VM testing.

These systems are becoming a little long in the tooth, and I've been itching to upgrade them to something more modern and faster, but without letting them get noisy and hot.

Because the form factor is identical and the N5005 and the Zen3 APUs both use DDR4, I can re-use the 32GB DDR4-2400 from the Atoms, chassis and everything else: just a mainboard swap for now, and a lot of reusability potential once I no longer need that cluster.

At less than €200 (with coupons, but including a fan), for the full 5825U board with 8 SATA ports, one NVMe v3 x4, dual 2.5 Gbit and an x8 PCIe v3 port left over for dual 10Gbit, a dGPU, hardware RAID or whatnot, it seems great value... if you have DDR4 left over or can get it cheap.

I'll be sure to let you know if it turns bad or just worse than I thought.

Of course this is a fire-sales price, most likely not a price point AMD would want to pursue long term with Strix Point or other APUs coming down the road.

But as vendors need to push the limits and come up with shiny new things to buy, older stuff that is really rather competent for the use case you may have in mind will continue to be left over and Chinese manufacturing powerhouses are very good at using these faded beauties to come up with very attractive offers.

Look out and enjoy!
 
8505 ITX board with 1x NVMe PCIe 4.0 x4, 5x NVMe PCIe 3.0 x1, 6x SATA, a PCIe 4.0 x4 slot, 2x 2.5Gb i226-V, with breakout cables for SATA and a board for NVMe as well as a CPU cooler, for ~$300 including shipping (excluding discounts).

Plenty of Intel options have had 10Gb built in both in ADL-N and ADL/RPL form, but tariffs did a number on stock so much like the above 8505 board things are out of stock. It seems like the vast majority of ITX boards I'd looked at previously are just gone now except for the N100/N150 ones.
Looks like they have run out of 8505 CPUs, but with coupons this i5 variant might come into economic range.

I'm pretty sure it will want more power, though, and it might not tolerate throttling PL1/PL2 settings too much.

At least that's been my experience with a G660 with an i7-12700H from Erying: great performance but at much more power than AMD's equivalents.
 
I'd love to have N100/300/350 data to compare!
https://www.servethehome.com/fanless-intel-n100-firewall-and-virtualization-appliance-review/3/

STH has done a ton of reviews of these fanless Tremont/Goldmont/Gracemont boxes. In my experience they tend to be configured above the TDP they're supposed to have (I have N5105 and N305 versions), so they're not really comparable to other types of devices, but they compare to each other reasonably.
Looks like they have run out of 8505 CPUs, but with coupons this i5 variant might come into economic range.

I'm pretty sure it will want more power, though, and it might not tolerate throttling PL1/PL2 settings too much.
They location-gate and there isn't a US version of that one, so the link just comes up blank for me, but I'm assuming that's an i5-12450H version. It'll absolutely use more power under load as it's a 45W part and the official minimum is 35W, so I doubt it would be unlocked much lower. It's weird that those were the two chips they chose for those boards, because the 8505 is a 15W low power mobile part and the i5 is a performance mobile part.
At less than €200 (with coupons, but including a fan), for the full 5825U board with 8 SATA ports, one NVMe v3 x4, dual 2.5 Gbit and an x8 PCIe v3 port left over for dual 10Gbit, a dGPU, hardware RAID or whatnot, it seems great value... if you have DDR4 left over or can get it cheap.
I've seen that board and it definitely seems like a good deal.
 
https://www.servethehome.com/fanless-intel-n100-firewall-and-virtualization-appliance-review/3/

STH has done a ton of Tremont/Goldmont reviews of these fanless boxes. They tend to be configured above the TDP they're supposed to be from my experience (I have N5105 and N305 versions) so they're not really comparable to other types of devices but they compare to each other reasonably.
STH has been on my daily reads for ages :)

If you have the N305, would you mind running the WASM bench? Perhaps you could run it with X-forwarding if your N305 is Linux/BSD and doesn't have a display connected; that's one of the reasons I like it so much.
They location gate and there isn't a US version of that one so the link just comes up blank for me, but I'm assuming that's an i5-12450H version. It'll absolutely use more power under load as it's a 45W part and official minimum is 35W so I doubt it would be unlocked much lower. It's weird that those were the two chips they chose for those boards because the 8505 is a 15W low power mobile part and the i5 is a performance mobile part.
Yup, it's an H-class i5.

My Erying G660 has the i7-12700H with configurable PL1/2/TAU, and I might even be able to configure its P-cores to fewer than 6, even if you can't shut them off entirely...

Perhaps I'll try to see how low it can go, but since it needs a CMOS clear when you overdo things, I'll probably wait until it's time to go physical again anyway, been doing way too much liquid metal experimentation on that one...

* * *
Intel couldn't fit more than 2 P-cores into 15 Watts of TDP. I don't know if they simply can't run at ~1 Watt each (physically impossible) or if they were just slower than E-cores at that point (economic nonsense): the CMOS knee of the curve was very different between the two.

Incredibly, the Zen 3/4/5 cores work just great anywhere between 1 Watt (an 8-core APU with 15 Watts of TDP clocks at 2GHz with roughly 1 Watt per core) and the top of the desktop range. I'm sure the CMOS curve still isn't linear, but it didn't fail outright, like Intel's P-cores did.

I don't know if Intel just refuses to sell SoCs with P-cores at fire-sale prices or if they truly ran out of stock. Some Tiger Lakes are still floating around, but to dip below €200 for a populated board you have to go to Gen 7 or even earlier, quad cores at the most.

And those really dislike going below 15 Watts peak, even with only 4 cores, while they lack the IPC improvements that came with Ice and all the other Lakes.

I have three Intel NUCs, one each of G8/G10/G11 running in another cluster. They are fine between 15 and 45 Watts, but typically failed badly when you tried to fit them into something passive, which means less than 10 Watts sustained.

Not even Lunar Lake seems to allow meaningful ~1 Watt operation for P-cores, instead they had to make better E-cores, which unfortunately they won't sell as Atoms in this €200 price range for an 8-core board.

Sadly, not even ARM vendors design 8-core systems in the P-core performance range near €200 for a full board; somehow that gap only gets filled with left-overs, which is OK when there is supply. But I remember when there wasn't.
 
Intel couldn't fit more than 2 P-cores into 15 Watts of TDP. Don't know if they could even run at ~1Watt each (physically impossible) or if they were just slower than E-cores at that point (economic nonsense)
It's the latter, because Gracemont scales better than Golden Cove at low wattage. I imagine a part with just 4 P-cores could have been okay as a 15W SKU, but it would have run worse in MT and the benefit of 4 P-cores instead of 2 would have been non-existent for most people.
If you have the N305, would you mind running the WASM bench? Perhaps you could run it with X-forwarding if your N305 is Linux/BSD and doesn't have a display connected, one of the reasons I like it so much.
It's my router and is running native pfSense so running benchmarks isn't really on the table. The power testing and whatnot I did was before I switched over to using it (was testing to see how much more power a P1600X consumed than a P41 Plus) and I haven't touched it since.
 
But I have two Lenovo notebooks, one Slim 7 13ACN05 with a Ryzen 7 5800U and a Thinkpad X13G4 with a Ryzen 7 PRO 7840U to represent Cezanne (Zen3) and Phoenix (Zen4) APUs against a Jasper Lake NUC11 from Intel with a Pentium Silver N6005 Jasper Lake Atom.
As I explained, that might be an interesting comparison, but not very relevant for Alder Lake-N.

And then I am using HWinfo for detail analysis using board sensors,
In Linux, you can use turbostat to read package power and sensors to read CPU temperature sensors. My Zen 3 CPU (5800X) provides a power stat for the CCD and Package power, but sometimes it claims the CCD is greater than package power, which suggests at least one of the numbers is more like an estimate than a measurement. So, they're probably more useful in relative measurements than absolute ones.
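A quick way to grab a few PkgWatt samples is to wrap turbostat in a small script; a sketch along these lines (needs root, and the exact flags and column names vary a bit between turbostat versions):

```python
# Sample package power with turbostat and summarize it (root required).
import subprocess

out = subprocess.run(
    ["turbostat", "--quiet", "--Summary", "--show", "PkgWatt",
     "--interval", "1", "--num_iterations", "5"],
    capture_output=True, text=True, check=True,
).stdout

# Keep only the numeric samples, skipping the "PkgWatt" header line(s).
samples = [float(tok) for tok in out.split() if tok.replace(".", "", 1).isdigit()]
if samples:
    print(f"PkgWatt min/avg/max: {min(samples):.1f} / "
          f"{sum(samples)/len(samples):.1f} / {max(samples):.1f} W")
```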

Anyhow with that I got
Geekbench Multi is BS. I suggest not promoting the idea that it's any kind of useful metric. It's a confused mishmash of benchmarks that mostly don't scale well with core count. Most people quite reasonably assume that it scales well with core count, as is the case with most multithreaded benchmarks, but it was specifically designed not to.

I'd love to have N100/300/350 data to compare!
I will not run GB6 Multi and I only have Linux, but if you name the TDP, I can run what you like on my N97. I can even kneecap it to simulate a N100.

Which browser did you use, for WASM?

sometimes even nested, and the N5005 supported up to 32GB with two SO-DIMM sockets,
My N97 currently has a 32 GB DDR5 SO-DIMM. Hardkernel has qualified their ODROID H4 series with 48 GB. Presumably, 64 GB will come soon.

Because the form factor is identical and the N5005 and the Zen3 APUs both use DDR4, I can re-use the 32GB DDR4-2400 from the Atoms, chassis and everything else: just a mainboard swap for now, and a lot of reusability potential once I no longer need that cluster.
You can't do ECC memory. My N97 can (although it's only in-band ECC).

For me, no ECC would be a deal-breaker for what I'm using it for, which is as a fileserver for my most frequently-accessed data. This way, I don't need to keep my big fileserver (with disk-based storage) turned on, and can just use it for doing backups and holding less frequently-accessed media files.

Note that enabling IB-ECC on Alder Lake-N must be done in BIOS, and only a few boards enable this option. Hardkernel's ODROID-H4 is one such example. Don't blindly assume an Alder Lake-N board supports IB-ECC.
 
Perhaps I'll try to see how low it can go, but since it needs a CMOS clear when you overdo things, I'll probably wait until it's time to go physical again anyway, been doing way too much liquid metal experimentation on that one...

* * *
Intel couldn't fit more than 2 P-cores into 15 Watts of TDP. Don't know if they could even run at ~1Watt each (physically impossible) or if they were just slower than E-cores at that point (economic nonsense): the CMOS-knee curve was very different between the two.

Incredibly, the Zen 3/4/5 cores work just great at between 1 Watt (8-core APUs with 15 Watts of TDP clocks at 2GHz with 1 Watt each) to the end of the desktop range. I'm sure the CMOS-curve still isn't linear, but it didn't fail outright, like Intel's P-cores did.
My experience with setting PL1 and PL2 on the Alder Lake-N is that the CPU would try to stay within the target setting, but it only stepped the cores in 400 MHz increments and it would never run them below 400 MHz. Even if I set PL1=PL2=1W, they'd all run at 400 MHz. And if it used 2.5W, it used 2.5W.

Also, kinda funny to see them run at 400 MHz, when they normally idled at 800 MHz!

Sadly not even ARM vendors design for 8-core systems in the P-core performance range near €200 for a full board, somehow that gap only gets filled with left-overs, which is ok, when there is supply. But I remember when there wasn't.
The lowest-priced (8 GB) version of this is $225, which is pretty good, if you're comparing it to boards that don't include DRAM:


It has 8 medium cores and 4 little ones. Four of the medium cores are more medium than the others.
; )
 
As I explained, that might be an interesting comparison, but not very relevant for Alder Lake-N.


In Linux, you can use turbostat to read package power and sensors to read CPU temperature sensors. My Zen 3 CPU (5800X) provides a power stat for the CCD and Package power, but sometimes it claims the CCD is greater than package power, which suggests at least one of the numbers is more like an estimate than a measurement. So, they're probably more useful in relative measurements than absolute ones.
The main problem is the granularity of the sensors on Intel. On Alder Lake you just can't get the power consumption separated between E-cores and P-cores, and since you can't deactivate the P-cores completely, I couldn't simulate an N305, even if I had a superset of the hardware via an i7-12700H.

On Zen the power stats are way more detailed, as you can see from HWinfo; the normal Linux instrumentation is far behind, but AMD offers libraries to get much more data, also for code profiling.

The company project for which I was investigating that was shut down, and privately I'm not quite motivated enough to try replicating the full range of HWinfo data in Linux.
Geekbench Multi is BS. I suggest not promoting the idea that it's any kind of useful metric. It's a confused mishmash of benchmarks that mostly don't scale well with core count. Most people quite reasonably assume that it scales well with core count, as is the case with most multithreaded benchmarks, but it was specifically designed not to.
I'd differentiate between v6 and v5. V5 (and v4) was pretty OK when it came to multi-core scaling. I don't know what they did in v6, but it doesn't hold water as far as I am concerned: I know how my big machines scale on CPU workloads, and v6 doesn't reflect that.
I will not run GB6 Multi and I only have Linux, but if you name the TDP, I can run what you like on my N97. I can even kneecap it to simulate a N100.

Which browser did you use, for WASM?
I've been a Firefox guy since the day it came out and run Brave or Chromium when I want a comparison. Both aren't really far apart when it comes to WASM, which is how it should be: the browser only runs the compiler/verifier, so the end result should be nearly identical for both.

So if you could add your N97 number, that should be nice.

BTW, N97 vs N6005 iGPU shouldn't be that far apart: it's 32EU at 900MHz for the N6005 vs. 24EU at 1.2GHz for the N97. It also does HDR but won't go beyond 60Hz at 4k, which is okay.

But the H-class Xe on my i7-12700H doesn't exactly blow those AMD APUs out of the water, neither the older GCN on Cezanne, nor the newer RDNA3 on Phoenix. They are all ultimately limited by DRAM bandwidth and there both the P-core Alder-Lakes and the AMD APUs deliver double of what the Atoms manage.

My personal benchmark for a reasonable non-gaming PC is running Google Maps 3D on a 4k screen. If I can rotate/zoom/slide through the 3D render there without stutter (apart from the ones caused by network data loads), it gets approved for all desktop operations.

The Orange PI 5+ manages that as well as the N6005, the Raspberry PI 5 and the N5005 struggle a bit, older Atoms like the N3700 or J1900 couldn't go near it even at THD.
My N97 currently has a 32 GB DDR5 SO-DIMM. Hardkernel has qualified their ODROID H4 series with 48 GB. Presumably, 64 GB will come soon.
I suggest being careful with that. I've read that 64GB modules will double ranks and thus often have RAM timings fall back to what you get with four sticks of DDR5 on Zen: DDR5-4800, if you're lucky.

I had to cut back to two 48GB DDR5-5600 ECC modules for my Ryzen 7950X3D, when I was using four 32GB DDR4-3200 ECC for Zen 3 without issues.
You can't do ECC memory. My N97 can (although it's only in-band ECC).

For me, no ECC would a deal-breaker for what I'm using it for, which is as a fileserver for my most frequently-accessed data. This way, I don't need to keep my big fileserver (with disk-based storage) turned on, and can just use it for doing backups and holding less frequently-accessed media files.
I used to think the same and I still prefer using ECC when I can get it.

Unfortunately it's one of those things where AMD APUs seem to have gone of the path of virtue.

My primary 24x7 file servers remain on ECC, the backup (7945HX) has to do without, because it doesn't support it in the BIOS (pretty sure the IOD is identical to the 7950X).

All my home-lab virtualization clusters had to make do without ECC, but in five years of 24x7 operation at least I've never observed any data corruption.

But then they don't manage files, and whatever data flows through them doesn't stay in RAM for long and thus doesn't have much of a chance to grow stale.
Note that enabling IB-ECC on Alder Lake-N must be done in BIOS, and only a few boards enable this option. Hardkernel's ODROID-H4 is one such example. Don't blindly assume an Alder Lake-N board supports IB-ECC.
I know. Intel seems to make that an expensive extra option and I'm not even sure that Hardkernel paid for that...

Those AliExpress boards also sometimes expose ECC settings in the BIOS on hardware which doesn't officially support that. Perhaps because beta or evaluation BIOS variants open up everything.

Testing that it actually works correctly unfortunately isn't trivial; you'd need the ability to do error insertion and track how ECC kicked in to fix it.

They don't sell these tools to consumers...
 
The lowest-priced (8 GB) version of this is $225, which is pretty good, if you're comparing it to boards that don't include DRAM:

It has 8 medium cores and 4 little ones. Four of the medium cores are more medium than the others.
; )
I keep tracking that, but reports on software support quality are not encouraging, worse than Orange PI, which only Armbian keeps running well.

And with that Zen APU selling below €200 while offering a lot more flexibility and the best software support in the world, it's hard to justify, not even with lower energy consumption, because those ARMs are fabbed on ancient nodes, not like your leading-edge phone chips.

Qualcomm simply isn't delivering dev systems, and Nvidia's DIGITS is still far away and probably too pricey...
 
The company project for which I was investigating that was shut down, and privately I'm not quiet motivated enough to try replicating the full range of HWinfo data in Linux.
Probably the kernel stats are there and just need to be hooked up to turbostat. That said, I can't blame you for not doing it, since it's not like I'm about to.

I've been a Firefox guy since the day it came out and run Brave or Chromium when I want a comparison. Both aren't really far apart when it comes to WASM, which is how it should be: the browser will only run the compiler/verifier the end result should be nearly identical for both.
Unless they both happen to have the same compiler, I think there's certainly room for performance differences. Even though it's called "asm", it still uses an abstract machine model and virtual registers. It's not really that much lower-level than C which, after all, was designed to be a portable substitute for assembly language. Therefore, plenty of room exists for WASM backends to do optimizations.

So if you could add your N97 number, that should be nice.
Sure, I'll see what I can do.

BTW, N97 vs N6005 iGPU shouldn't be that far apart: it's 32EU at 900MHz for the N6005 vs. 24EU at 1.2GHz for the N97. It also does HDR but won't go beyond 60Hz at 4k, which is okay.
The N6005 has Gen11, whereas Alder Lake-N has a second gen Xe iGPU. Xe includes AV1 decode hardware, Gen11 doesn't.

My personal benchmark for a reasonable non-gaming PC is running Google Maps 3D on a 4k screen. If I can rotate/zoom/slide through the 3D render there without stutter (apart from the ones caused by network data loads), it gets approved for all desktop operations.
...because that's important for your job, or why exactly?

I'm just asking, because I used a Sandybridge iGPU on a 1440p screen for many years and it was totally fine for everything non-3D I wanted to do. And that's like stone age tech, compared to modern iGPUs and memory bandwidth.

The Orange PI 5+ manages that as well as the N6005, the Raspberry PI 5 and the N5005 struggle a bit, older Atoms like the N3700 or J1900 couldn't go near it even at THD.
I think it depends on the settings you use. My N97 is disproportionately faster on my 1440p screen than my i5-1250P is on a 4k screen, in spite of having 4x the shaders and double the memory bandwidth. Probably a more significant difference is the OS and graphics backend. The N97 is using Linux and either OpenGL or Vulkan, whereas the i5 is my work machine and runs Windows, therefore most likely using a DX12 backend with different features and such.

I suggest being careful with that. I've read that 64GB modules will double ranks and thus often have RAM timings fall back to what you get with four sticks of DDR5 on Zen: DDR5-4800, if you're lucky.
Nonsense. My 32GB DIMM is already dual-rank and these CPUs don't support quad-rank. In fact, I'm quite sure quad-rank DDR5 UDIMMs aren't even a thing. It will use JEDEC timings, just like my 32 GB DIMM does, because these SoCs don't support XMP or overclocking.

What enables 64 GB DIMMs is the density of the available DRAM chips. When DDR5 launched, the only thing on the market were 16 Gb chips. That meant you could get 16 GB DIMMs with 8 chips (i.e. single-rank DIMMs) or 32 GB DIMMs with 16 chips (i.e. dual-rank DIMMs). Next came 24 Gb chips, leading to either 24 GB or 48 GB DIMMs. Now, 32 Gb chips are starting to hit the market.
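The arithmetic is easy to sanity-check: a 64-bit (non-ECC) rank built from x8 devices needs eight chips, so capacity is chip density times eight per rank; a tiny sketch of that bookkeeping:

```python
# UDIMM capacity = chip density (Gbit) * chips per rank (8 x8 chips for a 64-bit non-ECC rank) * ranks / 8 bits-per-byte.
def dimm_gb(chip_gbit: int, ranks: int, chips_per_rank: int = 8) -> int:
    return chip_gbit * chips_per_rank * ranks // 8

print(dimm_gb(16, 1))  # 16 Gb chips, single rank -> 16 GB
print(dimm_gb(16, 2))  # 16 Gb chips, dual rank   -> 32 GB
print(dimm_gb(24, 2))  # 24 Gb chips, dual rank   -> 48 GB
print(dimm_gb(32, 2))  # 32 Gb chips, dual rank   -> 64 GB
```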

Unfortunately it's one of those things where AMD APUs seem to have gone of the path of virtue.
You mean "value"? IMO, the path of virtue should mean they do support ECC, which they don't.

the backup (7945HX) has to do without, because it doesn't support it in the BIOS (pretty sure the IOD is identical to the 7950X).
Well, the Pro APUs can do it. So, it's not a hardware limitation, but rather artificial market segmentation.

All my home-lab virtualization clusters had to make do without ECC, but in five years of 24x7 operation at least I've never observed any data corruption.
DRAM degrades with age/use. The longer it stays in service, the greater the likelihood. And by the time you actually notice, it could be way too late. If you value your data, the least you could do is periodically run at least two passes of memtest.

At my job, I've seen lots of instances of machines that have been in service for several years developing instability (i.e. if they're not using ECC DRAM) or starting to log ECC errors (i.e. for those which do have ECC DRAM). We even had one server, in service for about a decade, with 3 out of 4 DIMMs reporting ECC errors and one of them even reporting multi-bit (i.e. uncorrectable) errors! Since it was not a machine subject to regular monitoring, the only way it came to my attention was due to instability. Apps which triggered a multi-bit ECC error would fail with SIGBUS.
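On Linux, those corrected/uncorrected counts show up in the EDAC sysfs tree whenever a platform EDAC driver is loaded, so a periodic check can be as trivial as this sketch (paths assume the standard EDAC layout):

```python
# Report corrected (ce) and uncorrected (ue) error counts from the EDAC sysfs tree.
# Requires an EDAC driver for the platform; without ECC support the tree is simply absent.
from pathlib import Path

controllers = sorted(Path("/sys/devices/system/edac/mc").glob("mc*"))
if not controllers:
    print("No EDAC memory controllers found (no driver loaded, or no ECC).")
for mc in controllers:
    ce = (mc / "ce_count").read_text().strip()
    ue = (mc / "ue_count").read_text().strip()
    print(f"{mc.name}: corrected={ce} uncorrected={ue}")
```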

whatever data flows through them doesn't stay in RAM a lot and thus much of a chance to grow stale.
Oh, that's not required. One machine that developed bad DRAM had a crashing program that segfaulted when a certain ephemeral data structure was used to index into another data structure. The timing was milliseconds or less, between the time it got computed vs. when it got used.

I know. Intel seems to make that an expensive extra option and I'm not even sure that Hardkernel paid for that...
They enabled it, but the user community had to ask for it. They were simply unaware of this capability, but subsequently enabled it without any reluctance, other than trying to figure out how to test it. They gave no indication that it cost them anything.

Testing for that to work correctly unfortunately isn't trivial, you'd need the ability to do error insertion and track how ECC kicked in to fix it.

They don't sell these tools to consumers...
A simple test that some people do is to overclock their memory until they see ECC errors getting logged. Unfortunately, Alder Lake-N doesn't support overclocking. Another option is to use a known-bad DIMM.

I think I've seen the PassMark version of Memtest86 mention ECC error insertion, but not all chipsets are supported. My version is a little out-of-date.
 
I keep tracking that, but reports on software support quality are not encouraging, worse than Orange PI, which only Armbian keeps running well.
Yeah, concerns about software support is why I didn't buy one. It sure sounded like the SoC and/or board vendor were serious about upstreaming software support for it, but I haven't tracked how that's played out. Radxa has an English-language user forum, but I didn't visit there since before tariffs kicked in, which is the only reason I would've been in any hurry to buy one.

In the end, I decided to stick with a Raspberry Pi 5, because I knew its software support would be rock solid. I used to want an Orange Pi 5+, but the efficiency data I've seen suggests it's really not a lot better than the Raspberry Pi 5. In theory, its GPU is a lot better, but I trust R.Pi 5's to be better supported.

And with that Zen APU selling below €200 while offering a lot more flexibility and the best software support in the world,
That's why I like Alder Lake-N. I run a mainstream distro and everything just works. Firefox even supports HW accel of most video codecs, out of the box!

it's hard to justify, not even with lower energy consumption, because those ARMs are fabbed on ancient nodes, not like your leading-edge phone chips.
Rockchip RK3588 used Samsung 8 nm? I think Raspberry Pi 5 used a 14nm-class node. However, the Cix CD8180 SoC on that Radxa Orion O6 is made on TSMC N6!

Qualcomm simply isn't delivering dev systems, and Nvidia's DIGITS is still far away and probably too pricey...
Mediatek Genio is another one to watch. The 1200 has 4xA78 + 4xA55 (not sure what process node, but you can bet at least 7 nm-class):


More recently, there are some cheaper boards launched with the Genio 720 and 520. CNX Software is probably the best place to discover these.
 
Yeah, concerns about software support is why I didn't buy one. It sure sounded like the SoC and/or board vendor were serious about upstreaming software support for it, but I haven't tracked how that's played out. Radxa has an English-language user forum, but I didn't visit there since before tariffs kicked in, which is the only reason I would've been in any hurry to buy one.

In the end, I decided to stick with a Raspberry Pi 5, because I knew its software support would be rock solid. I used to want an Orange Pi 5+, but the efficiency data I've seen suggests it's really not a lot better than the Raspberry Pi 5. In theory, its GPU is a lot better, but I trust R.Pi 5's to be better supported.
RP5 and OP5+ are pretty near identical in terms of CPU performance. The RP5 allows overclocking while the OP5 doesn't, and with the extra 4 A55 cores the OP5+ without overclock is just about identical to the RP5 with overclock, e.g. on the WASM benchmark. For general use those extra A55 cores are just unnecessary complexity; the typical motive for big.LITTLE on ARM is all about energy savings, which the OP5+ doesn't target. In the industrial embedded space they might have some use, and at least they don't seem to get in the way.

The main differences are: OP5+ goes up to 32GB of RAM (which I got for a total of around €200), GPU is pretty near the N6005 and passes the same Google 3D test at 4k, while the RP5 definitely falls short there.

There is an NPU, which I don't care about and which lacks support anyway, but there is plenty of difference in I/O vs the RP5.

One full PCIe v3 x4 (M.2), which I use for a Samsung 970 Pro 1TB that got kicked out somewhere else, and a PCIe v3 x1 (M.2), which holds an Intel AX200 Wi-Fi card on my board. The dual built-in 2.5 Gbit ports are nothing to complain about, either.

On the RP5 I use a USB3 adapter for 2.5 Gbit Ethernet, while the other USB3 port holds one of those super fast Kingston USB3 drives, which can go up to 1GByte/s on a 10Gbit USB port. It's around 450MByte/s on the RP5, about the best the platform is capable of.

I run Proxmox and KDE on both, because I can, but the OP5+ certainly gives more headroom.
That's why I like Alder Lake-N. I run a mainstream distro and everything just works. Firefox even supports HW accel of most video codecs, out of the box!
OpenGL support on the OP5+ comes via Armbian if you want Debian (for Proxmox); their Ubuntu build has it, too.

4k Youtube doesn't quite work on the RP5, it's fine on the OP5+. Little doubt Atoms do better there.
Rockchip RK3588 used Samsung 8 nm? I think Raspberry Pi 5 used a 14nm-class node. However, the Cix CD8180 SoC on that Radxa Orion O6 is made on TSMC N6!
Well, neither the RP5 nor the OP5 are really made to be fantastically good at idle power. They won't go very high at peak usage, but they also won't go as low as an equivalent phone SoC. At their heart they are embedded server designs and supposed to work 24x7. I doubt they'll beat your N97 on idle and are no fruity cult Mx chips, either.
Mediatek Genio is another one to watch. The 1200 has 4xA78 + 4xA55 (not sure what process node, but you can bet at least 7 nm-class):

More recently, there are some cheaper boards launched with the Genio 720 and 520. CNX Software is probably the best place to discover these.
cnx-software isn't a daily read, but it has definitely been a once-a-week read for many years :)

My ARMbitions have cooled down, now that I no longer have to prove how much data center power we'd save by going ARM.

I'm happy to just stick with x86 until someone makes me a truly irresistible (2x or better) offer in either price, or performance, or energy consumption at similar usability and flexibility.
 
Unless they both happen to have the same compiler, I think there's certainly room for performance differences. Even though it's called "asm", it still uses an abstract machine model and virtual registers. It's not really that much lower-level than C which, after all, was designed to be a portable substitute for assembly language. Therefore, plenty of room exists for WASM backends to do optimizations.
Pretty sure it is the same compiler, inside browsers and everywhere else, because everybody is sharing the same code base.

I'd prefer data points over a fight, so if you could just do both?
The N6005 has Gen11, whereas Alder Lake-N has a second gen Xe iGPU. Xe includes AV1 decode hardware, Gen11 doesn't.
On Tiger Lake, Xe was able to pull off enormous improvements vs. previous HD iGPUs. The GT3e iGPU on the i7-8559U in my NUC8, with 48EU and 128MB of eDRAM, only delivered about a 50% improvement over the 24EU "normal" iGPU of the i7-10710U in the NUC10 despite using twice the resources, indicating just how difficult it was to speed up graphics constrained by DRAM bandwidth, even with eDRAM!

Yet the Tiger Lake NUC11 (Tiger Lake and Jasper Lake were NUC11 with Intel...) with 96EU and without any eDRAM help gave linear performance improvements, 4x the NUC10 iGPU, which seemed pretty near impossible!

The solution is something that's also been used in recent dGPU iterations: a relatively large last-level SRAM cache dedicated to iGPU use, inspired by the eDRAM, but with much smaller capacity yet far higher bandwidth and lower latency.

The 10nm process shrink was able to pay for that on the big and expensive Core SoCs.

But I doubt that the Alder Lake Atoms' iGPU replicates that, because SRAM eats 6-8 very hard-to-shrink transistors per bit, something Intel might accept in a full SoC with P-cores, but which doesn't really fit the Atom use case, which is all built around small, cheap dies. With single-channel DRAM and only 24 vs 96 EUs, they might have gone easy on the SRAM.

Media encoding IP blocks aren't big by today's standards and shrink well. They are also ultra important for the critical electronic signage use cases, where Atoms are popular.

I'm not going to fight over Jasper Lake vs. Alder Lake Atom GPU performance; I'll just say that both the GCN and the RDNA3 AMD APUs are in Tiger Lake 96EU territory.

...because that's important for your job, or why exactly?
Mostly personal, but somewhat reasonable, I'd say.

My principal desktop setup is two 42" 4k screens, with a cascade of KVMs to connect everything not mobile. Switching resolutions as well as screens creates a mess far worse than just switching screens, so 4k at 60Hz is a strong baseline; 144Hz and HDR is what I use on the main VA panel, 60Hz without HDR on the secondary IPS, which I use mostly when I measure or want to monitor a secondary system. I'd say it's a single-monitor setup with an easy second option.

If I game, I game on gaming rigs, got plenty of those, so I don't have to squeeze that into every notebook or NUC. But a competent 4k with a 60Hz minimum includes video at the same resolution, fluid scrolling across large PDFs and web sites and just enough 3D to satisfy the info-worker.

And Google's 3D maps are about the best optimized 3D graphics presentation there is. It is so thoroughly optimized there is just no excuse not to do it, because it runs really well even at 4k on something as lowly as the Orange PI5 and a Jasper Lake Atom. Certainly every mobile phone from the last few years could do it, even mid-range ones, if only because it even runs on my Jetson Nano and that's a crippled Switch 1.

At THD or 1440 I'm pretty sure it even runs on a Sandy Bridge HD3000.

And then I just use it a lot, because I manage things spatially in my head.
I'm just asking, because I used a Sandybridge iGPU on a 1440p screen for many years and it was totally fine for everything non-3D I wanted to do. And that's like stone age tech, compared to modern iGPUs and memory bandwidth.

I think it depends on the settings you use. My N97 is disproportionately faster on my 1440p screen than my i5-1250P is on a 4k screen, in spite of having 4x the shaders and double the memory bandwidth. Probably a more significant difference is the OS and graphics backend. The N97 is using Linux and either OpenGL or Vulkan, whereas the i5 is my work machine and runs Windows, therefore most likely using a DX12 backend with different features and such.
The jump from 1440 to 4k more than doubles the pixels. It's a cut off point for quite a lot of graphics hardware, which does pretty well with the former, yet fails with the latter: there is a reason 1440p is one of the most popular gaming resolutions today.

If you ran both on the same screen, the i5 should do a lot better, not even Windows could slow it down.

But then the overhead of corporate watchdog software is hard to overestimate; I still have one of those as well, and it's not only a dog, but still a 10th gen quad core.
Nonsense. My 32GB DIMM is already dual-rank and these CPUs don't support quad-rank. In fact, I'm quite sure quad-rank DDR5 UDIMMs aren't even a thing. It will use JEDEC timings, just like my 32 GB DIMM does, because these SoCs don't support XMP or overclocking.
I would have said the same, especially since quad rank isn't even officially supported by Zen.

But I believe this came from Wendell, who's pushing for 256GB on the desktop, so whatever the issue, I'd advise caution and testing.
What enables 64 GB DIMMs is the density of the available DRAM chips. When DDR5 launched, the only thing on the market were 16 Gb chips. That meant you could get 16 GB DIMMs with 8 chips (i.e. single-rank DIMMs) or 32 GB DIMMs with 16 chips (i.e. dual-rank DIMMs). Next came 24 Gb chips, leading to either 24 GB or 48 GB DIMMs. Now, 32 Gb chips are starting to hit the market.
Let's test and see, I never mind extra RAM.
You mean "value"? IMO, the path of virtue should mean they do support ECC, which they don't.
Missing an "f" there: I meant to say that AMD went off the path of virtue
Well, the Pro APUs can do it. So, it's not a hardware limitation, but rather artificial market segmentation.
I've not been able to find out for a long time. These APUs come in different BGA form factors and there seems to be a correlation between that and ECC support. And of course the mainboard needs to route the extra traces from the SO-DIMM sockets, which might not be there when omitting them saves half a cent.

And yes, I've hated it for its market segmentation nature, when AMD was very loudly advertising how they didn't do that with the original Zen launch.

Same with per-VM encryption and other potentially interesting things present in the hardware but only available for Pro and EPYC customers.

And of course my Thinkpad has a Pro APU, but soldered RAM without the extra chips for parity...
DRAM degrades with age and use. The longer it stays in service, the greater the likelihood of errors. And by the time you actually notice, it could be way too late. If you value your data, the least you could do is periodically run at least two passes of memtest.
That's been very much impressed on me by my former Bull colleagues who developed their HPC hardware, too.
At my job, I've seen lots of instances of machines that have been in service for several years developing instability (if they're not using ECC DRAM) or starting to log ECC errors (for those which do have ECC DRAM). We even had one server that was in service for about a decade, with 3 out of 4 DIMMs reporting ECC errors and one of them even reporting multi-bit (i.e. uncorrectable) errors! Since it was not a machine subject to regular monitoring, the only way it came to my attention was due to instability. Apps which triggered a multi-bit ECC error would fail with SIGBUS.
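For machines that do have ECC, a simple way to catch this earlier is to poll the Linux EDAC counters instead of waiting for instability. A minimal sketch, assuming the EDAC driver for your memory controller is loaded and exposes the usual sysfs counters:

```python
#!/usr/bin/env python3
# Minimal sketch: report corrected (ce_count) and uncorrected (ue_count)
# ECC error counters exposed by the Linux EDAC subsystem under sysfs.
# Assumes an EDAC driver for your memory controller is loaded.
from pathlib import Path

def read_count(path: Path) -> int:
    try:
        return int(path.read_text().strip())
    except (OSError, ValueError):
        return 0

def main() -> None:
    controllers = sorted(Path("/sys/devices/system/edac/mc").glob("mc[0-9]*"))
    if not controllers:
        print("No EDAC memory controllers found (driver not loaded?)")
        return
    for mc in controllers:
        ce = read_count(mc / "ce_count")  # corrected errors
        ue = read_count(mc / "ue_count")  # uncorrected errors
        print(f"{mc.name}: corrected={ce} uncorrected={ue}")
        if ue:
            print(f"  WARNING: {mc.name} has logged uncorrectable errors")

if __name__ == "__main__":
    main()
```

Run something like that from cron or a monitoring agent and alert on any increase; by the time an app dies with SIGBUS, you are already past the point where the data can be trusted.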
I know, I know and I try to contain the risk, and live with what's left.
But I also drive way too fast when the Autobahn is empty (150 mph). Good thing that's rare...
They enabled it, but the user community had to ask for it. They were simply unaware of this capability, but subsequently enabled it without any reluctance, other than trying to figure out how to test it. They gave no indication that it cost them anything.
I've seen some BIOSes so bad, I'm pretty sure they were obtained without a proper support contract...
And the Intel we know and love doesn't give away anything for free that it can charge money for.
Who can you trust?
A simple test that some people do is to overclock their memory until they see ECC errors getting logged. Unfortunately, Alder Lake-N doesn't support overclocking. Another option is to use a known-bad DIMM.
There are ways to test and validate that without keeping a faulty DIMM around: ECC can be switched off and on. It's just that the kind of information you need to configure these things isn't available without an NDA, which I guess is why we don't see this in open source.

And you could try Row Hammer to see if it triggers ECC errors. The guys who "invented" Row Hammer told me that not even ECC DIMMs protect against modern variants of it; it just takes a little longer...
I think I've seen the PassMark version of Memtest86 mention ECC error insertion, but not all chipsets are supported. My version is a little out-of-date.
That's what led me to say what I said above: if you have the documentation and matching hardware, it can be done in software.

And then you have to trust PassMark to do it properly...
 
One full PCIe v3 x4 (M.2), which I use for a Samsung 970 Pro 1TB that got kicked out somewhere else, and a PCIe v3 x1 (M.2), which holds an Intel AX200 WIFI on my board. The dual built-in 2.5 GBit are nothing to complain about, either.
I definitely had my eye on the RK3588's PCIe connectivity as one of its major advantages. However, the reality is that you'd be hard-pressed to see much practical difference between a NVMe SSD running at PCIe 3.0 x1 vs. x4, on such a weak CPU.
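For a rough sense of the numbers involved, here's a back-of-the-envelope calculation of theoretical PCIe 3.0 payload bandwidth (8 GT/s per lane with 128b/130b encoding); these are upper bounds, not what a small SSD behind a weak CPU will actually sustain:

```python
# Back-of-the-envelope: theoretical PCIe 3.0 payload bandwidth per link width.
# 8 GT/s per lane, 128b/130b encoding; protocol overhead is ignored, so real
# throughput is lower, especially with a weak host CPU.
TRANSFERS_PER_S = 8e9          # 8 GT/s per PCIe 3.0 lane
ENCODING_EFFICIENCY = 128 / 130

def pcie3_gbytes_per_s(lanes: int) -> float:
    bits_per_s = TRANSFERS_PER_S * ENCODING_EFFICIENCY * lanes
    return bits_per_s / 8 / 1e9

for lanes in (1, 4):
    print(f"PCIe 3.0 x{lanes}: ~{pcie3_gbytes_per_s(lanes):.2f} GB/s")

# PCIe 3.0 x1: ~0.98 GB/s
# PCIe 3.0 x4: ~3.94 GB/s
```

Roughly 1 GB/s on a single lane is already more than most everyday workloads on an RK3588-class CPU will pull from an SSD, which is why the x1 vs. x4 gap mostly shows up in synthetic sequential benchmarks.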

On the RP5 I use a USB3 adapter for 2.5 Gbit Ethernet
I'd have liked 2.5G Ethernet, as pretty much everything else on my network now uses that or 10G, but it's not important enough that I'd bother with the adapter. For me, these machines are mainly useful for testing or benchmarking things on an ARMv8.2 core and maybe a non-Intel/AMD/Nvidia GPU. I don't need to use them heavily for anything.

BTW, I got 8 GB, after seeing some suggestions of 16 GB incurring a slight performance penalty. There were even some initial performance issues with the 8 GB model, but they got ironed out. I also lucked into getting the lower power SoC, first introduced when the 2 GB model launched.

Well, neither the RP5 nor the OP5 are really made to be fantastically good at idle power. They won't go very high at peak usage, but they also won't go as low as an equivalent phone SoC. At their heart they are embedded server designs and supposed to work 24x7.
No, I disagree. The SoC for the Raspberry Pi was custom-made for it. The Pi is supposed to target embedded applications. The original Pi idled at an impressive 1 to 2W, which is one thing I loved about it. Since then, R.Pi has gotten away from its low-power embedded roots (perhaps, in part, due to their Pico and Zero product lines), which I think is unfortunate.

The RK3588 is most certainly not a server SoC!! It is aimed at tablets and set top boxes! Given its complete lack of any RAS features, I don't understand how you can possibly think otherwise! On the datasheet, the manufacturer lists the target applications as:
  • Cockpit
  • Smart Display
  • AI Camera
  • Edge Computing
  • ARM PC
  • Tablet
  • NVR
  • AR/VR

Even the cores in them are of the mass-market oriented Cortex variety, not the Neoverse variants targeted at servers. Nor are they even the "AE" variants, qualified for self-driving applications!

My ARMbitions have cooled down, now that I no longer have to prove how much data center power we'd save by going ARM.

I'm happy to just stick with x86 until someone makes me a truly irresistible (2x or better) offer in either price, performance, or energy consumption at similar usability and flexibility.
I'm interested in ARM, because it's currently the other dominant ISA (besides x86). So, I care about its performance characteristics, especially when, where, and how they differ from x86. I wish I had an ARM system with SVE2, and this is why I'm not so interested in the current crop of Snapdragon-based machines. If I'm not going to get SVE2, then I'd rather go with something cheap & simple, like the R.Pi 5.
 
Eh, aren't there at least Chromium, Firefox and Safari implementations, plus the various standalone WASM execution engines?
I get the impression that Chrome/Chromium and Firefox each implement WASM support atop their respective JavaScript engines. Apart from that, some of the dominant implementations seem to be Wasmer and Wasmtime.

I found this 2-year-old benchmark comparison of ten different WASM engines, which not only drives home the point of how many implementations are out there, but also how much the implementation can influence runtime performance:


The top 3 seem quite similar. Then, there's another cluster of 3. Then, a cluster of two, and the bottom two performers are each way out on their own.
 
Eh, aren't there at least Chromium, Firefox and Safari implementations, plus the various standalone WASM execution engines?
Again, I'm no authority on that issue, but from what my colleagues in the research lab explained to me, it doesn't make sense that they'd be different.

The browser vendors need to choose their battles, and in the case of WASM, its main target isn't necessarily browsers. It's about a portable binary format that
a) is portable across ISAs
b) is safe to run even though it executes at near-native speed

For a) it can't go too deeply into things like vector (or AI) ISA extensions, which may not exist or be too hard to match across different ISAs. For b), it's all about removing classes of instructions that are likely to have dangerous side effects, so that a relatively simple validator can check whether side effects could happen before running the translated binary. It's designed to make things like return-oriented programming impossible.
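To make the "validate first, then run as native code" idea concrete, here's a minimal sketch using the standalone Wasmtime engine through its Python bindings (assuming `pip install wasmtime`); the tiny module and its `add` export are made up for illustration:

```python
# Minimal sketch: load, validate and run a tiny WASM module outside a browser,
# using the Wasmtime Python bindings (pip install wasmtime).
# The module text and its "add" export are made up for illustration.
from wasmtime import Engine, Store, Module, Instance

WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
store = Store(engine)

# Module() validates the bytecode and compiles it to native instructions for
# whatever ISA the host happens to be; invalid modules are rejected here.
module = Module(engine, WAT)

# The instance runs inside the engine's sandbox: it can only touch its own
# linear memory and the imports we hand it (none, in this case).
instance = Instance(store, module, [])
add = instance.exports(store)["add"]
print(add(store, 2, 3))  # -> 5
```

The same .wasm file would run unchanged on an x86 or ARM host, which is the portability point in a); the up-front validation before anything executes is the safety point in b).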

And since open source WASM implementations were made available to browser vendors, why should they create their own, if they can't somehow make it go at least twice as fast, so they can claim a competitive edge? For the instructions WASM supports, the resulting code is native speed because it's native instructions: no Java, no JavaScript, no emulation, no way to make them faster.

If you know better, I'd be glad to hear it, but this isn't about JavaScript.
 
I definitely had my eye on the RK3588's PCIe connectivity as one of its major advantages. However, the reality is that you'd be hard-pressed to see much practical difference between a NVMe SSD running at PCIe 3.0 x1 vs. x4, on such a weak CPU.
True, I guess, even if the OP5+ does feel a bit snappier than the RP5: could be the 32GB vs. 8GB of RAM or the better GPU... It just generally disappoints less as a 4K KDE desktop, really on par with the Jasper Lake.

The RP5 always seems to struggle a bit, again with KDE on 4k, which admittedly it wasn't designed for. I believe we talked previously about the DSP roots of its GPU/VPU.
I'd have liked 2.5G Ethernet, as pretty much everything else on my network now uses that or 10G, but it's not important enough that I'd bother with the adapter. For me, these machines are mainly useful for testing or benchmarking things on an ARMv8.2 core and maybe a non-Intel/AMD/Nvidia GPU. I don't need to use them heavily for anything.
Since my principal experiments were around Proxmox, and they also use Ceph storage from x86 nodes, 2.5 Gbit/s is appreciated. And I have lots of these 2.5 GbE USB3 adapters, with even more coming soon, now that 5 Gbit USB 3.2 variants are starting to become economical and I'm upgrading some of the beefier laptops, which lack Thunderbolt. On TB machines I use Aquantia 10GBase-T adapters.
The RK3588 is most certainly not a server SoC!! It is aimed at tablets and set top boxes! Given its complete lack of any RAS features, I don't understand how you can possibly think otherwise! On the datasheet, the manufacturer lists the target applications as:
  • Cockpit
  • Smart Display
  • AI Camera
  • Edge Computing
  • ARM PC
  • Tablet
  • NVR
  • AR/VR
Alas, wording... What I meant was an embedded or edge server, which means always on and always on wall power. You might add robotics, but I don't see tablet, AR/VR, or really anything on small batteries; that's probably marketing overreach.

Whoever tried to build battery-powered devices from Raspberries didn't exactly beat endurance records, nor would they be able to do it with the OP5: it's a connectivity monster and that costs power, even if the CPU cores are modest.

Actually, there is one guy who did build a laptop from an RK3588 for his engineering thesis and it didn't do great on endurance.
I'm interested in ARM, because it's currently the other dominant ISA (besides x86). So, I care about its performance characteristics, especially when, where, and how they differ from x86. I wish I had an ARM system with SVE2, and this is why I'm not so interested in the current crop of Snapdragon-based machines. If I'm not going to get SVE2, then I'd rather go with something cheap & simple, like the R.Pi 5.
For the hyperscalers the main attraction is the ability to include their own IP blocks for DC fabrics and accelerators in their ARM designs: that's where they beat x86 into a pulp, for some use cases, not all. But they keep those goodies to themselves.

That's why it doesn't seem to translate into similar advantages on the consumer side, apart from the fruity cult, who keeps their stuff proprietary, too.

Until that changes, or someone pulls an even better RISC-V out of their head, the mainstream will stay on x86.
 
Again, I'm no authority on that issue, but from what my colleagues in the research lab explained to me, it doesn't make sense that they'd be different.
This is a deeply flawed argument and ignores each team's belief in their own technology. In particular, the Firefox team tries to write everything in Rust, while Chrome is C++ (last I checked) and would switch to Go if anything. Also, both browsers have their own JavaScript engines. Firefox has SpiderMonkey, which explicitly says it also handles WebAssembly.


Chrome uses the V8 engine, which likewise explicitly says it handles WebAssembly:


The browser vendors need to choose their battles, and in the case of WASM, its main target isn't necessarily browsers.
It's right in the name!!! WebAssembly!

Wikipedia even cites a source to support the claim that its primary stated goal is to accelerate scripted content in web pages!

"The main goal of WebAssembly is to facilitate high-performance applications on web pages, but it is also designed to be usable in non-web environments."

Source: https://en.wikipedia.org/wiki/WebAssembly

For a) it can't go too deeply into things like vector (or AI) ISA extensions, which may not exist or be too hard to match across different ISAs
This is exactly the sort of case that a good compiler should help with. Hence, no reason to assume all implementations perform equally.

And since open source WASM implementations were made available to browser vendors, why should they create their own,
Because that's not how the software world works. Every browser has its own infrastructure, environment, security model, JIT engine, and way for scripts to invoke platform library functionality. The incremental cost of adding a front end for WebAssembly is small, while the lift of trying to integrate a foreign, third-party implementation into your browser is quite large, and meanwhile you've externalized a good deal of your attack surface. Not to mention the whole issue of language wars that I mentioned above.

if they can't somehow make it go at least twice as fast, so they can claim a competitive edge?
You wouldn't say that about CPUs. The GCC and LLVM teams don't treat anything less than 2x as insignificant. The SpiderMonkey and V8 teams surely care about performance differences smaller than 2x.

You just pulled this figure out of nowhere, and I can pretty much guarantee that you wouldn't accept a 2x performance difference in many performance contexts. Like, even your Google Earth benchmark, which you admit isn't based on any fundamental need to use Google Earth as part of your job, but just an arbitrary line in the sand you decided to draw.

For the instructions WASM supports, the resulting code is native speed because it's native instructions: no Java, no JavaScript, no emulation, no way to make them faster.
Absolutely not. It's basically just a modern alternative to Java bytecode.

I think you should stick to supportable facts, and not try to make factual claims that simply align with your values and imagination.