News Insane DDR4 price spikes mean last-gen RAM loses its value luster versus DDR5 — prices have nearly tripled in just two months

Are they now completely out of the sector?
They're still in it, but they don't have anything datacenter scale. Perhaps they've got something brewing that's part of UEC, but everything I've seen out of their networking business is effectively refinements of existing products. They've been doing node shrinks, adding features in some parts and updating PCIe interfaces. Their current fastest controller is the 830, which is a single-port 200Gb PCIe 5.0 part (8 lanes, so this could have been 400Gb without oversaturation).
LOL, when Realtek is our savior!
Yeah, that's the sad truth with client ethernet these days. Not that I have anything against Realtek; it's just that they're the only company bothering.
BTW, my Intel X550 can do NBase-T. I'm currently running it in 2.5G mode.
All of Intel's controllers for RJ45 (as far as I'm aware) support full scaling; it's just the SFP ones which are 1Gb/10Gb.
I wish 2.5 gig came along like 10 years ago.
While I'd much rather have seen a stripped-down, client-focused 10Gb solution, this would have been great. I feel like client ethernet got stuck between enterprise profits and a focus on wifi.
 
How many ports do you need and what LAN speed do you want? If you can deal with 4x SATA ports, 2.5 Gbps ethernet, and one PCIe 3.0 x4 NVMe drive, HardKernel has an N305 board you can use (passively-cooled, even) with a mini-ITX conversion kit.
That's a very good question! 😉

Initially I had planned on all-flash storage, with seldom-accessed stuff on cold-storage HDDs. But earlier this year one of the cold-storage HDDs suddenly had bad sector issues, so I'm reconsidering the plan... perhaps a flash pool and a HDD pool, perhaps a HDD pool with flash cache.

Flash-only was more about latency and lower power consumption than HDDs, rather than throughput, but when even low-end NVMe can saturate a 10Gbit link, it feels a bit meh settling for 2.5Gbit, and 5Gbit seems somewhat oddball (and last time I looked, more expensive than 10Gbit). I might settle for 2.5Gbit though; stuff running locally on the box can still benefit from NVMe bandwidth.
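To put rough numbers on that gap (the 3,000 MB/s figure is just an assumed sequential read speed for a low-end NVMe drive, and protocol overhead will shave a bit more off in practice):

```python
# Back-of-the-envelope: usable throughput of common LAN speeds versus an
# assumed low-end NVMe drive (~3000 MB/s sequential read).
LINKS_GBPS = {"1GbE": 1, "2.5GbE": 2.5, "5GbE": 5, "10GbE": 10}
NVME_READ_MBPS = 3000  # assumption, not a measured value

for name, gbps in LINKS_GBPS.items():
    mbps = gbps * 1000 / 8  # Gbit/s -> MB/s, ignoring TCP/SMB overhead
    print(f"{name}: ~{mbps:.0f} MB/s ({mbps / NVME_READ_MBPS:.0%} of the drive)")
```

So 2.5Gbit tops out around 310 MB/s, roughly a tenth of what even a modest NVMe drive can deliver locally, while 10Gbit gets you to about 1.25 GB/s.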

The old system I'm looking to replace has an i5-3550 CPU, and it wall-socket idles at ~35W. I'm currently testing TrueNAS on a NUC, to see if it's viable for the various tasks the old system handles atm (file storage, Postgres for various things including Home Assistant sensor data, Jellyfin, ...) – it performs better at a fraction of the power consumption, but obviously doesn't have enough storage connectivity.
 
Initially I had planned on all-flash storage, with seldom-accessed stuff on cold-storage HDDs. But earlier this year one of the cold-storage HDDs suddenly had bad sector issues, so I'm reconsidering the plan... perhaps a flash pool and a HDD pool, perhaps a HDD pool with flash cache.
I use SSD for hot storage and periodically back up to 4-disk RAID-6, via rsync. For my most sensitive and irreplaceable data, I burn backups on BD-R.

My microserver is Alder Lake-N. That's what holds the 2 TB of hot storage. I access it over CIFS and NFS. I only occasionally power up my backup & bulk data server, which has the hard disks.
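For anyone who wants to script that kind of periodic sync: a minimal sketch along those lines, with hypothetical placeholder paths, might look like this (the rsync flags shown are standard; adjust excludes and retention to taste):

```python
#!/usr/bin/env python3
# Minimal sketch of a periodic rsync job from hot SSD storage to a RAID-6
# volume. SRC/DST are hypothetical placeholders for the real mount points.
import subprocess
import sys
from datetime import date

SRC = "/srv/hot/"                # hot SSD storage (assumed mount point)
DST = "/mnt/raid6/backups/hot/"  # RAID-6 array (assumed mount point)

cmd = [
    "rsync",
    "-aHAX",          # archive mode, plus hardlinks, ACLs and xattrs
    "--delete",       # mirror deletions so DST stays an exact copy of SRC
    "--info=stats2",  # print a transfer summary at the end
    SRC, DST,
]

result = subprocess.run(cmd)
print(f"{date.today()}: rsync exited with code {result.returncode}")
sys.exit(result.returncode)
```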
 
That's a very good question! 😉

Initially I had planned on all-flash storage, with seldom-accessed stuff on cold-storage HDDs. But earlier this year one of the cold-storage HDDs suddenly had bad sector issues, so I'm reconsidering the plan... perhaps a flash pool and a HDD pool, perhaps a HDD pool with flash cache.

Flash-only was more about latency and lower power consumption than HDDs, rather than throughput, but when even low-end NVMe can saturate a 10Gbit link, it feels a bit meh settling for 2.5Gbit, and 5Gbit seems somewhat oddball (and last time I looked, more expensive than 10Gbit). I might settle for 2.5Gbit though; stuff running locally on the box can still benefit from NVMe bandwidth.

The old system I'm looking to replace has an i5-3550 CPU, and it wall-socket idles at ~35W. I'm currently testing TrueNAS on a NUC, to see if it's viable for the various tasks the old system handles atm (file storage, Postgres for various things including Home Assistant sensor data, Jellyfin, ...) – it performs better at a fraction of the power consumption, but obviously doesn't have enough storage connectivity.
With dozens of terabytes in media data and VMs, all-flash was never really an option for me, and since I've used hardware RAID adapters professionally and privately for a long time, I value their ease of use when it comes to transparent expansion and drive replacement. On top of that, recycled hardware RAID adapters have become dirt cheap and have always been pretty near indestructible. Linux MD RAID, ZFS and CEPH are fine, and I use them too, but the most important personal data is still very much used by Windows clients, so I've started using ReFS with checksums on top of the hardware RAID, mostly for ZFS-like extra integrity and because it doesn't seem to be slower.

It was RAID6 on 2 and 8TB drives for many years, currently with 18TB drives, I've cut down to RAID5, using two sets of 4 drives, one primary and a cold secondary. I keep even colder tertiary and quaternary backups of more selective data sets.

For the gamer kids in the home I also keep a shared Steam archive, and since that's really just a cache, I use a RAID0 of leftover SATA-SSDs for that, because, unlike NVMe, SATA ports are plentiful and cheap to add. I dislike the write amplification checksumming RAIDs can cause and how that impacts endurance on SSDs, but with a Steam cache I don't need to worry, since it's not critical data. I also keep things like LLM model data there, anything that's just convenient to have on this side of an internet connection that's only 1Gbit.

I also keep hot VMs and other project data on NVMe and then make sure I back up everything I need to the HDD RAID, often across the 10Gbit network: those two balance out a little better, while top-notch NVMe is getting too fast for any network.

35 watts at the wall for a full system with disks isn't that bad; even with a CPU at 2-3 watts on idle it's hard to go much lower with peripherals and storage.

I've come to dislike TrueNAS a bit (their community shoots down anyone with Aquantia NICs), and for the longest time they just didn't evolve in terms of clustering, fault tolerance, or VM/container support. I'd been using OpenVZ/Virtuozzo, oVirt/RHV and GlusterFS professionally for so many years, while they were stuck on a single physical storage-only box.

It's not changed much, really, even with their Linux migration: storage alone just isn't attractive for your typical home setup, where even the current breed of entry-level boxes have 8 potent CPU cores and 64GB of RAM and easily run several VMs under Proxmox on the side, e.g. a pfSense, Univention IAS, NextCloud etc. With pass-through NICs for the firewall, 10Gbit for all East-West traffic and potentially even southward storage (CEPH), such a box (or a set) can go a long way. And by the way, if you're bent on TrueNAS, you can run that on Proxmox too, and when you pass through the SATA controller, speed will be near native. I run a Univention file and NextCloud server that way as well, but you need to watch the PCIe IOMMU group granularity (e.g. you can't just pass through a single SATA port). A quick way to sanity-check the grouping is sketched below.
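Here's a minimal sketch of that check, assuming a stock Linux/Proxmox host with the IOMMU enabled (intel_iommu=on or amd_iommu=on); everything sharing the SATA controller's IOMMU group has to go to the VM together:

```python
#!/usr/bin/env python3
# Minimal sketch: list PCI devices per IOMMU group on a Linux/Proxmox host,
# to see whether e.g. the SATA controller sits in a group of its own and can
# be passed through cleanly. Requires the IOMMU to be enabled, otherwise
# /sys/kernel/iommu_groups is empty.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
if not groups.is_dir() or not any(groups.iterdir()):
    raise SystemExit("No IOMMU groups found - is the IOMMU enabled in BIOS and kernel?")

for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"group {group.name}: {', '.join(devices)}")
```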

I'd still recommend Mini-ITX in a shoebox or mini-tower instead of a NUC with external add-ons, because cables get stepped on and break. There used to be a time when you could get mobile-on-desktop only in NUC format, but that's gotten better recently, albeit sometimes only with shipping directly from China.

My experiences with that are good, including returns with full refunds. But I live in the EU, where customs is mostly about a three-day delay, not economic warfare.
 
recycled hardware RAID adapters have become dirt cheap and have always been pretty near indestructible.
The SAS backplanes on several of our older Dell servers, at my job, all failed. I have a lot of respect for their servers, but not everything is bulletproof.

It was RAID6 on 2 and 8TB drives for many years, currently with 18TB drives,
The bigger the drive, the longer the rebuild (and time needed to complete a scrub/consistency check). Because of that, I try to limit my data hoarding and don't buy way bigger drives than I really need. I'm currently on 8 TB drives, which take nearly a day.
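As a rough illustration of how that scales (the ~150 MB/s average is just an assumed sustained HDD throughput, and a busy array serving foreground I/O will take considerably longer):

```python
# Rough estimate: a full-surface scrub or rebuild is limited by sequential
# throughput, so the time scales roughly linearly with capacity.
AVG_MBPS = 150  # assumed average sustained HDD throughput

for tb in (8, 18):
    hours = tb * 1e12 / (AVG_MBPS * 1e6) / 3600
    print(f"{tb} TB drive: ~{hours:.0f} h per full pass (best case, idle array)")
```

That prints roughly 15 h for 8 TB and 33 h for 18 TB, which lines up with the nearly-a-day figure once the array is also doing normal work.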

I'd still recommend Mini-ITX in a shoebox or mini-tower instead of a NUC with external add-ons, because cables get stepped on and break.
Agreed. I like a good mini-ITX case and strongly disfavor external drives.

There used to be a time when you could get mobile-on-desktop only in NUC format, but that's gotten better recently, albeit sometimes only with shipping directly from China.
For ECC memory support, your options are further restricted. I think Alder Lake-N is the value king, in this regard (again, you must confirm the OEM exposed the option to enable IB-ECC).

Please send us a link to your BGA Ryzen board that claims to support ECC memory.
 
For ECC memory support, your options are further restricted. I think Alder Lake-N is the value king, in this regard (again, you must confirm the OEM exposed the option to enable IB-ECC).

Please send us a link to your BGA Ryzen board that claims to support ECC memory.
It's the "Topton AMD Ryzen 7 5825U 8-Bay NAS motherboard", evidently Aliexpress links evidently won't work across continents, but it should pop up soon enough just entering "Ryzen NAS" on Aliexpress.

Here is a picture: [attached product photo of the board]

It actually disclaims ECC support so I didn't really expect it either. But contrary to the normal Ryzen 5800U, the 5825U per AMD specs seems to support ECC memory at the chip level, even without the typical "Pro" moniker.

And the BIOS on the board has the usual menus for ECC support, which I know (and use) from my various Ryzen desktop systems.

As to whether they actually work and do what they are supposed to, I'll have to wait until tomorrow; the package is en route. This vendor has good prices and availability (they sold the 64GB DDR4-3200 non-ECC kit for €100), but evidently took a little longer to get things going after my order last Wednesday.

So if I am reasonably sure ECC does in fact work, I'll keep the rather expensive ECC kit (€300 instead of €100 for non-ECC); otherwise I'll send it back and make do without.

I primarily bought it as a test replacement for the N5005 Atom boards I use in one of my Proxmox clusters for functional testing, where none of the systems currently have ECC support. And there the main motivation was that it allows an in-place upgrade with a pure mainboard swap at practically the same price as the original N5005.

All the extra SATA ports, the dual 2.5Gbit NICs and the easy upgrade to 10GBase-T via one of my pile of Aquantias were bonus incentives, but would open up a much wider field of use cases: if ECC is supported.

With that as a working option, it would be a rather attractive underdog board.

Prices: with Aliexpress they are typically all over the place. They always start with a truly ridiculous list price (€412) and then have a "real" price at around 50% of that (€277). There are always coupons around, on the site or elsewhere, so you'd better catch and add anything you find. Some work, others won't. Aliexpress strives to make first-time customers happy, so I throw a fresh e-mail address at them and then they try to compensate for EU VAT and shipping, which here results in another deduction once an item is in the shopping basket. Customs then actually rarely charges the full VAT or other tariffs, instead adding a handling fee and 3 days of delay. So I got mine (including the fan) for €191 and paid another €10 for customs on delivery.

If I click on the existing order and try to order again, it will go to €250 initially and then start deducting taxes etc., so I'd probably have to dig up a new e-mail address or wait until they have another sale, typically every other week or so.

That also got me the 8845 for €372 instead of €422 or €1055, the latter being the first official list price. I had to pay another €12 for customs/taxes/tariffs on that.

I'd prefer normal and stable prices, but evidently this is very Chinese and with those differences even I can be made to play along.
 
In the case of both the Phoenix (7xxx) and Hawk Point (8xxx) APUs, not even the "Pro" versions seem to support ECC memory.

Same with Strix Point and probably Strix Halo; ECC support may have been dropped on all APUs with the transition to DDR5... which is a crime against prosumers, I'd say.
 
It actually disclaims ECC support so I didn't really expect it either. But contrary to the normal Ryzen 5800U, the 5825U per AMD specs seems to support ECC memory at the chip level, even without the typical "Pro" moniker.
Here's a board I found on the US site:

"Ultra Thin AMD Ryzen R5 R7 5825U 17 * 17 Mini Itx , All-In-One Industrial Control NAS Motherboard"
https://www.aliexpress.us/item/3256808369963564.html

It's listed as out-of-stock, unfortunately.

No statement is made about ECC, either way.

I do like how it's Thin mini-ITX, like my Jetway board. The weird thing about my Jetway board is it only came with a normal-height I/O shield. I had to cut it down, using snips, and then straighten it out with pliers. When we asked the manufacturer about a Thin-height I/O shield, they said none existed for it and they would only make them if we placed a large order.

I'd prefer normal and stable prices, but evidently this is very Chinese and with those differences even I can be made to play along.
Back when I placed the one-and-only AliExpress order I ever made, right before tariffs kicked in, I don't recall any special discounts. Maybe they figured the looming tariffs were enough incentive for people to order stuff.
 
which is a crime against prosumers, I'd say.
Nah, I think the answer is much simpler. They make the Pro chips for corporate laptops, primarily, and perhaps some corporate-oriented mini-PCs. Probably too few of the laptops featuring the old Pro APUs actually featured ECC memory. I had trouble finding one, when I looked.

So, if almost nobody is using a feature, and it uses up valuable socket real estate, why carry it forward? Plus, with DDR5 having on-die ECC, maybe enough of their corporate clients were satisfied to go without it (although I personally think that's a mistake).
 
Nah, I think the answer is much simpler. They make the Pro chips for corporate laptops, primarily, and perhaps some corporate-oriented mini-PCs. Probably too few of the laptops featuring the old Pro APUs actually featured ECC memory. I had trouble finding one, when I looked.

So, if almost nobody is using a feature, and it uses up valuable socket real estate, why carry it forward? Plus, with DDR5 having on-die ECC, maybe enough of their corporate clients were satisfied to go without it (although I personally think that's a mistake).
AMD has long had a strong presence in the embedded market, where ECC support was often required.

That's why I found it a little hard to believe that they'd really eliminated it, especially since it's so few extra transistors and they likely own the IP blocks and won't have to license support.

So I just had a closer look (you make me do that...) and ECC support still lives on, at least partially, in DDR5 APUs.
It is tied to a specific "socket" format, FP7r2, in Phoenix, while Hawk Point seems to have been a refresh in FP7 and FP8 only.

Strix Point also has matching Pro variants with ECC support, but Strix Halo is strictly 256-bit LPDDR5X, and I believe all existing implementations use soldered RAM, not even LP-CAMM2.

I've never seen ECC support on LP-DDRany RAM, not sure it's a supported standard.

So looking at Hawk Point, only, there was no ECC support to be found anywhere, but that was really just a refresh of Phoenix, where the FP7r2 based Pro variants had it.

To confuse things further, there is a Hawk Point Refresh, even with the Pro moniker, but since that supports only the FP8 socket format, there is no ECC support, DDR or LP-DDR.

I have no idea at which point AMD decides to package dies into FP7, FP7r2 or FP8 chips, so it's hard to guess if leftover APUs originally intended for the laptop and NUC market might actually be converted into Pro units on FP7r2.

That would decide if and when low-cost µ-server variants might eventually appear on places like Aliexpress.
 
Here's a board I found on the US site:
"Ultra Thin AMD Ryzen R5 R7 5825U 17 * 17 Mini Itx , All-In-One Industrial Control NAS Motherboard"​
Dead link on this side of the Atlantic. And I've also occasionally encountered strong market segmentation on Aliexpress even within Europe, e.g. really good offers available in Spain and France but not in Germany. They are sailing very close to the wind...
It's listed as out-of-stock, unfortunately.

No statement is made about ECC, either way.

I do like how it's Thin mini-ITX, like my Jetway board. The weird thing about my Jetway board is it only came with a normal-height I/O shield. I had to cut it down, using snips, and then straighten it out with pliers. When we asked the manufacturer about a Thin-height I/O shield, they said none existed for it and they would only make them if we placed a large order.
Those thin Mini-ITX boards and their I/O shields died with the monitors that supported sliding them inside. The market went with NUCs instead, which is probably better: that computer-in-monitor was a typical fruity-cult invention, which resulted mostly in extra e-waste.

Even my "pseudo NUC" chassis are just a tad taller than the normal I/O shield and I use that space for SATA-SSDs, cables and better cooling. They came as a full metal chassis with Pico-PSU style power adapters and ready for 60 Watts of wall power at a very affordable price and even looked kind of cool. They were perfect for the fully passive N5005 Atoms, while a smaller form factor probably would have implied a fan. Together with SATA-SSDs they allowed for a 24x7 VM server cluster that was dead silent and produced very little heat.

My first exposure to NUCs scared me because at default settings their fan noise was intolerable, but when nothing faster and still passive ever materialized, I decided to try my luck with NUCs again. With the fan curves properly adjusted that turned out OK, and they're obviously still much faster, being all P-cores and using 10GBase-T NICs on TB3 ports.

Now with these new Mini-ITX boards I can recycle those slightly larger cases and e.g. mount a good Noctua NH-L9i cooler on the 8845, which allows it to go through its 65/54/45 watt peak power settings without ever becoming a nuisance or even noticeable.

I'm not stacking those systems directly; instead I found a cheap metal grill shoe rack on Amazon that now holds 9 small systems, including two Proxmox clusters, and fits easily under my desk, together with various workstation towers on both sides.
Back when I placed the one-and-only AliExpress order I ever made, right before tariffs kicked in, I don't recall any special discounts. Maybe they figured the looming tariffs were enough incentive for people to order stuff.
The Chinese love coupons, and Aliexpress has had them from the start. There is a Spanish site in my bookmarks which tracks a lot of the Chinese market offers together with batches of coupon codes. So I've always known about them, but only once I started actually purchasing there did I learn how useful they can be.
 
I've never seen ECC support on LP-DDRany RAM, not sure it's a supported standard.
DDR5 SODIMMs can support it. So, if you can find any LPDDR5 SODIMMs, then yeah. However, I'm not sure there are any of the LP variety.

I have no idea at which point AMD decides to package dies into FP7, FP7r2 or FP8 chips, so it's hard to guess if leftover APUs originally intended for the laptop and NUC market might actually be converted into Pro units on FP7r2.

That would decide if and when low-cost µ-server variants might eventually appear on places like Aliexpress.
Well, I assume Wildcat Lake will support IB-ECC. That might be interesting, even though I'd be happier with just Skymont or Darkmont E-cores.
 
Those thin Mini-ITX boards and their I/O shields died with the monitors that supported sliding them inside.
This is not true. I just sent you a link (which I understand you can't view), but it's also Thin mini-ITX. So is that Radxa Orion O6 board, with the 12-core Arm SoC, that we've previously discussed.

Even my "pseudo NUC" chassis are just a tad taller than the normal I/O shield
I have a Thin mini-ITX case:

That's the whole reason I had to cut the I/O shield. Granted, it's an older model, but it's still available.

For some reason, the Thin mini-ITX boards I've seen all seem aimed at least partially at the industrial sector. I'm guessing that's where the form factor found some traction.

They came as a full metal chassis with Pico-PSU style power adapters and ready for 60 Watts of wall power at a very affordable price and even looked kind of cool.
All of the Thin mini-ITX boards I've seen have DC barrel or USB-C power connectors in the I/O block.

I'm not stacking those systems directly; instead I found a cheap metal grill shoe rack on Amazon that now holds 9 small systems, including two Proxmox clusters, and fits easily under my desk, together with various workstation towers on both sides.
My Thin mini-ITX case is in a vertical orientation, though you can also use it horizontally. I do like the "slender tower" aspect ratio. I even tried to search for the same, in an ATX case, but that's another topic entirely.
 
I've started using ReFS with checksums on top of the hardware RAID, mostly for ZFS-like extra integrity and because it doesn't seem to be slower.
If you're only using it for checksums it's probably fine, but the key feature which makes it match ZFS still likely doesn't work right: https://www.reddit.com/r/DataHoarde...ing_refs_data_integrity_streams_corrupt_data/
I've never seen ECC support on LP-DDRany RAM, not sure it's a supported standard.
The Grace CPU/GPU thing from nvidia supports ECC on the LPDDR5X, but that's the only time I can remember seeing it. I can't say for sure how it's enabled, but they have non-standard memory sizes so I'm guessing some of the memory layers in each package are reserved.
Here's a board I found on the US site:

"Ultra Thin AMD Ryzen R5 R7 5825U 17 * 17 Mini Itx , All-In-One Industrial Control NAS Motherboard"
https://www.aliexpress.us/item/3256808369963564.html
https://www.aliexpress.us/item/3256808907332807.html

It's this board, and the listing specifically says no ECC support.
 
If you're only using it for checksums it's probably fine, but the key feature which makes it match ZFS still likely doesn't work right: https://www.reddit.com/r/DataHoarde...ing_refs_data_integrity_streams_corrupt_data/

The Grace CPU/GPU thing from nvidia supports ECC on the LPDDR5X, but that's the only time I can remember seeing it. I can't say for sure how it's enabled, but they have non-standard memory sizes so I'm guessing some of the memory layers in each package are reserved.

https://www.aliexpress.us/item/3256808907332807.html

It's this board, and the listing specifically says no ECC support.
Well, I got 2 Kingston DDR4-3200 ECC modules (part number 9965657-046.A00G), put them in, could then enable ECC (including scrubbing) in the BIOS, and at least HWiNFO reports it as present and active.

I even enabled error injection in the BIOS...

As with all desktop Zens, whether or not ECC is actually working is hard to find out; AMD wants you to buy EPYC CPUs at extra cost.

MemTest86 reports ECC support in the SPD info, but not from the system. More specifically I could not activate ECC error injection from MemTest86.

According to PassMark that is disabled by default on anything not an engineering sample...

And it's the same with all my Zen workstations, all of which are equipped with ECC DIMMs, some DDR4 some DDR5.

I've had Rowhammer have a go, but it couldn't hammer; I guess I'd have to compare it with the other, non-ECC DIMMs to see the difference. According to the Israeli researchers who originally published on Rowhammer, not even ECC helps any more on DDR5: density costs security...

On ECC with LP-DDRx I found this reference, which explains why only inline-ECC makes sense there.

First tentative conclusion: pretty sure correctable errors will be corrected on this board with ECC DIMMs. But reporting, at least on certain classes of errors, may not be there, ...which is important.

Is it important enough to go EPYC? That's a personal choice.

In my case the extra €200 for a bit of peace of mind will have to do. It's what I've been running in the home-lab for two decades, and so far I've not lost data to RAM errors, only my own fat fingers.

I'm running through various Linux distros to see if they do better on the reporting; so far Pop!_OS and Bazzite work great in terms of UI and even Steam games support, but are so-so on the ECC front.

Proxmox naturally does a bit better on the latter; it works exactly how you'd expect on my Xeons and even on the Zen workstation, and this board will be next.
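For reference, the kind of check I mean on the Linux side is just reading the EDAC counters from sysfs; a minimal sketch (standard EDAC sysfs paths, nothing board-specific) could look like this:

```python
#!/usr/bin/env python3
# Minimal sketch: check whether the Linux EDAC subsystem registered a memory
# controller and print its corrected/uncorrected error counters. If no mc*
# entry shows up, the platform isn't reporting ECC events to the OS,
# regardless of what the BIOS or HWiNFO claim.
from pathlib import Path

edac = Path("/sys/devices/system/edac/mc")
controllers = sorted(edac.glob("mc[0-9]*")) if edac.is_dir() else []

if not controllers:
    print("No EDAC memory controller registered - ECC reporting unavailable.")
else:
    for mc in controllers:
        ce = (mc / "ce_count").read_text().strip()
        ue = (mc / "ue_count").read_text().strip()
        print(f"{mc.name}: corrected errors={ce}, uncorrected errors={ue}")
```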
 
I've had Rowhammer have a go, but it couldn't hammer; I guess I'd have to compare it with the other, non-ECC DIMMs to see the difference. According to the Israeli researchers who originally published on Rowhammer, not even ECC helps any more on DDR5: density costs security...
Like I said, some CPUs have hardware mitigations against Rowhammer. You should look up details on that, before you infer anything about what such a test tells you about your RAM.

Secondly, I believe PassMark's Rowhammer test is not designed to fail, under normal circumstances. I've never seen it fail. I think they didn't go as far as they could've, since most people infer Memtest failures as an indication of defective/failed memory and therefore it wouldn't make sense to put a properly strenuous implementation in a mass-market consumer tool like that.

Once you're certain your CPU & microcode contain no mitigations, I'd suggest looking for proper Rowhammer sample code.

First tentative conclusion: pretty sure correctable errors will be corrected on this board with ECC DIMMs. But reporting, at least on certain classes of errors, may not be there, ...which is important.
You don't even know the wires are connected! The manufacturer says it doesn't support ECC, so you really should find an experimental way of verifying that, if you care about it. Like I said, there ought to be someone out there with a bad DIMM. Otherwise, I wonder if you could make one by painting an insulating material on one of the DIMM's data pads. Since it's DDR4, you could buy the smallest, oldest, slowest, used DIMM you can find for this experiment.
 
You don't even know the wires are connected! The manufacturer says it doesn't support ECC, so you really should find an experimental way of verifying that, if you care about it. Like I said, there ought to be someone out there with a bad DIMM. Otherwise, I wonder if you could make one by painting an insulating material on one of the DIMM's data pads. Since it's DDR4, you could buy the smallest, oldest, slowest, used DIMM you can find for this experiment.
SODIMM to UDIMM adapter + Passmark's ECC Tester! (I actually didn't even know that tester was a thing and I'm a little surprised they didn't make one for DDR5)
 
Like I said, some CPUs have hardware mitigations against Rowhammer. You should look up details on that, before you infer anything about what such a test tells you about your RAM.

Secondly, I believe PassMark's Rowhammer test is not designed to fail, under normal circumstances. I've never seen it fail. I think they didn't go as far as they could've, since most people infer Memtest failures as an indication of defective/failed memory and therefore it wouldn't make sense to put a properly strenuous implementation in a mass-market consumer tool like that.
Passmark are using the original 2015 rowhammer test, nothing of their own design.

And yes, I've seen it fail once, a bit of a nasty shock on a Kaveri A10-7850K DDR3-2400 system, very much a direct ancestor of this one, which had passed everything else.

But no, evidently Rowhammer can't be used as an ECC error injection replacement, which isn't all bad, considering what a can of worms it is.

Others seem to have used RAM overclocking or undervolting for a similar purpose, but I'm giving up on following that road; it's starting to eat the savings.
You don't even know the wires are connected! The manufacturer says it doesn't support ECC, so you really should find an experimental way of verifying that, if you care about it. Like I said, there ought to be someone out there with a bad DIMM. Otherwise, I wonder if you could make one by painting an insulating material on one of the DIMM's data pads. Since it's DDR4, you could buy the smallest, oldest, slowest, used DIMM you can find for this experiment.
At this point further effort will just dilute the value of the system, so I'll just have to live with the fact that it doesn't do ECC in a measurable and reliable manner: which is still fine, just not for a full NAS use case, where getting the data back intact is critical.

E.g. it may still be fine for a local downstream cache of centrally managed data or a Steam cache.

It's a bit of a shame, because there were many indications that this one might have escaped AMD's efforts to keep the "Pro" population away from affordable consumer hardware, but evidently it either doesn't work in the end or requires fixing either the BIOS or the Linux EDAC modules to override the BIOS settings.

The Barceló APUs are a bit of an oddity, because they all have the typical Pro features: remote management with key management and memory encryption, as well as ECC support. Evidently AMD felt compelled to deliver a pseudo refresh for them, e.g. for a huge customer rollout, but didn't want to refresh their entire APU line with Pro and non-Pro variants.

The Cezannes don't have any "Pro" variants, the first Barcelós were really just a batch of "Pro" sold as consumer variants and the only 'real' "Pro"s came another year later as Barceló-R but with 7xxx numbers, to everybody's delight.

Contrary to many other Topton designs, this one doesn't come with a variety of APUs to choose from: where those might have BIOS support for everything that sort of fits, this one seems to always have been 5825U-exclusive, with no 5800U option or similar where leaving out those extra 8 DIMM-to-FP6 traces might have made sense.

Well, I guess we'll never know unless those Chinese engineers start talking ...and we understand what they are saying.

It's been fun exploring, but now I'll just return those €300 ECC DIMMs and call it a day.

It's still a pretty cool machine, even an impressive desktop.
 
At this point further effort will just dilute the value of the system, so I'll just have to live with the fact that it doesn't do ECC in a measurable and reliable manner: which is still fine, just not for a full NAS use case, where getting the data back intact is critical.
Here's the AMD motherboard I'm using:

It has real ECC support. It's micro-ATX, but they also make mini-ITX boards, as well as some slightly cheaper models based on the B-series chipsets.

One downside I've discovered is that the heatsinks it uses are designed for server-grade airflow. When used in a desktop case, some people find the X570 chipset getting uncomfortably hot or even overheating. I've taken the step of replacing this and its LAN heatsink, but I also use a top-down CPU cooler that provides some residual airflow for the VRMs, DIMMs, and various heatsinks on the board.

It's still a pretty cool machine, even an impressive desktop.
Yeah, that's what I thought about the N97. I've talked a lot about the $220 ODROID-H4 Ultra, which features the N305, but if you just wanted an N97 system (which also supports IB-ECC), the base ODROID-H4 sells for $99.


It even supports passive cooling, if you don't run it in "Unlimited Mode".

The desktop environment on my N97 feels snappy enough. It runs Google Earth pretty smoothly, on my 1440p monitor.

As a server, its only real downsides are that it has just one NVMe 3.0 x4 and USB 3.0 for expansion. The Plus and Ultra models have 4x SATA (but the M.2 slot drops to x2, if you enable the SATA controller). Built-in Ethernet is limited to 2.5G, which is enough for me.

In order to exceed these capabilities, I don't really mind stepping up to a proper socketed motherboard. I have the impression that you're overly focused on Chinese mini-PCs and not thinking expansively enough, when seeking an appropriate solution for your various needs.
 
35 watts at the wall for a full system with disks isn't that bad; even with a CPU at 2-3 watts on idle it's hard to go much lower with peripherals and storage.
It might not be too bad if I had a bunch of spinning rust in the system, but for a SATA-SSD, two NVMe drives and pretty idle workload, it's pretty horrible.

And by the way, if you're bent on TrueNAS, you can run that on Proxmox too, and when you pass through the SATA controller, speed will be near native. I run a Univention file and NextCloud server that way as well, but you need to watch the PCIe IOMMU group granularity (e.g. you can't just pass through a single SATA port).
Not sure why I would want Proxmox as a base OS – it's additional complexity, and while hypervisor-based VMs add minimal overhead, they still add overhead. Also, you really, really, REALLY only want to add storage via a dedicated PCIe passthrough device; anything else has too high a risk of introducing silent data corruption. This would mean adding a PCIe HBA (costly, and high additional power usage), and either being confined to SATA storage or adding a *lot* of cost and power consumption for high-end SAS, or OCuLink/whatever NVMe/M.x/U.x-class stuff. It's also pretty wasteful dedicating right-sized VM resources to a TrueNAS VM.

And given that TrueNAS has been able to do containers and VMs for a while (even if it's an in-progress feature), I really don't see why anybody would want to do this. If you need a lot of compute workload, you should be doing that on a separate machine.

The main reason I'm pretty focused on TrueNAS is I want ZFS – I'm at a point where I don't have any faith in other (consumer-available) filesystems to work well enough with regards to data integrity. The secondary reason is that, while it's been fun and a great learning experience to muck around with FreeBSD and Linux distributions ranging from RedHat, Debian, Slackware, Gentoo and Arch over the previous decades, I want a base system that's "boring", has ZFS support baked in (and as a main feature rather than Just An Additional Patchset We Need To Manage), and can do a few easy containerized workloads.

I'd still recommend Mini-ITX in a shoebox or mini-tower instead of a NUC with external add-ons, because cables get stepped on and break. There used to be a time when you could get mobile-on-desktop only in NUC format, but that's gotten better recently, albeit sometimes only with shipping directly from China.
I've seen youtubers do "cute" NAS builds using mini PCs, with modifications that look way too flimsy – I was thinking more like moving the guts to a larger case, or at least doing some dremel + 3d-printing that's more structurally sane...

But anyway, the NUCs and other mini-PCs all seem to be gimped in one way or another that gets them **close** to being nice, but miss the mark. And as soon as you move to regular components, regardless of whether you choose mini-ITX boards and low-end CPUs, the power budget explodes.

There's some interesting Frankenstein boards on AliExpress, but that's a bit too much of a gamble for data storage... I want that system to be stable, and be serviceable (either through spare parts, or the ability to get a new similar-specced full replacement machine within a couple of days).

Apart from the Frankenstein boards, there are several mini-PCs (or custom NAS systems) that look sort of interesting... The UGreen flash NAS, the Asustor Flashstor, the recent Beelink ME mini... but they always have some ridiculous imbalance of capabilities/connectivity.
 
And as soon as you move to regular components, regardless of whether you choose mini-ITX boards and low-end CPUs, the power budget explodes.
A lot has to do with motherboard design and low power just not being a thing companies are concerned with for socketed CPUs.

Here's an interesting document I came across when I was looking to design my server box which is a compilation regarding low power consumption: https://docs.google.com/spreadsheets/d/1LHvT2fRp7I6Hf18LcSzsNnjp10VI-odvwZpQZKv_NCI/edit?gid=0#gid=0

Which comes from here: https://www.hardwareluxx.de/community/threads/die-sparsamsten-systeme-30w-idle.1007101/
There's some interesting Frankenstein boards on AliExpress, but that's a bit too much of a gamble for data storage... I want that system to be stable, and be serviceable (either through spare parts, or the ability to get a new similar-specced full replacement machine within a couple of days).

Apart from the Frankenstein boards, there are several mini-PCs (or custom NAS systems) that look sort of interesting... The UGreen flash NAS, the Asustor Flashstor, the recent Beelink ME mini... but they always have some ridiculous imbalance of capabilities/connectivity.
If you aren't looking for a ton of storage something like this could be a viable choice: https://www.hardkernel.com/shop/odroid-h4-plus/

If money was no object Xeon D comes into play, or maybe some Atom based boards.

I was disappointed by what was available because everything has big compromises unless you get one of the NAS boards off of AliExpress. I don't know of any of those having ECC support though which ruled them out for me. In the end I just basically tossed up my hands and put together something that didn't use more power than the server which was being replaced (~95W).
 
It might not be too bad if I had a bunch of spinning rust in the system, but for a SATA-SSD, two NVMe drives and pretty idle workload, it's pretty horrible.
When I bought my Mini-ITX Atoms around 2016, €120 didn't buy you a lot of computing power or connectivity.
For €200 of current money, the capabilities of a Zen APU base system are so much bigger that I find it hard to justify spending that same amount on Intel's newest N350 for much less functionality and power.

Improved technology should be able to give you the same functionality as the old Atom at only €50-70 now, but at those prices logistics eat all the revenue and nobody can sell a non-garbage product.

The bottom end of the market is getting very crowded.
Not sure why I would want Proxmox as a base OS – it's additional complexity, and while hypervisor-based VMs add minimal overhead, they still add overhead.
It's a result of the minimal useful quality system having grown so much in capabilities that it seems far too wasteful to just run a file server. Of course, virtualization has been part of my job since 1999, so for me it's just the basic Lego building block. But it also allows me to deploy satellite infra to my kids as they are moving out, where fault tolerance can be managed at the core (my home-lab), while local caching and security is available to them where they live.

With the €200 base board, I can give them a full pfSense appliance, IAM, file service, groupware and plenty of other potential appliances or functionalities with a level of independence and stability that almost approaches my home-lab but from a single box.

The overhead for virtualization used to be mostly in I/O, but when [consumer] networks are 10Gbit/s at best, your bottlenecks are elsewhere, and paravirtualized drivers have mostly eliminated the overhead vs. containers. I actually used pass-through SATA mostly because it was so easy to do and it eliminated even the theoretical overhead.

On the pfSense appliance, passing through those Intel NICs to the VM most likely will make a difference because it uses the Intel offload blocks directly and I don't have any other use for those 2.5GBit ports anyway, since the main East-West connection will be 10Gbit.

In my case it will move a Kaby Lake i7-7700T (35 Watt TDP) appliance, which has faithfully served me for many years, into a VM, saving quite a few watts without a negative performance impact.
Also, you really, really, REALLY only want to add storage via a dedicated PCIe passthrough device; anything else has too high a risk of introducing silent data corruption. This would mean adding a PCIe HBA (costly, and high additional power usage), and either being confined to SATA storage or adding a *lot* of cost and power consumption for high-end SAS, or OCuLink/whatever NVMe/M.x/U.x-class stuff. It's also pretty wasteful dedicating right-sized VM resources to a TrueNAS VM.
I'm not sure I follow: there is no difference between passing through the on-board SATA controller or any other one plugged into PCIe...
Proxmox and the VMs run off NVMe; only HDDs and perhaps some SATA-SSD leftovers would run on the SATA controller, so the VM gets exclusive control, and the base Proxmox won't see the SATA controller nor the disks on it.

And again, with paravirtualized drivers I/O overhead for SATA based block devices is so low, I'm not sure it makes much of a difference... with perhaps the exception of running the heavily tuned checksumming logic for ZFS or RAID on a virtual block device.

Because it runs fine grained minimal block sized accesses to the physical hardware, that case is much like the low level NIC access required to make pfSense sing with NIC offloading and is thus a candidate for pass-through.
And given that TrueNAS has been able to do containers and VMs for a while (even if it's an in-progress feature), I really don't see why anybody would want to do this. If you need a lot of compute workload, you should be doing that on a separate machine.
FreeBSD has supported chroot and jails for ages and I guess a hypervisor has been available for the longest time, too. But there is a big difference between having VMs and containers managed via a central GUI across a farm of machines and potentially even via agents (like on vSphere or oVirt) and a local shell interface.

What I've done professionally over the last decades is how hyperscalers started, too. I've consolidated functionalities from distinct servers on a virtualized host and I've then spread those workloads to cover fault-tolerance and scale.

And in home-use, lab or not, scale hasn't justified distinct machines for a long time, while fault tolerance doesn't need to be everywhere or for everything.
The main reason I'm pretty focused on TrueNAS is I want ZFS – I'm at a point where I don't have any faith in other (consumer-available) filesystems to work well enough with regards to data integrity. The secondary reason is that, while it's been fun and a great learning experience to muck around with FreeBSD and Linux distributions ranging from RedHat, Debian, Slackware, Gentoo and Arch over the previous decades, I want a base system that's "boring", has ZFS support baked in (and as a main feature rather than Just An Additional Patchset We Need To Manage), and can do a few easy containerized workloads.
I very much agree. Except even TrueNAS no longer believes you need BSD for proper ZFS.

Proxmox is Debian based and has extra ZFS boot support carefully added in: the company just loves it, even if it doesn't support scale-out or server fault tolerance like CEPH. But since they support CEPH and ZFS, you can manage and choose very easily what you put where and with backup management (with storage snapshots or VM suspends) being part of the GUI automation, you have even more options.

Proxmox delivers a true and vast superset of everything that TrueNAS does on Linux, except selling pre-configured hardware. I've known the company for probably almost 20 years, and level-headed people like Wendell and STH seem to have come to similar conclusions.

They used to be somewhat inferior in terms of automation to products like Nutanix, vSphere, RHV/oVirt, XenServer or XCP-ng, but their relative primitiveness helped them survive, while their quality control is top notch.

IMHO TrueNAS has become the Intel of NAS.
I've seen youtubers do "cute" NAS builds using mini PCs, with modifications that look way too flimsy – I was thinking more like moving the guts to a larger case, or at least doing some dremel + 3d-printing that's more structurally sane...
I got plenty of old cases and even a usable mini-tower can be had for €30. My DIY is strictly screws and plastic straps and I like my cases solid and somewhere where I don't see them.

These µ-server universal appliances are likely to be stuck in a cellar or attic somewhere, where the main challenge is survival with dust and general neglect: my kids are into gaming not computers.
But anyway, the NUCs and other mini-PCs all seem to be gimped in one way or another that gets them **close** to being nice, but miss the mark. And as soon as you move to regular components, regardless of whether you choose mini-ITX boards and low-end CPUs, the power budget explodes.
That's why I no longer try to get to 5 Watts at the wall, but accept 10-20 Watts idle at the wall when it means I can add HDDs or USB3 hardware and keep using a nice beQuiet 400 Watt Gold power supply I bought before GPU power consumption went through the roof.

I've played with PicoATX power supplies, but then in combination with the external 60/90/120 Watt bricks they didn't really beat the beQuiets at efficiency.

And nothing is worse than realizing after days that the stability problems you were hunting were due to power starvation.
There's some interesting Frankenstein boards on AliExpress, but that's a bit too much of a gamble for data storage... I want that system to be stable, and be serviceable (either through spare parts, or the ability to get a new similar-specced full replacement machine within a couple of days).

Apart from the Frankenstein boards, there are several mini-PCs (or custom NAS systems) that look sort of interesting... The UGreen flash NAS, the Asustor Flashstor, the recent Beelink ME mini... but they always have some ridiculous imbalance of capabilities/connectivity.
I might have shared your prejudice about "Frankenstein" boards a few years ago.

But when an Erying G660 offered an Alder Lake i7-12700H with a full Mini-ITX board for something like €350 a few years ago, I decided to simply risk it and give it a try.

It turned out to be a slight challenge because of the shim and its backplate they included to have the mobile chip fit a socket 1700 cooler, but the rest of it was just top quality. The only failure reports I found on the internet came when people tried to run it at 120 Watts PL1, when the design is based on 45 Watts.

And it's been the same with the Topton boards I've tried since: the hardware quality is really top notch, nothing Frankenstein or quality compromised that I could see.

Also, Aliexpress is quite relentless about customer experience over vendor happiness: if you're not happy and return the board, you'll get your money back, no problem at all. I've even had Aliexpress refund my money long before the vendor had any chance to receive the hardware.

You can see the pressure these hardware vendors have to live with, and customer support is eager, courteous and as helpful as they can be.

Aliexpress applies its algorithms to vendors as rigorously as it applies them to you. So if you return more than a fixed number of items per month (no matter whether justified), you'll be thrown off their platform.

BIOS updates are another matter: they just don't exist and if you're worried about Spectre/Meltdown/TPU fixes, don't go Chinese. I've also never downloaded drivers from those Chinese vendors, nor would I still buy mobile phones or Android boxes from them for security reasons.

My main prejudice today is that they don't seem to manage software as well as they design hardware.

But nobody sells enterprise class replacement part guarantees for €200 so you need to either pick your poison, or just buy an extra one as spare part... at that price.
 
If money was no object Xeon D comes into play, or maybe some Atom based boards.
I have one Xeon D-1541 given to me as corporate surplus, which I hold largely responsible for getting me on this µ-server train ride: I needed a total of three to implement fault tolerance with a proper quorum.

It's an 8 core Broadwell part from SuperMicro with 45 Watts of TDP, 64GB of ECC DDR4 and an ASmedia ILO/RSA adapter, but only Gbit Ethernet ports in my case.

These days it may actually struggle to beat an N350. And while it has run 24x7 for many years without issues, by now it's perhaps more likely to fail than a new part: electronics actually do decay, but Xeon-D prices never did.

The giant financial gap between those Xeons and the Atoms has been a constant thorn in my side, and AMD Pro APUs the evident solution ...that never quite came easy to obtain.
I was disappointed by what was available because everything has big compromises unless you get one of the NAS boards off of AliExpress. I don't know of any of those having ECC support though which ruled them out for me. In the end I just basically tossed up my hands and put together something that didn't use more power than the server which was being replaced (~95W).
Yep, I guess we're all looking for something very similar, and the market sure doesn't want to give it to us at our price.
 