It might not be too bad if I had a bunch of spinning rust in the system, but for a SATA SSD, two NVMe drives and a pretty idle workload, it's pretty horrible.
When I bought my Mini-ITX Atoms around 2016, €120 didn't buy you a lot of computing power or connectivity.
For €200 of current money, the capabilities of a Zen APU base system are so much greater that I find it hard to justify spending that same amount on Intel's newest N350 for much less functionality and power.
Improved technology should be able to give you the same functionality as the old Atom at only €50-70 today, but at those prices logistics eat all the revenue and nobody can sell a non-garbage product.
The bottom end of the market is getting very crowded.
Not sure why I would want Proxmox as a base OS – it's additional complexity, and while hypervisor-based VMs add minimal overhead, they still add overhead.
It's a result of even the minimal useful system having grown so much in capability that it seems far too wasteful to run just a file server on it. Of course, virtualization has been part of my job since 1999, so for me it's just the basic Lego building block. But it also allows me to deploy satellite infra to my kids as they move out, where fault tolerance can be managed at the core (my home-lab), while local caching and security are available to them where they live.
With the €200 base board, I can give them a full pfSense appliance, IAM, file service, groupware and plenty of other potential appliances or functionalities with a level of independence and stability that almost approaches my home-lab but from a single box.
The overhead for virtualization used to be mostly in I/O, but when [consumer] networks are 10Gbit/s at best, your bottlenecks are elsewhere, and paravirtualized drivers have mostly eliminated the overhead vs. containers. I actually used pass-through SATA mostly because it was so easy to do and eliminated even the theoretical overhead.
On the pfSense appliance, passing those Intel NICs through to the VM will most likely make a difference, because pfSense then uses the Intel offload blocks directly, and I don't have any other use for those 2.5Gbit ports anyway, since the main east-west connection will be 10Gbit.
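If anyone wants to script that rather than click through the GUI, it boils down to two `qm set` calls on the Proxmox host. A rough Python sketch, assuming IOMMU is enabled and the VM uses the q35 machine type; the VM id and PCI addresses are made up:

```python
# Sketch only: VM id 101 and the PCI addresses are placeholders; find the
# real ones with `lspci -nn | grep -i ethernet` on the Proxmox host.
import subprocess

def qm_set(vmid: str, *args: str) -> None:
    """Thin wrapper around Proxmox's `qm set` CLI."""
    subprocess.run(["qm", "set", vmid, *args], check=True)

# Hand both 2.5Gbit Intel NICs to the pfSense VM as raw PCIe devices, so the
# guest driver talks to the Intel offload blocks directly.
qm_set("101", "--hostpci0", "0000:02:00.0,pcie=1")
qm_set("101", "--hostpci1", "0000:03:00.0,pcie=1")
```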
In my case it will move a Kaby Lake i7-7700T (35 Watt TDP) appliance that has faithfully served me for many years into a VM, saving quite a few Watts without a negative performance impact.
Also, you really, really, REALLY only want to add storage on a dedicated PCIe passthrough device; anything else carries too high a risk of introducing silent data corruption. This would mean adding a PCIe HBA (costly, and high additional power usage), and either being confined to SATA storage or adding a *lot* of cost and power consumption for high-end SAS, or OCuLink/whatever NVMe/M.x/U.x-class stuff. It's also pretty wasteful dedicating right-sized VM resources to a TrueNAS VM.
I'm not sure I follow: there is no difference between passing the on-board SATA controller or any other plugged into PCIe...
Proxmox and the VMs run off NVMe; only HDDs and perhaps some SATA-SSD leftovers would run on the SATA controller, so the VM gets exclusive control, and the base Proxmox won't see the SATA controller nor the disks on it.
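For anyone trying the same, the only thing worth checking first is that the controller sits alone in its IOMMU group. A quick sketch of both steps; the controller address and VM id are examples, not gospel:

```python
# Sketch: verify the onboard SATA controller is isolated, then pass it to the
# storage VM so the host never touches the disks behind it. The address
# 0000:00:17.0 is typical for Intel onboard AHCI, but check `lspci -nn`.
import subprocess
from pathlib import Path

CTRL = "0000:00:17.0"   # placeholder PCI address
VMID = "100"            # placeholder VM id

# Everything in the same IOMMU group gets passed through together, so the
# controller should ideally be the only entry listed here.
group = Path(f"/sys/bus/pci/devices/{CTRL}/iommu_group/devices")
print("IOMMU group members:", sorted(p.name for p in group.iterdir()))

# Map the controller into the VM; while the VM runs, the host can't use it.
subprocess.run(["qm", "set", VMID, "--hostpci0", CTRL], check=True)
```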
And again, with paravirtualized drivers the I/O overhead for SATA-based block devices is so low, I'm not sure it makes much of a difference... with perhaps the exception of running the heavily tuned checksumming logic of ZFS or RAID on a virtual block device.
Because that runs fine-grained, minimal-block-sized accesses against the physical hardware, it is much like the low-level NIC access required to make pfSense sing with NIC offloading, and is thus a candidate for pass-through.
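To make the contrast concrete: for everything that doesn't do its own checksumming, I'd just hand the VM a paravirtualized disk and be done with it. Roughly like this, where the storage name, VM id and size are placeholders:

```python
# Sketch: the low-overhead default for ordinary VMs - a virtio-scsi disk on
# whatever storage the host manages, instead of raw hardware.
import subprocess

subprocess.run(
    ["qm", "set", "102",
     "--scsihw", "virtio-scsi-single",       # paravirtualized SCSI controller
     "--scsi0", "local-lvm:32,discard=on"],  # allocate a 32 GB volume on local-lvm
    check=True,
)
```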
And given that TrueNAS has been able to do containers and VMs for a while (even if it's an in-progress feature), I really don't see why anybody would want to do this. If you need a lot of compute workload, you should be doing that on a separate machine.
FreeBSD has supported chroot and jails for ages and I guess a hypervisor has been available for the longest time, too. But there is a big difference between having VMs and containers managed via a central GUI across a farm of machines and potentially even via agents (like on vSphere or oVirt) and a local shell interface.
What I've done professionally over the last decades is how hyperscalers started, too. I've consolidated functionalities from distinct servers on a virtualized host and I've then spread those workloads to cover fault-tolerance and scale.
And in home-use, lab or not, scale hasn't justified distinct machines for a long time, while fault tolerance doesn't need to be everywhere or for everything.
The main reason I'm pretty focused on TrueNAS is I want ZFS – I'm at a point where I don't have any faith in other (consumer-available) filesystems to work well enough with regards to data integrity. The secondary reason is that, while it's been fun and a great learning experience to muck around with FreeBSD and Linux distributions ranging from RedHat and Debian to Slackware, Gentoo and Arch over the previous decades, I want a base system that's "boring", has ZFS support baked in (and as a main feature rather than Just An Additional Patchset We Need To Manage), and can do a few easy containerized workloads.
I very much agree. Except even TrueNAS no longer believes you need BSD for proper ZFS.
Proxmox is Debian-based and has extra ZFS boot support carefully added in: the company just loves ZFS, even if it doesn't offer scale-out or server fault tolerance the way Ceph does. But since they support both Ceph and ZFS, you can manage and choose very easily what you put where, and with backup management (with storage snapshots or VM suspends) being part of the GUI automation, you have even more options.
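And the backup part is just as scriptable as it is clickable; something like this is the whole job, with the VM id and the backup storage name being placeholders:

```python
# Sketch: a snapshot-mode backup of VM 100 to a hypothetical "backup-nfs"
# storage - the same thing the GUI's scheduled backup jobs run.
import subprocess

subprocess.run(
    ["vzdump", "100",
     "--mode", "snapshot",      # storage snapshot instead of suspending the VM
     "--compress", "zstd",
     "--storage", "backup-nfs"],
    check=True,
)
```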
Proxmox delivers a true and vast superset of everything that TrueNAS does on Linux, except sell pre-configured hardware. I've known the company for probably almost 20 years, and level-headed people like Wendell and STH seem to have come to similar conclusions.
They used to be somewhat inferior in terms of automation to products like Nutanix, vSphere, RHV/oVirt, XenServer or XCP-ng, but their relative primitiveness helped them survive, while their quality control is top notch.
IMHO TrueNAS has become the Intel of NAS.
I've seen youtubers do "cute" NAS builds using mini PCs, with modifications that look way too flimsy – I was thinking more like moving the guts to a larger case, or at least doing some Dremel + 3D-printing that's more structurally sane...
I got plenty of old cases and even a usable mini-tower can be had for €30. My DIY is strictly screws and plastic straps and I like my cases solid and somewhere where I don't see them.
These µ-server universal appliances are likely to be stuck in a cellar or attic somewhere, where the main challenge is surviving dust and general neglect: my kids are into gaming, not computers.
But anyway, the NUCs and other mini-PCs all seem to be gimped in one way or another that gets them **close** to being nice, but they miss the mark. And as soon as you move to regular components, even with Mini-ITX boards and low-end CPUs, the power budget explodes.
That's why I no longer try to get to 5 Watts at the wall, but accept 10-20 Watts idle at the wall when it means I can add HDDs or USB3 hardware and keep using a nice beQuiet 400 Watt Gold power supply I bought before GPU power consumption went through the roof.
I've played with PicoATX power supplies, but in combination with the external 60/90/120 Watt bricks they didn't really beat the beQuiets at efficiency.
And nothing is worse than realizing after days that the stability problems you were hunting were due to power starvation.
There are some interesting Frankenstein boards on AliExpress, but that's a bit too much of a gamble for data storage... I want that system to be stable and serviceable (either through spare parts, or the ability to get a similarly specced full replacement machine within a couple of days).
Apart from the Frankenstein boards, there are several mini-PCs (or custom NAS systems) that look sort of interesting... The UGreen flash NAS, the Asustor Flashstor, the recent Beelink ME mini... but they always have some ridiculous imbalance of capabilities/connectivity.
I might have shared your prejudice about "Frankenstein" boards a few years ago.
But when an Erying G660 offered an Alder Lake i7-12700H with a full Mini-ITX board for something like €350 a few years ago, I decided to simply risk it and give it a try.
It turned out to be a slight challenge because of the shim and its backplate they included to make the mobile chip fit a socket 1700 cooler, but the rest of it was just top quality. The only failure reports I found on the internet came from people who tried to run it at 120 Watts PL1, when the design is based on 45 Watts.
And it's been the same with the Topton boards I've tried since: the hardware quality is really top notch, nothing Frankenstein or quality compromised that I could see.
Also, AliExpress is quite relentless about customer experience over vendor happiness: if you're not happy and return the board, you'll get your money back, no problem at all. I've even had AliExpress refund my money long before the vendor had any chance to receive the hardware.
You can see the pressure these hardware vendors have to live with, and customer support is eager, courteous and as helpful as they can be.
AliExpress applies its algorithms to them as rigorously as they'll apply them to you. So if you return more than a fixed number of items per month (no matter whether justified), you'll be thrown off the platform.
BIOS updates are another matter: they just don't exist, and if you're worried about Spectre/Meltdown/TPU fixes, don't go Chinese. I've also never downloaded drivers from those Chinese vendors, nor would I still buy mobile phones or Android boxes from them, for security reasons.
My main prejudice today is that they don't seem to manage software as well as they design hardware.
But nobody sells enterprise-class replacement-part guarantees for €200, so you need to either pick your poison or just buy an extra one as a spare... at that price.