Question Expanding SATA Ports on Asus Prime B660M-A WiFi D4 for a Mini NAS Build

Mar 27, 2025
Hello, I'm building a mini NAS using a Fractal Node 804, which supports Micro ATX and Mini ITX motherboards. I've chosen the Asus Prime B660M-A WiFi D4, as it seems to fit my build.

Some other info about the build:
  • I'll be using a 12th-gen Intel CPU for transcoding, so I won’t need a dedicated GPU.
  • The motherboard has only 4 SATA ports, which isn’t enough.
  • I’d like to expand storage to at least 10 SATA ports.
  • In the future, I might also need to add a network card (not sure if this is the right term). To be honest, I'm not sure if I really need it, since I plan to only stream Plex and store some data at most.
According to the official manual of the motherboard, it has:
  • 1 x PCIe 4.0/3.0 x16 slot (Intel B660 Chipset)
  • 1 x PCIe 3.0 x16 slot (supports x4 mode)
  • 1 x PCIe 3.0 x16 slot (supports x1 mode)

Based on this info I would like to use a SAS card to expand the SATA ports; can you suggest a compatible one, please? Also, do I need a network card?
I'm new to this, so I might have said something dumb; forgive my ignorance.
 
Fair questions - do not feel dumb for asking.

Having multiple SATA drives in a NAS is common.

However, as I understand your plans and requirements you may have 10 drives.

Drives: make, model, capacity, configuration?

Keep the power requirements in mind.....

The drives in question are these WD Ultrastar SATA Enterprise HDDs, 7200 RPM, 14 TB. They are refurbished. Here is the link: WD 14TB.
TBH I haven't thought about the configuration; do you have suggestions?
For the PSU, I was thinking of a 500W from EVGA.
 
Based on this info I would like to use a SAS card to expand the SATA ports; can you suggest a compatible one, please?
The drives in question are these WD Ultrastar SATA Enterprise HDD 7200 RPM, 14 TB.
Why would you need a SAS controller? Are you going to use SAS HDDs?
For SATA drives you do not need a SAS card.
A regular SATA controller card will do everything you need.
Also do I need a network card?
What for? The motherboard has an integrated network card.
Why would you need multiple network cards?
 
Why would you need a SAS controller? Are you going to use SAS HDDs?
For SATA drives you do not need a SAS card.
A regular SATA controller card will do everything you need.

What for? The motherboard has an integrated network card.
Why would you need multiple network cards?
Unfortunately, it seems my knowledge is very lacking, since it's based on what I read online.
I would like to expand the number of SATA ports to use for 3.5" HDDs. Online, people recommended a SAS expander for this; I was asking since I wasn't sure about my case.
Could you suggest a SATA controller card?
 
@Nahiri,

It's good that you are researching, reading, and asking questions.

Continue to do so.

My question about configuration was asked to simply learn more about the specific requirements your proposed NAS build is intended to support.

One issue being RAID. Likely not needed unless you specifically know otherwise.

In many cases, more information is needed. How things get done is indeed important.

It is also important to understand what is to be done: the requirements.

Your project is ambitious just on its own merits, even if only for learning and experimental purposes.

Building for use in a work environment is much more demanding.

Yes: if the proposed NAS is going to simply connect to a network then all you need is the motherboard's network card.

Power: that is a bit more involved. Very likely that, as I understand the above HDD specs, you will need a quality PSU that can provide more than 500 watts. Remember the PSU must support all installed components of the proposed build. There are online calculators that can help you size the PSU. Plan for growth.

SAS controller: You mentioned needing up to 10 SATA ports. Will all 10 drives be 14TB HDDs?

What case are you planning to use? Might get a bit tight with respect to putting it all together, and later with respect to airflow and cooling.

Consider starting a bit smaller and simpler. Then expand: 4 drives to 6 drives to 8 drives to 10 drives. Some gradual increase that you can pre-plan and work towards carrying out.

Give it all some thought based on what you have learned. You are not bound by your initial post, plans, configurations, and questions.

You can revise as necessary and move forward from there. Good project I think.

Do not be in a rush: take your time, keep reading, researching, and planning. Much easier to change a plan early on.

Difficult to anticipate what all might happen. Indeed, you do not want to get boxed in at the cost of lost money, time, and effort.

For the most part your plans are much as would be expected. The "devil is in the details".

And details matter.
 
Based on this info I would like to use a SAS card to expand the SATA ports; can you suggest a compatible one, please?

An excellent idea. I have a large number of PCs and TrueNAS Core ZFS servers fitted with LSI SAS controller cards. They work equally well with SATA and SAS drives, but you can't mix SAS and SATA on the same card. I also use them with an external LTO4 SAS tape drive for backup to 800GB tapes. A very economical and professional way to expand your mobo's drive capability.

Here is an example of a suitable card (9211-8i) complete with a couple of SFF-8087 forward breakout cables. Note this card comes with a low profile bracket in addition to the full height bracket. The -8i means it can handle 8 drives, via two SFF-8087 ports at the front of the card. You can get -4i, -12i and -16i versions for fewer or more drives, but the -12i and -16i tend to be more expensive.

https://www.amazon.com/SVNXINGTII-9211-8i-6Gbps-HBA-LSI/dp/B0BY8YD1JW




One very important thing to note is these cards are available with two different versions of firmware.

1). IR (RAID)
2). IT (Initiator Target)

You'll probably need IT-mode firmware, which passes all commands and responses transparently between the controller and the drives. In the advert for the card above, they mention IT Mode and ZFS, which is the type I use.

ZFS is software RAID and needs to "see" the drives to work properly. I use TrueNAS Core. N.B. This OS replaces Windows, but it might be worth a look if you want a professional server OS. Core is free.
https://www.truenas.com/truenas-core/

You can buy IR (RAID) cards and re-flash them to IT (Initiator Target), but this can be difficult. It took me two days to re-flash a couple of Dell H200 cards from IR to IT. Easier cards can be re-flashed in 5 minutes. Best bet is to buy IT cards, unless you want proprietary hardware RAID.

There are many other suitable cards you could use. I prefer genuine LSI cards (second hand ex-server "pulls") as opposed to Chinese clones sold on AliExpress, Amazon and eBay. I doubt you'll notice any difference in a home server system if you buy a cloned card, purporting to be LSI.
https://forums.servethehome.com/ind...and-hba-complete-listing-plus-oem-models.599/

My other reason for using SAS cards is bandwidth. A 10-drive array of 14TB drives is likely to swamp a cheap and cheerful plug-in SATA controller card, especially if it's only PCIe x1. Admittedly the older LSI cards are running PCIe Gen.2 or Gen.3, but they cope well with large arrays of hard disks in server systems.
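To put rough numbers on that bandwidth argument, here is a quick sanity check. The per-drive and per-lane figures are ballpark assumptions, not measurements:

```python
# Rough bandwidth check: a 10-drive array vs. the PCIe link feeding the card.
# All figures are ballpark assumptions.

HDD_SEQ_MB_S = 250                   # assumed sequential speed of a 14TB 7200rpm HDD
PCIE_LANE_MB_S = {2: 500, 3: 985}    # approx. usable MB/s per lane, Gen2/Gen3

def link_bandwidth(gen: int, lanes: int) -> int:
    """Approximate usable bandwidth of a PCIe link in MB/s."""
    return PCIE_LANE_MB_S[gen] * lanes

array_peak = 10 * HDD_SEQ_MB_S       # 2500 MB/s if all drives stream at once

print(array_peak)                    # 2500
print(link_bandwidth(3, 1))          # 985  -> a x1 SATA card bottlenecks hard
print(link_bandwidth(2, 8))          # 4000 -> an older x8 LSI card copes fine
```

Even a PCIe Gen2 x8 card comfortably exceeds what ten spinning disks can deliver, which matches the point above about cheap x1 SATA cards being swamped.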


Also do I need a network card?
It's not a bad idea to fit another network card if your motherboard is only equipped with Gigabit Ethernet. Even with fairly old hard disks, it's easy to saturate a 1Gb/s network connection during data transfers. If your mobo has 2.5Gb/s or 5Gb/s Ethernet, that should be enough for most hard disk arrays.
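As a back-of-the-envelope illustration of why 1Gb/s saturates so easily (the drive speed here is an assumed round number, not a benchmark):

```python
# Why Gigabit Ethernet is easy to saturate with a single modern HDD.
# The 200 MB/s drive figure is an assumption, not a benchmark.

GIGABIT_MB_S = 1000 / 8 * 0.94   # ~117 MB/s usable after protocol overhead
HDD_SEQ_MB_S = 200               # assumed sequential read of one large HDD

print(HDD_SEQ_MB_S > GIGABIT_MB_S)   # True: one drive already fills the link

# Hours to move 1 TB (1,000,000 MB) at various usable link speeds
for name, mb_s in [("1GbE", 117), ("2.5GbE", 292), ("10GbE", 1170)]:
    print(name, round(1_000_000 / mb_s / 3600, 1))
```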

I've been fitting 10Gb/s XG-C100C Ethernet and 10Gb/s SFP+ Solarflare 5122 fibre optic NICs in my systems since 2018, to transfer data at high speed between NVMe drives, in addition to my slower ZFS hard disk arrays. You will of course need a spare slot in each PC plus a suitable high speed switch, but you can get away with a direct cable or fibre connection between two machines, if you program their IP addresses manually. If 10Gb/s is insufficient, you can get 25, 40 and 100Gb/s NICs.

https://www.amazon.com/ASUS-XG-C100C-Network-Adapter-Single/dp/B072N84DG6


https://www.amazon.com/Solarflare-SFN5122F-Dual-Port-Enterprise-Adapter/dp/B00U4LJD26
 
I’ve chosen the Asus Prime B660M-A WiFi D4, as it seems to fit my build.
A couple of points. The B660M-A WiFi D4 is a bit "light" on expansion slots, but obviously it fits in the 804 case.

https://www.asus.com/motherboards-components/motherboards/prime/prime-b660m-a-wifi-d4/techspec/
Intel® 14th & 13th & 12th Gen Processors
1 x PCIe 4.0/3.0 x16 slot
Intel® B660 Chipset *
1 x PCIe 3.0 x16 slot (supports x4 mode)
1 x PCIe 3.0 x16 slot (supports x1 mode)

If you install a GPU in PCIe slot 1 (the obvious choice) with x16 lanes, any card (including an LSI controller) will be restricted to x4 lanes in the second slot. It's perfectly OK to run an LSI card with x8 lane capability in a x4 slot, but you lose half the bandwidth.

The third slot is only x1, so if you fit a faster NIC, look for a card with a x1 edge connector. This x1 slot should be fine for 2.5Gb/s and possibly 5Gb/s NICs, but most 10Gb/s NICs need PCIe x4 slots to operate at full speed.
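A rough way to check whether a NIC fits a slot is to compare its line rate against per-lane PCIe bandwidth; the 985 MB/s Gen3 per-lane figure is the usual approximation:

```python
# Does a NIC fit the slot's bandwidth? Rough estimate using PCIe 3.0 lanes.

PCIE3_LANE_MB_S = 985       # ~985 MB/s usable per PCIe 3.0 lane (approx.)

def nic_fits(link_gb_s: float, lanes: int) -> bool:
    """True if the slot can roughly carry the NIC's full line rate."""
    nic_mb_s = link_gb_s * 1000 / 8      # line rate in MB/s, ignoring overhead
    return lanes * PCIE3_LANE_MB_S >= nic_mb_s

print(nic_fits(2.5, 1))   # True  - 2.5GbE needs ~313 MB/s, x1 gives ~985
print(nic_fits(5, 1))     # True  - 5GbE needs ~625 MB/s, still within x1
print(nic_fits(10, 1))    # False - 10GbE needs ~1250 MB/s, more than x1 Gen3
print(nic_fits(10, 4))    # True  - hence the advice to use a x4 slot
```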

I prefer full size ATX (not mini or micro ATX) mobos because they often come with additional PCIe slots, but they won't fit in the 804 case. I use the Fractal Design R5 on some of my server builds. I dispense with a separate GPU card and use the integrated GPU found in some CPUs. That leaves the first x16 slot free for a x8 lane LSI HBA card, so I can use the full bandwidth available.

You should be fine with the B660M board, provided you don't go mad with expansion.
 
An excellent idea. I have a large number of PCs and TrueNAS Core ZFS servers fitted with LSI SAS controller cards. They work equally well with SATA and SAS drives, but you can't mix SAS and SATA on the same card. I also use them with an external LTO4 SAS tape drive for backup to 800GB tapes. A very economical and professional way to expand your mobo's drive capability.

Here is an example of a suitable card (9211-8i) complete with a couple of SFF-8087 forward breakout cables. Note this card comes with a low profile bracket in addition to the full height bracket. The -8i means it can handle 8 drives, via two SFF-8087 ports at the front of the card. You can get -4i, -12i and -16i versions for fewer or more drives, but the -12i and -16i tend to be more expensive.

https://www.amazon.com/SVNXINGTII-9211-8i-6Gbps-HBA-LSI/dp/B0BY8YD1JW




One very important thing to note is these cards are available with two different versions of firmware.

1). IR (RAID)
2). IT (Initiator Target)

You'll probably need IT-mode firmware, which passes all commands and responses transparently between the controller and the drives. In the advert for the card above, they mention IT Mode and ZFS, which is the type I use.

ZFS is software RAID and needs to "see" the drives to work properly. I use TrueNAS Core. N.B. This OS replaces Windows, but it might be worth a look if you want a professional server OS. Core is free.
https://www.truenas.com/truenas-core/

You can buy IR (RAID) cards and re-flash them to IT (Initiator Target), but this can be difficult. It took me two days to re-flash a couple of Dell H200 cards from IR to IT. Easier cards can be re-flashed in 5 minutes. Best bet is to buy IT cards, unless you want proprietary hardware RAID.

There are many other suitable cards you could use. I prefer genuine LSI cards (second hand ex-server "pulls") as opposed to Chinese clones sold on AliExpress, Amazon and eBay. I doubt you'll notice any difference in a home server system if you buy a cloned card, purporting to be LSI.
https://forums.servethehome.com/ind...and-hba-complete-listing-plus-oem-models.599/

My other reason for using SAS cards is bandwidth. A 10-drive array of 14TB drives is likely to swamp a cheap and cheerful plug-in SATA controller card, especially if it's only PCIe x1. Admittedly the older LSI cards are running PCIe Gen.2 or Gen.3, but they cope well with large arrays of hard disks in server systems.
This was a very interesting and helpful answer.
I'll probably go for TrueNAS. I already bought the case this November since it was 50% off, but I should have done more research beforehand; that's why I'm going for the Asus Prime B660M-A.
Another question: I wanted to buy an Intel CPU so I wouldn't have to buy a GPU. Is this one OK just for transcoding?
https://amzn.eu/d/8VNrpxq
That said, I would have put the LSI card you suggested in the PCIe slot dedicated to the GPU. The problem is I live in Italy, so this is the only card available here; otherwise I have to try my luck on AliExpress:
https://amzn.eu/d/cbtqYCI
Last thing: since the HDDs are not SAS, are they compatible with the LSI HBA?
 
@Ralston18

Yes: if the proposed NAS is going to simply connect to a network then all you need is the motherboard's network card.

Power: that is a bit more involved. Very likely that, as I understand the above HDD specs, you will need a quality PSU that can provide more than 500 watts. Remember the PSU must support all installed components of the proposed build. There are online calculators that can help you size the PSU. Plan for growth.

SAS controller: You mentioned needing up to 10 SATA ports. Will all 10 drives be 14TB HDD's?

What case are you planning to use? Might get a tight with respect to putting all together and then later with respect to airflows and cooling.

Consider starting a bit smaller and simpler. Then expand: 4 drives to 6 drives to 8 drives to 10 drives. Some gradual increase that you can pre-plan and work towards carrying out.

So this is the build I was thinking of going for:

  1. Case: Fractal Design Node 804
  2. Mobo: Asus Prime B660M-A WiFi D4
  3. CPU: Intel Core i3-13100
  4. PSU: be quiet! Pure Power 12 M 650W
  5. RAM: Crucial Pro RAM DDR4 32GB Kit (2x16GB)
  6. HDD: WD 14TB
  7. SSD: WD_BLACK SN770 2TB M.2 2280
  8. Heatsink from Noctua: I need to find one that is low profile and compatible
  9. LSI HBA, as suggested by the user Misgar: SAS 9207-8i (for mysterious reasons the 9211-8i is €300; this is the only HBA under €100 that is not RAID-configured. I'll see if I can find a trusted seller on AliExpress and try my luck)
I also have 6 fans from be quiet! that I bought by mistake because I didn't notice they were non-PWM, so the case should be very well ventilated.
If there are points I can improve, I'm open to suggestions. I can upgrade the PSU to 700W, but I'll run a test to see how much this setup consumes.
 
Overall, making progress now....

How many initial drives (HDD and SSD)?

As for fans: more fans are not always better; they could make things worse.

Fan placement and airflows, and airflow directions need to be considered.

In any case, the overall concept and the requirements of your NAS build are coming together.

Again, there may be other ideas and suggestions.

And likely some trade-offs involved.

As for power testing - yes test. Keep in mind that the PSU will need to support periods of peak demands so test accordingly.
 
I'll probably go for TrueNAS.
I suggest you read up about TrueNAS to see if it's suitable for your needs. Although it's a sophisticated storage system, it can take a while to puzzle out how to set up your array, vdev and sharing options, if you're unfamiliar with the OS. Luckily there are numerous guides.
https://www.truenas.com/truenas-core/
https://www.wundertech.net/how-to-install-plex-on-truenas/

I run RAID-Z2 which means any two drives can fail and (in theory) your data remains intact. In practice, you may encounter problems if one of the remaining drives contains hidden problems during "resilvering".

With an 8 disk array in RAID-Z2, you end up with 6 disk capacity, i.e. with 8 x 14TB (112TB), you'll only get 6 x 14TB (84TB) for your files. You can also run RAID-Z1 (single drive redundancy) or RAID-Z3 (triple drive redundancy). 10 disks in RAID-Z2 is better, if you have the room in the case.
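The capacity arithmetic above is simple enough to sketch. This ignores ZFS metadata/padding overhead and the TB vs TiB difference, so treat it as a rough guide:

```python
# Usable capacity of a RAID-Z vdev: (disks - parity) * disk_size.
# Simplified: ignores ZFS metadata/padding overhead and TB-vs-TiB differences.

def raidz_usable_tb(disks: int, disk_tb: float, parity: int) -> float:
    """Rough usable capacity of a single RAID-Z vdev in TB."""
    if disks <= parity:
        raise ValueError("need more disks than parity drives")
    return (disks - parity) * disk_tb

print(raidz_usable_tb(8, 14, parity=2))    # 84  -> matches the 8-disk example
print(raidz_usable_tb(10, 14, parity=2))   # 112 -> why 10 disks in Z2 is better
print(raidz_usable_tb(10, 14, parity=3))   # 98  -> RAID-Z3, triple redundancy
```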

Please remember that RAID is not a backup. You need copies of all important files saved on a least two other systems or storage media, with one copy off-site if possible.

I wanted to buy an Intel CPU so I wouldn't have to buy a GPU; is this one OK just for transcoding?
TrueNAS Core runs on virtually any AMD or Intel CPU going back to around 2010. You don't need a large SSD either. I have not run any form of transcoding in TrueNAS so I can't answer your question, but the more cores the merrier when I'm rendering video in Windows. Buy what you can afford.

For a basic TrueNAS Core installation you need 8GB RAM and a 16GB SSD.
https://www.truenas.com/docs/core/13.0/gettingstarted/corehardwareguide/

However, TrueNAS Core "loves" RAM and the minimum recommendation is 16GB. Even this might not be enough, when you start running "plugins" and "jails". If you fit 2 x 16GB (32GB), any spare memory will all be swallowed up by the cache in TrueNAS.

There's also the thorny question of ECC versus non-ECC RAM. The purists would argue you must use ECC RAM to reduce the chance of data corruption. I have two HP servers with 60GB and 64GB ECC RAM respectively, plus two desktop PCs with 16GB and 32GB non-ECC. All four systems are running TrueNAS Core.

A 2TB SSD might be overkill for TrueNAS. As already mentioned, the minimum SSD spec is only 16GB. In the past I've booted TrueNAS from 16GB Kingston USB flash drives, but have moved over to 32GB mSATA drives in the HP servers. You'll have to research requirements for running Plex, etc., so a 2TB might be OK, but I suspect 500GB will be more than adequate.

I live in Italy, so this is the only card available here; otherwise I have to try my luck on AliExpress:
That Amazon card is a bit pricey, especially if it's a clone and not a genuine LSI card. Do you have eBay in Italy? I might be inclined to try AliExpress. I've been using them to buy things for the last three months and delivery is usually 7 to 10 days, plus they're cheap. I'm not sure about the Italian postal network with overseas deliveries, but it might be worth the risk.

Last thing: since the HDDs are not SAS, are they compatible with the LSI HBA?
Yes, you can run SATA drives on LSI SAS HBA cards. If you do buy SAS drives instead of SATA, you'll need different forward breakout cables. The power and data ports on a SAS drive are "bridged" by a lump of solid plastic. You can plug a SAS dual power/data cable into a SATA drive, but you cannot plug SATA power/data cables in SAS drives. See below.

[image: SAS vs SATA connector comparison]


I should point out that TrueNAS, unraid and similar OS may not be the best solution for your needs, but with a working LSI IT-mode HBA, you can play around with multiple drives and different software. It's all good fun.

I can upgrade the PSU to 700W, but I'll run a test to see how much this setup consumes.
Budget on around 10W per hard drive, but remember the motor startup currents can be high, e.g up to 2A each on the +12V rail. Eight drives powering on at the same time could potentially pull 16A at 12V for a short time. Professional multi-disk servers spin up the drives consecutively with a short time delay between each drive. I've not had any power problems with 10 hard disks in a desktop system. My HP servers have dual redundant PSUs and are designed to run up to 24 x 2.5" hard disks.

I'll start with 1 M.2 SSD to install the OS and 3 HDDs.
You cannot add more disks to a TrueNAS array and retain the data. It's wipe and start again when you add drives. No great problem if you're experimenting, but a pain if the array contains TB of data.

I check all drives with a lengthy Hard Disk Sentinel surface scan in Windows before installation in a TrueNAS array. On an 8TB drive this takes around 10 hours for a simple read scan which checks every sector thoroughly.
https://www.hdsentinel.com/help/en/61_surfacetest.html

With 14TB drives, you're looking at roughly 20 hours per drive for a read test and 40 hours for a write-then-read test. Some people "scrub" their disks with long SMART scans for a week before building an array. It's up to you, but with second hand (ex-server pulls) I like to check thoroughly for disk errors. It's not worth risking important data on untested drives, even if you have other backups.
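Those scan times follow directly from capacity divided by sustained read speed. The ~220 MB/s average used here is an assumption back-calculated from the 8TB/10-hour figure above:

```python
# Rough surface-scan duration: capacity / sustained read speed.
# The 220 MB/s average is an assumption inferred from "8TB in ~10 hours".

def scan_hours(capacity_tb: float, mb_per_s: float = 220, passes: int = 1) -> float:
    """Estimated hours for a full surface scan (1 TB = 1,000,000 MB)."""
    return capacity_tb * 1_000_000 / mb_per_s / 3600 * passes

print(round(scan_hours(8)))             # 10 -> matches the 8TB figure above
print(round(scan_hours(14)))            # 18 -> read-only scan of a 14TB drive
print(round(scan_hours(14, passes=2)))  # 35 -> write pass then read pass
```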

If a surface scan shows a result like the one below, use the drive as a door stop.

[image: Hard Disk Sentinel surface map full of damaged sectors]
 
I suggest you read up about TrueNAS to see if it's suitable for your needs. Although it's a sophisticated storage system, it can take a while to puzzle out how to set up your array, vdev and sharing options, if you're unfamiliar with the OS. Luckily there are numerous guides.
https://www.truenas.com/truenas-core/
https://www.wundertech.net/how-to-install-plex-on-truenas/

You cannot add more disks to a TrueNAS array and retain the data. It's wipe and start again when you add drives. No great problem if you're experimenting, but a pain if the array contains TB of data.

I check all drives with a lengthy Hard Disk Sentinel surface scan in Windows before installation in a TrueNAS array. On an 8TB drive this takes around 10 hours for a simple read scan which checks every sector thoroughly.
https://www.hdsentinel.com/help/en/61_surfacetest.html

With 14TB drives, you're looking at roughly 20 hours per drive for a read test and 40 hours for a write-then-read test. Some people "scrub" their disks with long SMART scans for a week before building an array. It's up to you, but with second hand (ex-server pulls) I like to check thoroughly for disk errors. It's not worth risking important data on untested drives, even if you have other backups.

I'll read all the documentation about TrueNAS and see if it's suitable for me. I'll also check Unraid to see if it's more suitable and whether it's OK to add more HDDs to the array, but I don't really like the idea of the OS always booting from a USB drive.
Also, wouldn't it be better to start this as a Windows PC? Yes, I know it wasn't the plan, but if I start this server with, say, a 1TB M.2 SSD with Windows installed, and add a maximum of 2 HDDs (so 28TB in total, holding only TV series and movies I can afford to lose), then when I actually have the money for all the HDDs (8 × €180 = €1440, for a total of 10 HDDs), only then will I transition to TrueNAS and make the final array. I believe I could afford all the HDDs over a 6-8 month period without hurting the bank.

Please remember that RAID is not a backup. You need copies of all important files saved on a least two other systems or storage media, with one copy off-site if possible.
I'll take this into consideration and make some backups.
For a basic TrueNAS Core installation you need 8GB RAM and a 16GB SSD.
https://www.truenas.com/docs/core/13.0/gettingstarted/corehardwareguide/

However, TrueNAS Core "loves" RAM and the minimum recommendation is 16GB. Even this might not be enough, when you start running "plugins" and "jails". If you fit 2 x 16GB (32GB), any spare memory will all be swallowed up by the cache in TrueNAS.

There's also the thorny question of ECC versus non-ECC RAM. The purists would argue you must use ECC RAM to reduce the chance of data corruption. I have two HP servers with 60GB and 64GB ECC RAM respectively, plus two desktop PCs with 16GB and 32GB non-ECC. All four systems are running TrueNAS Core.


A 2TB SSD might be overkill for TrueNAS. As already mentioned, the minimum SSD spec is only 16GB. In the past I've booted TrueNAS from 16GB Kingston USB flash drives, but have moved over to 32GB mSATA drives in the HP servers. You'll have to research requirements for running Plex, etc., so a 2TB might be OK, but I suspect 500GB will be more than adequate.
I'll settle for a 1TB SSD, and at the moment I can afford 64GB of RAM, since it's only €100 and a one-time purchase.

That Amazon card is a bit pricey, especially if it's a clone and not a genuine LSI card. Do you have eBay in Italy? I might be inclined to try AliExpress. I've been using them to buy things for the last three months and delivery is usually 7 to 10 days, plus they're cheap. I'm not sure about the Italian postal network with overseas deliveries, but it might be worth the risk.
I'll see if I can find a trusted seller on eBay with some decent prices; otherwise I seem to have found a trusted seller on AliExpress which has stock in Germany, so the import fee won't be as high as importing from the USA.
Obviously I will do some more research before pulling the trigger on this.

Yes, you can run SATA drives on LSI SAS HBA cards. If you do buy SAS drives instead of SATA, you'll need different forward breakout cables. The power and data ports on a SAS drive are "bridged" by a lump of solid plastic. You can plug a SAS dual power/data cable into a SATA drive, but you cannot plug SATA power/data cables in SAS drives. See below.

[image: SAS vs SATA connector comparison]


I should point out that TrueNAS, unraid and similar OS may not be the best solution for your needs, but with a working LSI IT-mode HBA, you can play around with multiple drives and different software. It's all good fun.
This is good information, as I didn't know about it. Thank you, I'll bear it in mind when ordering the cables.

Budget on around 10W per hard drive, but remember the motor startup currents can be high, e.g up to 2A each on the +12V rail. Eight drives powering on at the same time could potentially pull 16A at 12V for a short time. Professional multi-disk servers spin up the drives consecutively with a short time delay between each drive. I've not had any power problems with 10 hard disks in a desktop system. My HP servers have dual redundant PSUs and are designed to run up to 24 x 2.5" hard disks.
So I made an estimate on PCPartPicker with all the parts minus the HDDs and it says 240W.
10 HDD × 2A × 12V = 240W
For a total of 480W, I might settle for 600W or 650W to be really sure; it's also just €20 more than the 500W Gold.
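The same estimate in code form, using the thread's own assumed figures; a proper PSU calculator will be more precise:

```python
# PSU sizing sketch using the figures discussed in this thread (assumptions).

SYSTEM_W = 240            # PCPartPicker estimate, everything except the HDDs
HDD_COUNT = 10
HDD_SPINUP_A = 2.0        # assumed worst-case startup current per drive, +12V

spinup_w = HDD_COUNT * HDD_SPINUP_A * 12   # 240 W, briefly, at power-on
peak_w = SYSTEM_W + spinup_w               # 480 W worst case

print(peak_w)                              # 480
print(round(peak_w * 1.3))                 # 624 -> ~30% headroom, 650W fits
```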

As always thanks for the advice.
 
@Ralston18



So this is the build I was thinking of going for:

  1. Case: Fractal Design Node 804
  2. Mobo: Asus Prime B660M-A WiFi D4
  3. CPU: Intel Core i3-13100
  4. PSU: be quiet! Pure Power 12 M 650W
  5. RAM: Crucial Pro RAM DDR4 32GB Kit (2x16GB)
  6. HDD: WD 14TB
  7. SSD: WD_BLACK SN770 2TB M.2 2280
  8. Heatsink from Noctua: I need to find one that is low profile and compatible
  9. LSI HBA, as suggested by the user Misgar: SAS 9207-8i (for mysterious reasons the 9211-8i is €300; this is the only HBA under €100 that is not RAID-configured. I'll see if I can find a trusted seller on AliExpress and try my luck)
I also have 6 fans from be quiet! that I bought by mistake because I didn't notice they were non-PWM, so the case should be very well ventilated.
If there are points I can improve, I'm open to suggestions. I can upgrade the PSU to 700W, but I'll run a test to see how much this setup consumes.
Look on eBay for an LSI 9207-8i; they run around $25-40 for IT-mode cards. You can then pick up cheap SFF-8087 to SATA cables off Amazon.


Other than my LSI MegaRAID 16i card, I've been able to pick up an LSI 8i card and cables for under $100.
 
Check number of available sata power connectors on your chosen PSU model.
So you don't need to use additional cable adapters.
Thanks for the advice. So I checked the PSU that I linked before and it has 3 SATA power cables, which amounts to 3 × 3 = 9 HDDs.
I found this MSI MAG A650GL 650W 80 Plus Gold PSU; it has 4 SATA power connectors per cable but only 2 cables. I recently (6 months ago) bought this PSU for my old PC, so I actually have the other 2 cables; I'll check to see if they are 100% compatible.
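The connector count is just multiplication, but it is worth writing down because it caps the drive count regardless of how many SATA data ports you have. The cable and connector counts here are the figures quoted above, so treat them as assumptions:

```python
# How many drives a PSU's SATA power cabling can feed.
# Cable/connector counts are the ones quoted in this thread (assumptions).

def max_drives(cables: int, connectors_per_cable: int) -> int:
    """Drive count supported by the PSU's SATA power leads."""
    return cables * connectors_per_cable

print(max_drives(3, 3))   # 9  - three cables with three connectors each
print(max_drives(4, 3))   # 12 - one more compatible cable raises the cap
```

Keep per-cable spin-up current in mind too: loading every connector on one lead with drives that start simultaneously stresses that cable.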
 
Look on Ebay for a LSI 9207-8i
Yes, I'd forgotten the 9207 is PCIe Gen.3 and hence faster than the older 9211 which I believe is PCIe Gen.2.

Just a note of caution, regardless which HBA you buy. The heatsink on the LSI chipset can get exceedingly hot because these cards are designed for use in servers with good airflow. The 9207 will run 2 or 3W hotter than the 9211. I fix a small 40mm, 50mm or 60mm fan to the heatsink to keep it cool.
https://www.thessdreview.com/our-re...7-8i-pcie-3-0-host-bus-adapter-quick-preview/
 
I don't really like the idea of the OS always booting from a USB drive.
When I started using FreeNAS (the precursor to TrueNAS) back in 2018, it was still common practice to boot from 8GB USB memory sticks. My HP servers have a single internal USB port on the motherboard and I used fast Kingston USB flash memory. I did not want to use one of the valuable SAS ports for the TrueNAS boot drive, hence booting from USB.

More recently things have changed and you are now advised not to boot TrueNAS from USB flash drives, which is why I switched to 32GB mSATA SSDs installed in small mSATA to USB converter boards. They still plug into the internal USB ports in the HP servers, but no longer have the same reduced number of writes as standard USB memory sticks. SSD is now the recommended boot drive for TrueNAS.

Also, wouldn't it be better to start this as a Windows PC?
Yes. One of my desktop systems boots to Windows Server 2008 R2 from one IDE hard disk and TrueNAS Core from another IDE hard disk. For some reason I couldn't get any SATA drives to work on the motherboard SATA ports, so switched to older IDE/PATA. As usual there's an LSI SAS controller for the SATA array used with TrueNAS.

The Windows server boot option was just for running long SMART tests on all 8 drives, before committing them to TrueNAS. In theory, I could add more drives and use them with Windows Server, but with 10 hard disks installed, this case is now full. I have a Lian Li V2000 case which will currently accept 16 x 3.5" drives, but with room for 4 more if necessary. My next build perhaps?

HDD: 8 × €180 = €1440, for a total of 10 HDDs
Some people advise against buying all your drives at the same time from the same supplier. The reason is they'll probably all be from the same batch with the same characteristics and potential weaknesses. As a result, in a few years time, when one drive starts to fail, the others may not be far behind. This can cause problems during resilvering if more drives reveal hidden data corruption or failure modes. You can guard against "bit rot" by running regular scrubs, but backups elsewhere are essential.

N.B. This is a batch related problem. You should still buy the same drive model for all disks in the array, but from different suppliers or in several smaller orders, spaced out in time, to avoid getting all your drives from one batch. That way you're (hopefully) reducing the risk of buying a complete set of drives with bad components or faulty assembly.

I had problems with a 6TB Toshiba N300 NAS drive going bad after 6 years in an array of 6 identical drives, all bought at the same time. The really annoying thing is the drives had been running for a total period of only 6 days! I switch on this server several times each year to make backups, hence the low usage.

I can afford 64GB of RAM since it's only €100
An excellent idea. I recommend 2 x 32GB, not 4 x 16GB. If you want to run XMP, 2 DIMMs will probably clock faster than 4 DIMMs.

I seem to have found a trusted seller on AliExpress which has stock in Germany, so the import fee won't be as high as importing from the USA.
I'm not based in the States which seems to be much cheaper for computer gear. AliExpress add 20% VAT on everything I buy from China, but I can escape Import Duty if I keep each order down below the equivalent of approx. 170 Euros. I made this mistake on eBay last year on a big order from China and got hit by an extra 60 Euro import duty charge.

So I made an estimate on PCPartPicker with all the parts minus the HDDs and it says 240W.
10 HDD × 2A × 12V = 240W
For a total of 480W, I might settle for 600W or 650W to be really sure; it's also just €20 more than the 500W Gold.
After the initial surge, you should find your system drops down to below 200W. My HP servers with two (dual redundant) 450W PSUs in each chassis and 6 or 8 drives consume roughly 110W when idle and 140W when the drives are busy reading/writing data. Other heavy duty processing tasks would add to the power figures.

I've used Corsair 750W and 850W RM-series PSUs in some builds, simply because they have plenty of SATA power connections and being modular, you can ditch the unwanted PCIe GPU power leads and fit more SATA leads. I don't need all that power, but Corsair PSUs are easier to obtain where I live.
 
I'll check to see if they are 100% compatible.
I spend a lot of time with a multimeter (set to Ohms) and a known good cable from the PSU, when buying more Corsair power cables on eBay or Amazon. After double, then triple checking they're identical, pin for pin, I connect the new SATA lead to an inexpensive ATX power tester (top connection in photo below) and switch on.

https://www.amazon.de/-/en/APKLVSR-Supply-Computer-Connections-Digital/dp/B0CCRQHXLW

[image: ATX power supply tester]


Even then, if I have any doubts, I connect an old sacrificial hard disk to the new SATA lead and test again. If I've made a stupid mistake, the old drive dies, but so what? Hopefully, the PSU won't die too, but I'll avoid destroying a really expensive drive.