Question: Can you mirror an M.2 drive with a SATA drive?

Apr 21, 2025
I'm using an OptiPlex USFF and would like to install new SSDs and create redundancy. This machine can accept an M.2 drive and a SATA drive. Is it possible to set up a RAID 1 configuration so that if one drive fails, the other immediately takes over?
Edit: TIME is my main concern. Downtime is a killer for me, AND I need a portable device. Hence the reason for the USFF and for trying to use the ports native to this device (M.2 & SATA) to eliminate or lessen downtime as much as possible.
 
Welcome to the forums, newcomer!

I'm using an OptiPlex USFF
You might want to clarify which OptiPlex you own, as there are a number of them in Dell's portfolio, both new and old.

Off the top of my head, the answer is no. It might be possible with 2.5"/3.5" SATA drives.
 
I haven't purchased yet, but I'm looking at something like this:
DELL OPTIPLEX 5060 USFF i5-8500 3.0GHz. Definitely an older machine.

I appreciate your input. I'm honestly not sure what size drives fit in here, but I do know it can take one SATA drive and one M.2. I'm not positive, but I think this is true of the USFF form factor throughout the different OptiPlex models.
 
Yes, you can mirror between different types of drives, depending on how you set it up. If you use Windows or Linux software RAID, it doesn't care at all what type of drives you use as long as they're internal. You could RAID an old SCSI drive with an NVMe and an IDE drive if you wanted. (With workarounds you could even include USB.)
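
On Linux that's just a couple of mdadm commands. Here's a rough sketch driven from Python, just to show the shape of it (assumes root and mdadm installed; the device names are placeholders, so check yours with lsblk first):

```python
# Minimal sketch of Linux software RAID1 across mixed drive types via mdadm.
# mdadm doesn't care that one member is NVMe and the other SATA. It may ask
# for confirmation if the disks previously held data.
import subprocess

NVME = "/dev/nvme0n1"  # hypothetical NVMe member
SATA = "/dev/sda"      # hypothetical SATA member

subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=1",            # RAID1 = mirror
     "--raid-devices=2",
     NVME, SATA],
    check=True,
)

# The array is usable immediately; this just shows the initial sync progress.
print(open("/proc/mdstat").read())
```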

If you want to use the system chipset RAID (Intel or AMD), then it depends on the chipset. Intel chipsets don't seem to allow RAID between different drive types, but the last several generations of AMD chipsets do, and many new mini-PCs have that as a selling point. (Chipset RAID is "fakeRAID": it's actually drivers and the CPU doing the work in software, so not allowing mixed drives is really an arbitrary limitation on Intel's part. However, it does ensure that people don't try it and then complain to Intel about poor performance, which makes Intel's support easier and cheaper.)

A true hardware RAID will only work with the same drive types. A SAS controller will let you connect both SAS and SATA drives, but they can't be mixed in the same array, and of course there aren't any cards that mix NVMe drives with others.

However, keep in mind that performance will likely be constrained somewhat by the slower drive. In a RAID1 mirror, the OS caches data pending writes, and at some point the NVMe drive will just be sitting idle waiting for the SATA drive to finish writing, while the OS holds that data in the cache for the SATA drive or the app is held up waiting for the write to complete. The OS limits how much it will cache before telling the app "hold on, waiting for the drive to finish writing." This could mean a significant speed loss while writing large amounts of data. There could also be an increased risk of the drives being out of sync if power is lost during a write, since the NVMe drive could have completed the write before the SATA drive.

When doing reads from a RAID1, there likely would not be a performance drop, as the OS can read however much data it can get from each drive; while it's waiting for the SATA drive, it can keep reading more blocks from the NVMe. It just wouldn't usually be much faster than using an NVMe alone.

By definition, RAID1 has both drives constantly active. One doesn't have to "take over" if the other fails. If the SATA drive failed, your performance wouldn't change much. If the NVMe drive failed, you'd suddenly see much worse performance, since the max read and write would be SATA-based.
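
And if you go the Linux software RAID route, spotting that failure quickly is easy. A rough check (assumes an md array exists, like /dev/md0 from the earlier sketch):

```python
# Rough degraded-array check for Linux software RAID. In /proc/mdstat each
# member shows as U (up) or _ (failed): "[UU]" = healthy mirror,
# "[U_]" = one drive down.
with open("/proc/mdstat") as f:
    mdstat = f.read()

if "_" in mdstat:
    print("WARNING: an array is running degraded -- replace the failed drive")
else:
    print("all arrays healthy")
```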
 
Amazing breakdown, much appreciated! I'm very new to learning RAID. I'm really looking at a USFF OptiPlex because of the size and durability. I'm looking at older generations (5th-8th) because of price point (I don't want to be over $200 USD). That being said, I believe most of the machines within those parameters will run Intel chips. I don't really know what it means to 'use Windows or Linux software RAID' vs. 'use the system chipset RAID (Intel or AMD)'.
 
Can you?
Yes.

Should you?
Absolutely not.

RAID 1 is not just having a drive on the side, ready to take over if the primary dies.
Both drives are 100% in play, at the same time.
The array performance will be that of the slowest drive, the SATA III. So you're gimping your NVMe.

And a RAID 1 is only good for continued uptime in the event of a physical drive failure. It does nothing to protect the actual data. For that, you need a real backup.
And if you have that backup and can survive a whole hour of rebuild time...you don't need the RAID 1.


Bottom line - Do not do this.
 
RAID1 improves your recovery time for a drive failure.
RAID1 doesn't just improve recovery time; it eliminates the need to recover from a drive failure at all, because your system never actually goes down. It saves your butt if downtime costs you money, and it means you don't have to restore the OS/data from a backup. Nobody is claiming it protects you from all the other possible causes of data loss.

Should you?
Absolutely not.
While it's not an ideal situation performance- or value-wise, that doesn't mean NOBODY would be better off using mixed RAID rather than nothing at all. And depending on the system's capabilities, an M.2 SATA drive could be used, so both drives perform equally; or the cheapest, slowest older-model NVMe drive could be used, so the performance difference isn't as big and less money is spent on unused performance.

Personally, no, I would much rather get a system capable of holding matched drives, even if they were both SATA. But if that meant paying much more (including the costs of the drives) or getting a form factor that didn't work (as the OP seems to want the small size), then the value might not be there for some people.

I'm looking at older generations (5th-8th) because of price point (I don't want to be over $200 USD). That being said, I believe most of the machines within those parameters will run Intel chips. I don't really know what it means to 'use Windows or Linux software RAID' vs. 'use the system chipset RAID (Intel or AMD)'.
Keep in mind the requirements for Windows 11 if you're using Windows and don't want to stay on Windows 10 after support goes away. Microsoft has already changed the code once so that CPUs which could run older builds can't run the most recent builds at all, even with workarounds, because those older CPUs don't support the necessary instructions, and they could change it again later. They're already making half-hearted changes regularly to prevent installing on unsupported hardware via workarounds, and could eventually go so far as to hard-code the blocks.

Windows includes the capability to configure RAID arrays via Disk Management, using built-in RAID software, where the OS and the CPU do all the work of managing the array. (Linux and others include similar functionality.) You see all the drives in Disk Management and select the ones to include in an array and configure it. The OS copies the data to both drives in RAID1, splits the data up to stripe it across drives in RAID0, or does the striping and parity calculations for RAID5. That requires some amount of CPU time to do the work (a lot of CPU time for RAID5), and with older and slower processors with fewer cores, it could cause a noticeable performance impact. In ages past, you'd only use software RAID if you were running a multi-processor server and absolutely couldn't afford a hardware RAID card (they were very expensive) and were okay with slower performance. But an 8th-gen Intel with 4 to 6 cores like you're considering won't have an issue with RAID1. Even a 5th-gen with 4 real cores should be fine with just 2 drives, although it might start to lag if you were running an array with a lot of drives.
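
To make the "CPU does the work" part concrete, here's a toy illustration of the parity math a software RAID5 implementation performs on every write (pure Python is far slower than the kernel's optimized code; this just shows what the work is):

```python
# Toy illustration of software RAID5 parity: for each stripe, the software
# computes parity = XOR of the data blocks, and can rebuild any one lost
# block from the parity plus the surviving blocks. RAID1 skips all of this
# and simply writes the same data to both drives.
import os

block_a = os.urandom(1 << 20)  # 1 MiB data block
block_b = os.urandom(1 << 20)

parity = bytes(a ^ b for a, b in zip(block_a, block_b))

# Lose block_a? Rebuild it from the parity and block_b:
rebuilt = bytes(p ^ b for p, b in zip(parity, block_b))
assert rebuilt == block_a
```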

Chipset RAID like Intel's or AMD's, or cheap add-in controllers from Marvell and others, is called HostRAID, often dubbed FakeRAID because the manufacturers call it hardware RAID but it isn't really. The chipset doesn't perform the actual processing of the RAID data; instead, software drivers in the OS perform the same functions that OS software RAID does. The drivers and the controller hide the individual physical drives from the OS and present only a single drive in Device Manager and Disk Management. Performance is largely similar to software RAID, since the CPU is doing the calculations via the drivers. The biggest difference is that with HostRAID the array configuration is stored in the chipset, so you could switch to a different OS without needing to rebuild the RAID. (Of course, if you're wiping the OS, rebuilding the RAID isn't such a big deal unless you have data stored on a partition separate from the OS.)

The nice thing about Windows software RAID is that you can plug the drives into ANY Windows machine and the RAID will be recognized immediately, with no need to install any manufacturer drivers. If your motherboard fails, you can replace it with a completely different model and boot up, rather than needing the same chipset, or you can move the drives to a completely different build as an upgrade. (This could run into the same issues as moving a single drive to a new computer/motherboard, like leftover cruft such as drivers from the old machine, but Windows is pretty good about this these days.) With RAID1, you can even take just one of the drives and plug it into another machine and it will be read without issue, although Disk Management will show a broken RAID array with a missing drive, again with no need to worry about the chipset. (If you put that drive back into the original machine, you would have to tell Disk Management to re-sync the drives.)
 
While it's not an ideal situation performance- or value-wise, that doesn't mean NOBODY would be better off using mixed RAID rather than nothing at all.
In the consumer realm, a RAID 1 does little.
You still need a real backup.

And given differing drives, the array performs at the speed of the slowest.

So...Why?
What specific benefit does it bring to the table?
 
Love the interaction here, and I appreciate everyone's input. TIME is my main concern. Downtime is a killer for me, AND I need a portable device. Hence the reason for the USFF and for trying to use the ports native to this device (M.2 & SATA) to eliminate or lessen downtime as much as possible. I probably should have stated that in my OP. Also, the operating system will be Linux.
 
Consider a backup from a good full-drive image.
An hour?
Why are you insistent on putting your values onto OP? YOU think that's okay, but he doesn't seem to. Most people here use the "your drive could die at any time" argument for why they should have backups, but here you're saying RAID1 wouldn't have any value because he's only going to have one drive failure every 10 years. So should he be worried about the risk of drive failure or not?
Hence the reason for the USFF and for trying to use the ports native to this device (M.2 & SATA)
For clarity, M.2 is a physical format that supports both the NVMe and SATA protocols, if the chipset and wiring support it. That can be confusing, since SATA is also used to describe the connector type for SATA-protocol drives. As an example, the 5060 Micro (USFF isn't a thing; it's just Micro and SFF) has an M.2 slot that supports both NVMe and SATA protocol drives, so you could use SATA for both drives. Depending on brand, a SATA M.2 drive can be a little cheaper than an NVMe model of the same capacity (like $20), though for some brands they're MORE expensive. But if you're going to limit overall performance to what the 2.5-inch SATA drive can do anyway, you may as well save that $20 and get a SATA M.2 drive.
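
Since you mentioned Linux: a quick way to see which protocol each installed drive actually speaks, whatever its physical shape, is lsblk's transport column. A one-liner, wrapped in Python to match the earlier sketches:

```python
# Lists each physical drive with its transport ("sata" vs "nvme"), size, and
# model -- handy for confirming whether an M.2 drive is SATA or NVMe.
import subprocess

subprocess.run(["lsblk", "-d", "-o", "NAME,TRAN,SIZE,MODEL"], check=True)
```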
 
Why are you insistent on putting your values onto OP? YOU think that's okay, but he doesn't seem to.
Just trying to clear up some possible misconceptions, which, regarding "RAID", are very common.

NVMe SSD + SATA III SSD + RAID 1 runs at the speed of the slowest drive in the array.
The SATA III.

Why gimp the NVMe drive?


As said initially...can it be done?
Sure.
Should it be done?
No.
 
Should it be done?
No.
Unless one's particular needs make other considerations more important than whether the NVMe drive's performance capability is wasted. If SATA speed is good enough for the user, and they aren't paying a premium for the NVMe drive, then the solution is effective for that user.

As I pointed out, it's also possible that READ performance will still be quite high, even higher in some cases (at least measurable in precise tests), because the OS or controller can read different blocks from the two drives simultaneously, loading different parts of the same file from each drive. It doesn't have to be a one-to-one ratio, so with RAID1, if the OS is still waiting for one block of a file from the SATA drive, it could read 4 blocks from the NVMe during that time, getting all 5 blocks of the file in slightly less time than the NVMe alone would have taken. E.g., if it takes the SATA drive 1 second to read a block and the NVMe drive 0.25 seconds, then the total read would take 1 second; using only the NVMe, it would have taken 1.25 seconds. (Obviously that depends heavily on the real speed of each drive, the size of the file, and the like. It almost certainly wouldn't be consistent enough to be a factor in choosing to set up RAID1 this way, but it eliminates at least one argument against it.)
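
To put numbers on that hypothetical (same made-up per-block speeds as above, not benchmarks):

```python
# Worked version of the hypothetical: a 5-block file read from a RAID1
# mirror, with reads split across both members in parallel.
SATA_PER_BLOCK = 1.00  # s/block, made-up SATA speed from the example
NVME_PER_BLOCK = 0.25  # s/block, made-up NVMe speed from the example

raid1 = max(1 * SATA_PER_BLOCK, 4 * NVME_PER_BLOCK)  # 1 block SATA + 4 NVMe
nvme_alone = 5 * NVME_PER_BLOCK                      # all 5 from the NVMe

print(f"RAID1 mirror: {raid1:.2f}s, NVMe alone: {nvme_alone:.2f}s")
# -> RAID1 mirror: 1.00s, NVMe alone: 1.25s
```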

Actual write performance will be reduced to the speed of the SATA drive, but in many cases it wouldn't be noticed. For smaller amounts of data, the OS's own caching lets the system/application become responsive again before the write is physically finished, so it doesn't matter that the SATA drive is still writing data the NVMe drive already completed (and if a read were needed right after, the NVMe drive could serve it while the SATA drive finished the write). So at worst, writes are no slower than the SATA drive, and often they won't feel any different from an NVMe drive, because small writes don't depend on the actual drive at all, just the RAM cache. (And Windows can potentially use a pretty huge amount of RAM as disk cache.)
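
You can see that caching effect for yourself with a rough demo (writes a temp file in the current directory; timings will vary wildly by machine):

```python
# Rough demo of OS write caching: write() returns once the data is in the
# OS page cache; fsync() is the wait for the physical drive that apps
# normally never see.
import os
import time

data = os.urandom(64 * 1024 * 1024)  # 64 MiB of junk

with open("cache_demo.bin", "wb") as f:
    t0 = time.perf_counter()
    f.write(data)                # "done" at RAM speed
    buffered = time.perf_counter() - t0
    f.flush()
    os.fsync(f.fileno())         # now actually wait for the drive
    durable = time.perf_counter() - t0

print(f"write() returned after {buffered:.3f}s; durable after {durable:.3f}s")
os.remove("cache_demo.bin")
```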

Yet again I feel like buying a cheap machine to test this out on.