What's the difference between Raid 0 and Raid 1? (1TB x 2)
Raid 0 is disk striping.
Raid 1 is disk mirroring.
Raid 5 (if supported) is disk striping with parity.
Raid 0: Breaks your data into equal parts (stripes) across any number of drives (2 or more). The idea is that the more drives you have, the faster it goes: 5 drives in Raid 0 means you're writing (and reading) to 5 drives at the same time, with a slight overhead. The problem is that because the data is spread evenly across all of the drives, if one drive fails you lose everything on the array. So increasing the number of disks increases speed, but it also increases the chance of failure. (In theory, when an SSD wears out from too many writes, most will drop into a permanent read-only mode, so you could still pull your data off without losing any.) Raid 0 is for speed and more speed. I wouldn't keep family photos on a Raid 0 array without backups.
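Here's a rough Python sketch of the striping idea, purely as an illustration; the function names and the 4-byte stripe size are made up, and the "drives" are just lists of byte chunks:

```python
# Toy sketch of Raid 0 striping: deal fixed-size chunks round-robin across N "drives".
# Names (stripe_write, stripe_read, stripe_size) are made up for illustration.

def stripe_write(data: bytes, num_drives: int, stripe_size: int = 4) -> list[list[bytes]]:
    """Split data into stripe_size chunks and deal them round-robin across drives."""
    drives = [[] for _ in range(num_drives)]
    chunks = [data[i:i + stripe_size] for i in range(0, len(data), stripe_size)]
    for i, chunk in enumerate(chunks):
        drives[i % num_drives].append(chunk)   # chunk 0 -> drive 0, chunk 1 -> drive 1, ...
    return drives

def stripe_read(drives: list[list[bytes]]) -> bytes:
    """Reassemble the data by reading the drives back in the same round-robin order."""
    out = []
    for stripe in range(max(len(d) for d in drives)):
        for drive in drives:
            if stripe < len(drive):
                out.append(drive[stripe])
    return b"".join(out)

if __name__ == "__main__":
    drives = stripe_write(b"family_photos_2024.zip contents", num_drives=3)
    assert stripe_read(drives) == b"family_photos_2024.zip contents"

    drives[1] = []              # one drive dies...
    print(stripe_read(drives))  # ...and the data no longer reassembles correctly;
                                # every chunk that lived on drive 1 is gone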
Raid 1: Disk mirroring (2 drives). What you do to one drive, you do to the other. The problem is that you lose half your drives (and space). So with this setup on two 1TB M.2 NVMe SSDs, you would only see one accessible drive with 1TB of space (minus the obvious partitioning overhead).
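And the mirroring idea, same caveat that this is just a toy sketch with made-up names:

```python
# Toy sketch of Raid 1 mirroring: every write goes to every drive,
# so each one holds a full copy. Names are made up for illustration.

def mirror_write(data: bytes, drives: list[bytearray]) -> None:
    """Append the same bytes to every mirrored drive."""
    for drive in drives:
        drive.extend(data)

drive_a, drive_b = bytearray(), bytearray()
mirror_write(b"family_photos_2024.zip contents", [drive_a, drive_b])

drive_a.clear()          # one drive dies
print(bytes(drive_b))    # the surviving drive still has everything
```

You paid for two drives' worth of flash but only ever get one drive's worth of usable space out of it.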
Raid 5: (3+ drives) I don't see much support for this anymore. It's essentially Raid 0, but it sets aside one drive's worth of space for parity, spread across all of the drives.
Data = D, Parity = P

Stripe | Disk 1 | Disk 2 | Disk 3
   1   |   D    |   P    |   D
   2   |   D    |   D    |   P
   3   |   P    |   D    |   D
   4   |   D    |   P    |   D
Essentially, if a single drive in the array fails, you can rebuild (regenerate) the missing disk: replace the dead drive with a new one of equal or greater size and let the array regenerate the missing data.
If Disk 3 failed, for example: on the first row (data stripe) you lose the chunk that was on Disk 3, but you still have the data chunk on Disk 1 and the parity on Disk 2, which was calculated (as a simple XOR) from the two data chunks in that stripe. The controller can essentially 'triangulate' with the data and the parity left on Disks 1 and 2 to recreate the data for Disk 3.
On stripe #2, Disk 1 and Disk 2 carry all the data, so no data is lost; only the parity is gone, and it's simply recalculated from the intact data when the lost volume is rebuilt.
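The parity in standard Raid 5 is a bytewise XOR of the data chunks in the stripe, which is why a single missing chunk can be recomputed. A toy sketch of that rebuild for the first stripe in the table above (the chunk names are made up):

```python
# Toy sketch of Raid 5 parity: parity is the bytewise XOR of the data chunks
# in a stripe. chunk_d1 / chunk_d3 / parity_d2 are made-up names for illustration.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Stripe 1: data chunks live on Disk 1 and Disk 3, parity on Disk 2.
chunk_d1 = b"ABCD"
chunk_d3 = b"WXYZ"
parity_d2 = xor_bytes(chunk_d1, chunk_d3)   # written at the same time as the data

# Disk 3 dies. XOR the surviving data with the parity to get the lost chunk back.
rebuilt_d3 = xor_bytes(chunk_d1, parity_d2)
assert rebuilt_d3 == chunk_d3
print(rebuilt_d3)                            # b'WXYZ'
```

This works because XOR undoes itself: parity = D1 ^ D3, so D1 ^ parity gives D3 back.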
As you can see, the data and the parity are staggered across all of the drives, which is what makes this possible. It's not quite as fast as Raid 0, because there is some latency and overhead on writes while the parity is calculated, but aside from a little chipset/CPU usage it isn't noticeable outside of benchmarks. You also lose one drive's worth of space to parity (1/3 of your total space in a 3-disk Raid 5 array). It's a good compromise between Raid 0 and Raid 1, but it's only fault tolerant against a single drive failure. You can use more drives for more speed as well.
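If you want the usable-space math in one place, here's a quick sketch (the function name is made up, and it assumes all drives are the same size):

```python
# Usable capacity for the common levels, assuming identical drives.

def usable_tb(level: int, drives: int, size_tb: float) -> float:
    if level == 0:
        return drives * size_tb          # Raid 0: every byte is usable
    if level == 1:
        return size_tb                   # Raid 1: one drive's worth, the rest are copies
    if level == 5:
        return (drives - 1) * size_tb    # Raid 5: lose one drive's worth to parity
    raise ValueError("only Raid 0/1/5 covered here")

print(usable_tb(0, 2, 1.0))   # 2.0 TB  (the two 1TB drives from the question)
print(usable_tb(1, 2, 1.0))   # 1.0 TB
print(usable_tb(5, 3, 1.0))   # 2.0 TB
```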
For whatever reason, most new mainboards don't support Raid 5 anymore; Raid 0 and Raid 1 are still pretty common.
TLDR: Those are the typical non-enterprise Raid levels (there are tons more). I would personally Raid 0 the drives and buy a 2TB - 4TB SATA SSD or a SATA hard drive for mass storage, depending on budget.
Alternatively, buy one 2TB SSD and one cheap mass-storage SATA SSD or HDD and just don't worry about Raid. Drive endurance (and speed, in some cases) is a lot higher on the higher-capacity SSDs, which is a real concern if you're a power user and/or don't want to keep replacing drives.