Imagine the rebuild time for a 10 terabyte drive ...
No one who wants to store 20 terabytes of data is going to use RAID 0.
Putting 20 terabytes of raw disk (10 terabytes effective) on just two drives in a RAID 1 is also risky, because a second drive can fail during your best-case 37-hour rebuild after the first one dies. A rebuild means reading the entire surviving 10 terabyte drive and writing it all back out, call it 20 terabytes of total I/O:

20 terabytes / 150 megabytes a second ≈ 133,333 seconds ≈ 2,222 minutes ≈ 37 hours
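If you want to sanity-check that, here is the same back-of-the-envelope as a quick Python sketch (the 150 megabytes a second sustained rate is an assumption, and real rebuilds are usually slower once the array is under load):

    # Rough rebuild-time estimate; assumes a flat, sustained transfer rate.
    TB_IN_MB = 1_000_000  # decimal terabytes, the way drive vendors count

    def rebuild_hours(total_io_tb, mb_per_sec=150):
        """Hours to move total_io_tb terabytes at a sustained mb_per_sec."""
        return total_io_tb * TB_IN_MB / mb_per_sec / 3600

    print(f"{rebuild_hours(20):.0f} hours")  # ~37 hours for 20 TB of I/O
    print(f"{rebuild_hours(10):.0f} hours")  # ~19 hours counting only the write side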
This feels like an accident waiting to happen.
A better, safer way to get 20 terabytes would be seven 4-terabyte drives in a RAID 6, or better yet raidz2 for the ZFS inclined.
With ZFS, I would stripe two raidz1 vdevs of three 4-terabyte drives each, effectively a RAID 50, for 16 terabytes usable.
You could add a third raidz1 vdev if you really needed more than 20 terabytes, or add a mirrored vdev of two 4-terabyte drives for exactly 20 terabytes.
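The capacity math for those layouts, as a rough Python sketch (it ignores ZFS metadata, slop space, and padding, so real usable space comes out a bit lower):

    # Usable capacity of a single raidz vdev: data drives times drive size.
    def raidz_tb(drives, drive_tb, parity):
        return (drives - parity) * drive_tb

    print(raidz_tb(7, 4, parity=2))            # RAID 6 / raidz2: 20 TB
    print(2 * raidz_tb(3, 4, parity=1))        # two striped raidz1 vdevs: 16 TB
    print(3 * raidz_tb(3, 4, parity=1))        # a third raidz1 vdev: 24 TB
    print(2 * raidz_tb(3, 4, parity=1) + 4)    # or a 2 x 4 TB mirror instead: 20 TB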
Of course, using 7 or even 9 drives to hit your target is much more expensive than buying two 10-terabyte drives, but that's the price you pay if you want those 20 terabytes to actually be readable in 2 years.