SCSI is dead, unless you mean iSCSI or SAS (Serial Attached SCSI). I never see/hear people referring to SAS as "SCSI", though.
I call BS on this.
Servers don't use M.2, generally speaking. They use U.2 and increasingly E1 form factors (E1.S and E1.L):
"A deep dive into the emerging E1.S & E1.L form factors which are transitioning into mainstream data centre SSD form factors." (www.simms.co.uk)
I'm seeing some adoption of E3.S, as well.
I'm not sure if it's common to have a partially-populated U.2 port, but you certainly could reduce the lane count.
Haven't you ever heard of command queuing? Normally, the OS sends multiple overlapping requests to the drive, and the drive's controller firmware works out the best order in which to perform them, according to the optimal seek pattern for the head. That's hard for the OS to do itself, since it knows neither the rotational position of the platters nor the seek/access times for moving the head different distances.
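You can even see how deep that queue is. A minimal sketch, assuming Linux and a SATA/SAS drive that shows up as sda (adjust the device name): it reads the queue depth the kernel negotiated with the drive, i.e. how many overlapping commands the firmware is allowed to hold and reorder.

```python
# Minimal sketch, assuming Linux and a SATA/SAS drive exposed as /dev/sda.
# Reads the queue depth negotiated with the drive (how many overlapping
# commands the firmware may hold and reorder) plus the block layer's own
# request queue size.
from pathlib import Path

dev = "sda"  # adjust to your device

queue_depth = Path(f"/sys/block/{dev}/device/queue_depth").read_text().strip()
nr_requests = Path(f"/sys/block/{dev}/queue/nr_requests").read_text().strip()

print(f"{dev}: drive command queue depth = {queue_depth}")  # SATA NCQ caps this at 32
print(f"{dev}: block-layer request queue = {nr_requests}")
```

For SATA NCQ that tops out at 32 outstanding commands; SAS tagged command queuing and especially NVMe allow far deeper queues.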
Dude, the server industry embraced NVMe for SSDs long ago. Don't you think they would've figured out NVMe hot swap by now?
(dated 2019, I might point out)
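For what it's worth, the OS-side plumbing for NVMe hot swap has been there for years. A rough sketch of the manual version on Linux, assuming you know the drive's PCIe address (the address below is made up); surprise hot-plug additionally depends on hot-plug-capable slots/backplanes, and often VMD on Intel platforms.

```python
# Rough sketch of manually detaching and re-detecting an NVMe drive on Linux.
# Requires root; the PCIe address is a hypothetical placeholder.
from pathlib import Path

pci_addr = "0000:41:00.0"  # hypothetical address of the NVMe drive

# Detach the device from the kernel before physically pulling it.
Path(f"/sys/bus/pci/devices/{pci_addr}/remove").write_text("1")

# After inserting the replacement, rescan the PCIe bus to pick it up.
Path("/sys/bus/pci/rescan").write_text("1")
```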
Well, as implied by your subsequent comment, NVMe does support mechanical HDDs.
One argument for this is quite clear: reduce hardware cost & complexity. By changing the drive interface to NVMe, you get rid of the SATA & SAS controllers on server boards/SoCs, and the driver support for them in the OS is no longer needed, either.
If you consider that an AMD EPYC server has 128+ PCIe lanes, you easily have more than enough to connect however many HDDs you might want to stuff into a chassis.
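To put rough numbers on it, a quick back-of-the-envelope lane budget (the reserved lanes and per-drive widths below are just illustrative assumptions):

```python
# Back-of-the-envelope PCIe lane budget for an all-NVMe HDD chassis.
# All figures below are illustrative assumptions, not a real parts list.
total_lanes = 128     # single-socket EPYC exposes 128 PCIe lanes
reserved = 16 + 8     # e.g. x16 for a NIC, x8 for boot SSDs / misc
lanes_per_hdd = 1     # an HDD can't saturate even a single Gen4 lane

available = total_lanes - reserved
max_hdds = available // lanes_per_hdd
print(f"{available} lanes left over -> room for {max_hdds} x1 NVMe HDDs")
# 104 lanes left over -> room for 104 x1 NVMe HDDs
```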
This is not true. If you look at the SMART stats, there are ample pre-failure warning signs.
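For example, the classic tells are attributes like reallocated and pending sectors. A hedged sketch, assuming Linux with smartmontools installed and a drive at /dev/sda (attribute names/IDs vary a bit between vendors):

```python
# Sketch: pull a few classic pre-failure SMART attributes with smartctl.
# Assumes smartmontools is installed and the drive is /dev/sda.
import subprocess

WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable", "Reported_Uncorrect")

out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    if any(attr in line for attr in WATCH):
        print(line)
```

If those counters are climbing, the drive is telling you well ahead of time that it's on the way out.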
This is not necessarily true. RAID controllers tend to be paranoid and will drop a disk from the array if it ever encounters a failed read, but that doesn't mean the drive is "toast".