Yes, we know all that, and the rest. I said there was no difference between SSD partitions.
Partitions are contiguous storage spaces.
The SSD's options are:
- spread data throughout the full address space, and go through various hoops to present artificial partition data, information and limitations when requested.
- divide the full address space into partitions and restrict data to the resulting partitions accordingly.
You think the first is the logical one, I think the second.
So far as I know, and so far as your tests show, the second is what they do.
The third option is what I think most do, if they segment pSLC at all (because not all do): divide the pSLC cache-capable areas into sections. That has nothing to do with the partition allocations, though I didn't test any further than 1 versus 2 partitions. If the cache were broken up into odd numbers like 3 or 7 sections it might look even weirder, e.g. with partitions 1, 4 and 7 having available pSLC and the others not. Testing that on multiple drives could take a year.
I KNOW that the drive controllers "go through various hoops" because the translation from LBA addressing to physical space has been happening since LBA was introduced. That's why it's LOGICAL block addressing. (I'm not sure when sector reallocation actually became a thing.) And SSDs have had to do it much more heavily due to the use of multiple channels, and particularly since wear leveling was introduced. That mapping MUST be tracked constantly and updated every time the wear leveling algorithm moves data from one block to another; on HDDs, remapping only ever happened due to bad block detection.
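That logical-to-physical translation can be sketched as a simple mapping table. This is a toy model under my own assumptions, not any real controller's design; actual FTLs track pages, blocks, and wear counts:

```python
# Toy flash translation layer (FTL) sketch -- purely illustrative.
# The host only ever sees logical block addresses (LBAs); the
# controller remaps them to physical locations whenever it likes.

class ToyFTL:
    def __init__(self, num_lbas):
        # logical -> physical map; starts as an identity mapping for simplicity
        self.l2p = {lba: lba for lba in range(num_lbas)}

    def read(self, lba):
        # host asks for an LBA; controller looks up wherever that data lives now
        return self.l2p[lba]

    def wear_level_move(self, lba, new_physical):
        # wear leveling relocates the data, then updates the map --
        # the LBA the host uses never changes
        self.l2p[lba] = new_physical

ftl = ToyFTL(8)
ftl.wear_level_move(3, 100)   # controller moves LBA 3's data to physical block 100
print(ftl.read(3))            # prints 100 -- the host still just asks for LBA 3
```

The point of the sketch: the remapping is invisible to the host, which is exactly why the controller needs no notion of partitions.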
There is no need for the controller to be "aware" of the locations of the partitions because the partition table has that data, and the OS knows where the data it needs is located using the file table and partition table. The drive tells the OS "I have LBA addresses 0 to 999,999,999 available" and the OS knows that means it has 512GB of space (with 512-byte sectors, and ignoring decimal/binary marketing where it would report somewhat less than 1 billion). You tell the OS to make two partitions and it records in the partition table that 0 to 499,999,999 is partition 1 and 500,000,000 to 999,999,999 is partition 2.
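The arithmetic above can be sketched like this (the numbers are just the ones from my example):

```python
SECTOR = 512                  # bytes per sector in this example
total_lbas = 1_000_000_000    # drive reports LBAs 0..999,999,999

capacity_bytes = total_lbas * SECTOR
print(capacity_bytes // 10**9, "GB (decimal)")   # prints: 512 GB (decimal)

# The OS records the split in the partition table. To the drive this
# table is just ordinary stored data; it never interprets it.
partitions = [
    {"num": 1, "first_lba": 0,           "last_lba": 499_999_999},
    {"num": 2, "first_lba": 500_000_000, "last_lba": 999_999_999},
]
```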
That's just a cosmetic notation for the user and applications, really. The drive has no need to know at all (and doesn't care what the partition table stored on it says), and the OS itself doesn't really care (*NIX treats partitions purely as part of the folder structure as far as a user is concerned, unlike Windows "drive" letters that make it look like another disk), because the OS still has to track the full billion LBA addresses for the entire drive in the file table, and the controller obviously still has to know where those addresses are physically. Many of the purposes of partitioning have been largely lost; even Linux just uses one big partition these days for what used to be separated into several, and issues with partition size limits have been eliminated.
Whether mechanical or SSD, you ask for File X, and the OS still has to know that its start address is 612,983,486 (I'm just assuming 512-byte allocation units for ease so I don't have to calculate 4K chunk locations) and ask the drive for that address's data plus however many more complete the file (including possibly jumping to other remote addresses). It doesn't ask the controller to first move to partition 2 and then look for 112,983,486 within that partition. Controllers don't even really know where file data is or isn't located; an SSD just knows where data ISN'T (i.e., an erased block, or a block that is available for erasure after TRIM or garbage collection, which isn't the same as knowing the file system data).
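Here's that lookup sketched with the same made-up numbers: the OS resolves a partition-relative offset to an absolute LBA before it ever talks to the drive, so the controller only sees drive-wide addresses.

```python
# Hypothetical example: File X lives in partition 2, which starts at
# LBA 500,000,000. The file system tracks an offset within the
# partition, but the request sent to the drive is the absolute LBA.

partition2_start = 500_000_000
offset_in_partition = 112_983_486   # File X's start, relative to partition 2

absolute_lba = partition2_start + offset_in_partition
print(absolute_lba)                 # prints 612983486 -- the only address the drive sees
```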
If the controller were aware of partition tables it would need to be made aware of every possible partition style. Partition tables are stored data, and the controller is data-agnostic. You can write anything you want to the drive, even if it's not a partition table, or make up your own new partition scheme that nobody else understands. It can be raw data that only your application can read and everyone else thinks is random with no partitions. MBR and GPT are the current common ones but others have existed and could be used on the drives, and the manufacturers would not be expected to make their controllers aware of them all. If you could make an old Macintosh II capable of seeing an NVMe drive you could use the Apple Partition Map and it would work exactly the same (up to 2TB).
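To illustrate that a partition table is just stored bytes the controller never interprets, here's a minimal sketch that decodes one MBR partition entry from a raw 512-byte sector (field layout per the classic MBR format; the entry values themselves are made up for illustration):

```python
import struct

def parse_mbr_entry(sector, index):
    # To the drive this sector is opaque data; only host software assigns
    # it meaning. MBR entry layout (16 bytes each, table at offset 446):
    # status, CHS start, type, CHS end, starting LBA (LE32), sector count (LE32).
    off = 446 + 16 * index
    status, _chs_s, ptype, _chs_e, start_lba, count = struct.unpack_from(
        "<B3sB3sII", sector, off)
    return {"type": ptype, "start_lba": start_lba, "sectors": count}

# Build a fake sector containing one entry: type 0x83 (Linux), starting
# at LBA 2048, 500,000 sectors long -- invented values.
sector = bytearray(512)
struct.pack_into("<B3sB3sII", sector, 446, 0x80, b"\0\0\0", 0x83, b"\0\0\0",
                 2048, 500_000)
sector[510:512] = b"\x55\xaa"   # MBR signature -- also just data to the drive

print(parse_mbr_entry(bytes(sector), 0))
```

Swap in GPT, Apple Partition Map, or a scheme you invented yourself and nothing changes from the drive's point of view: it stores and returns the bytes either way.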