A new report claims that AMD is preparing Smart Access Storage (SAS) to accelerate storage performance on its Ryzen processors.
AMD Preps Smart Access Storage To Accelerate SSD Performance
I always have to laugh when someone mentions DirectStorage...
One of our favorite games in the family is ARK Survival Evolved. It has lots of add-ons and maps available and has grown from just around 100GB initially to something like 400GB currently.
But size isn't the main issue: those 400GB are spread across some 150,000 files in 13,000 folders, and launching the game or joining a session evidently needs to read and parse quite a few of them, apparently in a fairly random order.
Since the install is so big, I put it on a Windows Server 2019 share backed by a RAID0 of SATA SSDs, confident that the 10Gbit network between the gaming workstations would be enough to carry the SSD performance across the wire. After all, I had tested the setup with a couple of VM images of several dozen GB each and reached the expected 1 GByte/s over the network.
But even though the network can fundamentally deliver the bandwidth, loading the game over the share still took far longer than from a single local SATA SSD (which itself can easily take a few minutes), and it was even slower than the local HDD loads I had wanted to replace with a shared pool instead of buying SSD storage for every PC. Could the SMB network really add that much overhead?
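A purely illustrative back-of-envelope sketch (made-up but plausible numbers, not measurements of ARK or my network): if every small file costs a few synchronous SMB round trips, the protocol chatter alone adds minutes to a launch, no matter how fat the pipe is.

```python
# Back-of-envelope: what the per-file round trips alone cost, before any
# file data even moves. All numbers are assumptions for illustration only.

files_touched = 100_000   # guess at how many files ARK touches during a launch
round_trips_per_file = 3  # assumed synchronous SMB exchanges per file (open/stat/read)
lan_rtt = 0.0003          # assumed 0.3 ms round-trip time on a 10Gbit LAN

overhead = files_touched * round_trips_per_file * lan_rtt
print(f"pure protocol latency: {overhead:.0f} s")  # ~90 s of waiting, independent of bandwidth
```

And that is only the wire latency; whatever per-open work the server does on its side stacks on top of it.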
For a Linux Steam test session I didn't have a spare SSD for storage, so I used a rather ancient 2TB HDD I had lying around. When I launched the game under Linux, I didn't really expect it to come alive in under 15 minutes or so; I just wanted to have a look at the graphics.
But in fact it loaded way faster than from the local SSD JBOD on Windows!
Again, just in case you missed it:
a lowly HDD on Linux beat a RAID0 of SSDs on Windows Server 2019 Datacenter Edition!
Unfortunately I didn't find the time to run the test with Linux as the file-share host and Windows clients: that would have been really interesting!
Opening something like 100k files to start a game may be somewhat extreme. Lots of games pack their maps into large files and perform much better. ARK was evidently built from an Epic template that already involved a lot of small files in the original ShooterGame, and it grew "epic" with the wonderfully detailed, large maps they designed.
Whatever Microsoft does when opening a file, the overhead adds up big time when you deal with hundreds of thousands of them. I don't know if it is virus scanning/blacklist checking or a really inefficient way of walking the file system tree, but compared with how Linux performs the difference is orders of magnitude, and I am shocked that a VMS successor can perform that badly.
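Here is a rough sketch of the kind of micro-benchmark I mean, just to isolate the per-file open cost from raw bandwidth; the path is a placeholder for wherever your ARK install lives, and reading only the first few KiB of each file keeps the test about open/metadata overhead rather than throughput.

```python
# Rough sketch: time how long it takes just to open and read the start of
# every file under a directory tree. Works the same on Windows and Linux.
# The path below is a placeholder -- point it at a local disk or at the share.
import os
import time

root = r"D:\SteamLibrary\steamapps\common\ARK"  # hypothetical install location

start = time.perf_counter()
count = 0
for dirpath, _dirnames, filenames in os.walk(root):
    for name in filenames:
        try:
            with open(os.path.join(dirpath, name), "rb") as f:
                f.read(4096)  # first 4 KiB is enough to pay the open/metadata cost
            count += 1
        except OSError:
            pass  # skip locked or unreadable files

elapsed = time.perf_counter() - start
print(f"{count} files in {elapsed:.1f} s -> {count / max(elapsed, 1e-9):.0f} opens/s")
```

Point it at a local disk and then at the SMB share, once cold and once right after, and the differences I'm describing show up immediately.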
Gaming doesn't need a new storage API on Windows: it needs an OS that can actually use the hardware that's already there efficiently. And somebody had better tune file sharing too, because with lots of small files it performs far worse than a local hard disk, even with an SSD backend and a 10Gbit network: there must be dozens of synchronous, latency-heavy protocol exchanges before the first byte of file data actually goes across the wire. Large files easily saturate the 10Gbit link; small files bring it to a crawl even when you copy from RAM cache to NVMe.
I can only recommend that Microsoft engineers repeat this easy (and fun!) benchmark using ARK and start profiling their code!
Too bad ARK's looks under the Linux variant of Steam just aren't nearly as good, otherwise we'd have made the switch, just to cut down on those terrible load times.
BTW: restarts of ARK with a warmed-up file system cache do much, much better (locally, not over the network). Perhaps Windows remembers it has already virus-scanned those files, or it simply traverses file system trees in RAM much faster than those on disk.