My testing of Optane Memory 32 GB in RAID 0.

germanium

This is to provide some information not always revealed in tests provided by review sites.

I bought three 32 GB Optane Memory modules & installed them on my Asus Prime Z270-A. The motherboard only has two M.2 sockets, so I bought a PCI Express 3.0 x4 to M.2 adapter & installed the third module in the last PCI Express x16 slot, which is actually only x4 & connects to the chipset instead of the CPU. This is important because you cannot use the CPU-connected lanes to create a bootable RAID configuration on consumer-class, non-high-end motherboards. That capability is only available on Xeon & possibly Extreme-series Core i9 processors for high-end workstations.
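For anyone wondering where RAID 0's speed comes from, here's a quick sketch of the striping idea (my own illustration; the 128 KiB stripe size is an assumption, IRST's real internals aren't public):

```python
# Hypothetical illustration of RAID 0 striping across 3 drives.
# Stripe size is an assumed 128 KiB; IRST's actual internals are not public.
STRIPE_SIZE = 128 * 1024   # bytes per stripe (assumption)
NUM_DRIVES = 3

def map_offset(logical_offset: int) -> tuple[int, int]:
    """Return (drive index, byte offset on that drive) for a logical offset."""
    stripe_index, within = divmod(logical_offset, STRIPE_SIZE)
    drive = stripe_index % NUM_DRIVES                         # stripes rotate round-robin
    drive_offset = (stripe_index // NUM_DRIVES) * STRIPE_SIZE + within
    return drive, drive_offset

# A large sequential transfer touches all three drives (throughput roughly triples),
# while a single 4K QD1 read lands on just one drive (latency barely changes).
for off in range(0, 6 * STRIPE_SIZE, STRIPE_SIZE):
    print(off, map_offset(off))
```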

One of the first things I noticed was that I was getting massively lower performance than TweakTown got on their Optane Memory RAID 0 setup. Attempts to activate write-back caching would cause the computer to freeze & crash, just as reported here on Tom's Hardware. After much experimentation I was able to get write caching to work by installing version 16.7.0.1009 of the IRST driver/software, which I had downloaded before Intel pulled it from their website; this version worked without crashing. I was still getting considerably lower performance than TweakTown with similar motherboard hardware. The main difference between my system & theirs was that they had the 7700K processor & I only had the non-K 7700, which is not overclockable. I could now determine that most of the difference was due to their overclock to 5 GHz as opposed to my 4 GHz all-core turbo clock. Even that couldn't fully account for the performance difference at low queue depths; that remaining difference I believe is due to the Spectre fixes, which came out after the TweakTown article.

I was able to boost performance a little more by turning off hardware prefetch in the BIOS, overclocking the memory from 2133 to 2666, then giving the processor the largest BCLK overclock possible, which was only 102.5 MHz up from 100 MHz. This boosted the memory to 2733 & the all-core turbo clock to 4.1 GHz. With these settings & the IRST driver update I got substantially better performance, which, once compensated for TweakTown's overclock, was within the performance expected for the given clock speed, except at queue depth 1, but at least it was now close. I also raised the Fclk from 800 MHz to 1 GHz. These all made small but important contributions to boosting performance, which ultimately, according to the Anvil storage benchmark, went from a score of just over 14,000 to almost 18,000.
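The clock math behind those numbers is just multiplication off the base clock; a quick sketch (the 40x core multiplier is inferred from the 4.0 GHz stock all-core turbo, not read from the BIOS):

```python
# Everything scales off the base clock (BCLK); multipliers inferred, not read from BIOS.
bclk = 102.5                  # MHz, up from the stock 100 MHz
core_mult = 40                # 7700 all-core turbo multiplier (inferred from 4.0 GHz)
mem_ratio = 2666 / 100        # memory runs at a fixed ratio to BCLK

print(f"core clock: {bclk * core_mult / 1000:.2f} GHz")   # ~4.10 GHz
print(f"memory:     {bclk * mem_ratio:.0f} MT/s")         # ~2733 MT/s
```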

The lesson here is that these drives are incredibly sensitive to processor & memory clocking.

Another thing I noticed was that in 4K sequential reads, performance was almost identical to my old Samsung 960 Pro, but moving to 4K random reads, the Optane RAID setup totally obliterated the 960 Pro by a factor of almost 6 at 4K queue depth 1.

Also, turning off the write cache has a mild effect on reads with Optane & a moderate effect on writes, & turning off Windows caching has no further negative effect. On the 960 Pro, however, turning off Windows caching slows writes to the level of a standard HDD. It was hard to believe how bad the difference was with the 960 Pro, but ATTO made it very plain & clear: do not turn off Windows caching on the 960 Pro, as writes are very severely affected, though reads are not.

Ron Brandt

 

germanium

I'm not going to say it is a huge, noticeable improvement; it is pretty snappy, but so was the Samsung 960 Pro. This was largely an exercise to:

1. Learn how to set up a PCI express RAID 0.

2. Validate TweakTown's performance claims for such a setup.

I am not rich. I am disabled due to spine issues. This was for me the cheapest way to test Optane performance in my system. I was able to set this up for only about 150 dollars, as the drives were on sale for $41.99 each, plus $12.99 for the adapter card, before tax & shipping.

There is some loss of performance due to increased latency compared to the Optane 900P & 905P at low queue depths, but this setup actually has better sequential reads than either of those Optane drives. Can't get around the low write performance though. I would say random writes of small files at low queue depths are not too bad, though definitely not class leading.
 

germanium

I recently bought an Asus Strix Z390 motherboard & an Intel Core i9-9900K CPU & installed my Optane Memory RAID 0 setup on it using the M.2 sockets connected to the chipset, & was rather miffed at the lackluster low queue depth performance. Higher queue depths definitely improved compared to my old Z270 chipset motherboard, but the best I could get was about 20-25% less than the Z270 at queue depth 1 with 4K file sizes. That was with all the same speed tweaks as on the Z270, plus mild overclocking on top of that, to little effect as far as improving that aspect of performance.

I finally noticed a setting in the BIOS that allowed installing multiple NVMe drives in the x16 slot directly attached to the CPU's PCI-E lanes. The maximum supported was 3, & I just so happened to have 3 Optane Memory 32 GB modules. I quickly ordered an Asus Hyper M.2 X4 PCI-E X16 add-in card, but that order fell through, as someone beat me to the punch on the last one in stock. So I ordered the ASRock Ultra Quad M.2 version of the card, which offered the same functionality as the Asus card, & it worked great. I was able to attach it, with my Optane Memory modules, to the top PCI-E slot that goes directly to the CPU.

There were some slight issues forcing me to delete & reinstall Windows. It would only install to one Optane module, but I was able to get it to boot to Windows, at which time I installed the IRST driver & software & was able to form my 3-drive RAID 0 setup from that. Of course that wasn't the only issue, as the operating system still only saw 27 GB of my RAID drive. I was able to fix that by going to Control Panel > Administrative Tools > Computer Management > Storage > Disk Management, right-clicking my C drive, clicking Extend, & allowing it to use the whole unallocated drive space. This worked great; now I had the whole 82 GB of drive space available on my RAID 0 drive.
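For what it's worth, that extend step can also be scripted instead of clicked through; here's a minimal sketch using diskpart's script mode (assumes the volume letter is C & an elevated prompt):

```python
# Sketch: extend C: into the adjacent unallocated space, equivalent to the
# Disk Management steps above. Run from an elevated prompt; letter C assumed.
import os
import subprocess
import tempfile

script = "select volume C\nextend\n"   # extend takes all contiguous free space
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    path = f.name
try:
    subprocess.run(["diskpart", "/s", path], check=True)
finally:
    os.remove(path)
```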

Now the question: what about speed & latency? Did it improve? The answer is a wholehearted yes, on both latency & speed, though not everything improved. High queue depth tests did not improve, but neither did they lose much. Latency & queue depth 1 transfers improved dramatically, as did sequential writes, the things that matter to client computer usage. The thing that took a very slight hit was higher queue depth transfers, which are for the most part only applicable to servers & other professional uses, not client workloads.

For example, the Anvil storage benchmark gained over 2,000 points, in some cases pushing almost to 21,000, whereas on the Z390 chipset the scores were in the mid-to-upper 18,000's. On the Z270 the scores were in the mid 17,000's to almost 18,000, though it had better queue depth 1 latency than the Z390 chipset. CPU-attached RAID 0 storage blows the others out of the water, so to speak, in overall performance & especially in client-type workloads.

There are a couple of possible flies in the ointment in terms of backing up & restoring the operating system, & THIS SETUP IS NOT FOR GAMERS (capitals for emphasis), as you cannot run a graphics card in the top 2 CPU-attached PCI-E x16 slots with this RAID array in the top x16 slot, unless you have an Intel X299 or AMD Threadripper setup.

The Intel X299 platform, I understand, is very much a can of worms for those of us who aren't rich or tech savvy. Not good for us poor common folks who just want to game. I'm not a gamer myself; I just like to experiment & learn how to do these types of things in case someone wants help setting up such a system. The AMD Threadripper platform works well & is easier to configure, as well as costing less, but has significantly higher latency, making it no better at reading small files than flash-based drives. It does, however, do well elsewhere.

Found that doing backups & restores isn't quite as bad as I thought. As long as you go to the Control Panel & click Recovery, it allows you to make a recovery disk that will recognize your CPU-attached RAID drive. This disk has to be made after you have formed your CPU-attached RAID inside Windows using the Intel IRST driver; otherwise your CPU-attached RAID drive will not be recognized.

Also, don't bother forming the RAID drive inside the UEFI BIOS, as it will not be recognized by the Windows install disk even if you have the F6 driver install disk. That install disk only works for chipset-attached RAID drives, for installing the RAID driver to earlier versions of Windows. It is not needed at all for the latest versions of Windows 10.
 

germanium

Here are some of my tests from CrystalDiskMark.


-----------------------------------------------------------------------
CrystalDiskMark 6.0.2 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
  • MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
  • KB = 1000 bytes, KiB = 1024 bytes
Sequential Read (Q= 32,T= 1) : 4191.099 MB/s
Sequential Write (Q= 32,T= 1) : 895.659 MB/s
Random Read 4KiB (Q= 8,T= 8) : 1329.425 MB/s [ 324566.7 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 841.640 MB/s [ 205478.5 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 389.082 MB/s [ 94990.7 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 340.434 MB/s [ 83113.8 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 178.410 MB/s [ 43557.1 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 235.278 MB/s [ 57440.9 IOPS]
Test : 1024 MiB [C: 41.1% (33.3/81.1 GiB)] (x5) [Interval=5 sec]
Date : 2019/02/17 5:29:27
OS : Windows 10 Professional [10.0 Build 17763] (x64)

This was with write-back caching active.


-----------------------------------------------------------------------
CrystalDiskMark 6.0.2 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
  • MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
  • KB = 1000 bytes, KiB = 1024 bytes
Sequential Read (Q= 32,T= 1) : 4235.293 MB/s
Sequential Write (Q= 32,T= 1) : 892.008 MB/s
Random Read 4KiB (Q= 8,T= 8) : 1361.053 MB/s [ 332288.3 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 830.354 MB/s [ 202723.1 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 402.775 MB/s [ 98333.7 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 359.665 MB/s [ 87808.8 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 181.844 MB/s [ 44395.5 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 126.438 MB/s [ 30868.7 IOPS]
Test : 1024 MiB [C: 41.1% (33.3/81.1 GiB)] (x5) [Interval=5 sec]
Date : 2019/02/17 11:48:38
OS : Windows 10 Professional [10.0 Build 17763] (x64)

This was without write-back caching enabled.

Note how reads do slightly worse but writes do significantly better with write-back caching enabled, almost doubling the queue depth 1, thread 1 4K writes.
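A quick check of that claim from the two runs above:

```python
# QD1/T1 4K write throughput from the two CrystalDiskMark runs above
cached, uncached = 235.278, 126.438    # MB/s
print(f"write-back caching speedup: {cached / uncached:.2f}x")   # ~1.86x, "almost doubling"
```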

My scores for the Anvil storage benchmark are as follows.

Read 14,039.84
Write 6,890.94
Overall 20,930.78

This is with write caching enabled.

Here are the scores without write caching enabled.

Read 14,070.74
Write 4,989.83
Overall 19,060.57

Please note that these scores are with the 3 Optane Memory drives attached to the PCI-E 3.0 X16_1 slot that goes directly to the CPU, so there is no limit caused by the chipset-connected bus, which is only x4 compared to x16 & would limit you to just over 3,600 MB/s due to overhead. On the CPU-connected slot there is more than enough bandwidth to absorb the overhead, & hence the array is able to perform to the absolute limit of the combined bandwidth of all the drives, which you can see in the sequential scores. This is especially true without write-back caching enabled, which writes directly to the storage media, so there is no fakery here, as you might get with write-back caching enabled in some situations, such as using more than one thread on CrystalDiskMark sequential reads & writes; those can clearly exceed the true capability of the drive with write caching enabled.

Each Optane Memory drive is spec'd to transfer 1,350 MB/s; take that times 3 & you get 4,050 MB/s. Since Intel usually specs their hardware somewhat conservatively, & since we are not saturating the bus even at these speeds, it's reasonable that the drives can slightly exceed those specs, which on CrystalDiskMark they do handily. This is true even on the sequential reads without write caching, which means there is no cache affecting the score, positively or negatively. These are real scores revealing the real limits of these drives combined in RAID 0.
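Spelling out the bandwidth arithmetic (the ~3,600 MB/s figure is my overhead-adjusted estimate for the x4 chipset link):

```python
per_drive = 1350              # MB/s, Intel's rated transfer per module
drives = 3
combined = per_drive * drives               # 4050 MB/s theoretical aggregate
chipset_x4_usable = 3600                    # MB/s, approx. x4 chipset limit after overhead

print(f"combined rating:          {combined} MB/s")
print(f"chipset (x4) ceiling:    ~{chipset_x4_usable} MB/s -> bottleneck")
print(f"CPU-attached x16 ceiling: ~{4 * chipset_x4_usable} MB/s -> plenty of headroom")
```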

I would like to note that, with the improvement in low queue depth 4K random reads, it can beat out the Optane 905P. The 905P still beats mine when you start adding combined queue depths & threads simultaneously. Mine wallops the 905P in sequential reads by 50%!
 

germanium

Here are some tests that I ran at 16 threads on CrystalDiskMark, showing how, with write caching enabled, sequential read & write scores go completely beyond the capability of the drives themselves.

-----------------------------------------------------------------------
CrystalDiskMark 6.0.2 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
  • MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
  • KB = 1000 bytes, KiB = 1024 bytes
Sequential Read (Q= 32,T=16) : 6085.879 MB/s
Sequential Write (Q= 32,T=16) : 1593.874 MB/s
Random Read 4KiB (Q= 8,T=16) : 1779.159 MB/s [ 434365.0 IOPS]
Random Write 4KiB (Q= 8,T=16) : 850.542 MB/s [ 207651.9 IOPS]
Random Read 4KiB (Q= 32,T=16) : 1769.215 MB/s [ 431937.3 IOPS]
Random Write 4KiB (Q= 32,T=16) : 854.307 MB/s [ 208571.0 IOPS]
Random Read 4KiB (Q= 1,T=16) : 1410.077 MB/s [ 344257.1 IOPS]
Random Write 4KiB (Q= 1,T=16) : 835.878 MB/s [ 204071.8 IOPS]
Test : 1024 MiB [C: 41.0% (33.3/81.1 GiB)] (x5) [Interval=5 sec]
Date : 2019/02/18 2:31:08
OS : Windows 10 Professional [10.0 Build 17763] (x64)

Here are the scores without write caching enabled.

-----------------------------------------------------------------------
CrystalDiskMark 6.0.2 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
  • MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
  • KB = 1000 bytes, KiB = 1024 bytes
Sequential Read (Q= 32,T=16) : 4230.752 MB/s
Sequential Write (Q= 32,T=16) : 894.339 MB/s
Random Read 4KiB (Q= 8,T=16) : 1870.433 MB/s [ 456648.7 IOPS]
Random Write 4KiB (Q= 8,T=16) : 846.163 MB/s [ 206582.8 IOPS]
Random Read 4KiB (Q= 32,T=16) : 1762.941 MB/s [ 430405.5 IOPS]
Random Write 4KiB (Q= 32,T=16) : 852.210 MB/s [ 208059.1 IOPS]
Random Read 4KiB (Q= 1,T=16) : 1481.746 MB/s [ 361754.4 IOPS]
Random Write 4KiB (Q= 1,T=16) : 762.634 MB/s [ 186189.9 IOPS]
Test : 1024 MiB [C: 41.0% (33.3/81.1 GiB)] (x5) [Interval=5 sec]
Date : 2019/02/18 2:36:57
OS : Windows 10 Professional [10.0 Build 17763] (x64)

As you can see, without write caching enabled the sequential scores stay within the capabilities of the drives themselves, even when being pumped by 16 threads. Note also that the 4K read scores actually improve without write caching enabled. This is an extremely capable drive setup!

Disabling write caching on a flash-based SSD causes writes to fall to an excruciatingly low level, equivalent to an HDD, with no effect on reads, whereas the Optane drives actually improve on 4K reads under the same conditions. Even writes do not totally fall on their face with the Optane drive setup; performance does take some hit without write caching, but much less so than with flash drives.

A note about some tests that I did on my girlfriend's computer (with her permission, of course), which has a Samsung 960 Pro SSD: writes without write-back caching enabled were barely over 1 MB/s at 4K QD1 thread 1, making the Optane drive's scores roughly 100x faster under the same relative conditions. With write-back caching it scores really quite well on writes, but it still suffers from high latency on reads at 4K QD1 T1, scoring a paltry 32 MB/s compared to Optane RAID 0 scores of 178 to 240 depending on the program testing. That is between almost 6x & almost 8x faster.
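The ratios behind those figures, computed straight from the numbers quoted above:

```python
# 4K QD1 T1 comparison from the numbers above
flash_write = 1.0                     # MB/s, 960 Pro write, caching off ("barely over 1")
optane_write = 126.438                # MB/s, Optane RAID 0 write, caching off
flash_read = 32.0                     # MB/s, 960 Pro read
optane_reads = (178.0, 240.0)         # MB/s, Optane RAID 0 read range across programs

print(f"writes: ~{optane_write / flash_write:.0f}x faster")   # roughly 100x or more
for r in optane_reads:
    print(f"reads:  {r / flash_read:.1f}x faster")            # ~5.6x to ~7.5x
```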
 

germanium

Some minor issues that I had with the ASRock Ultra Quad M.2 add-in adapter:

1. The heat sink pads did not come in contact with most of the surface of the Optane Memory chips. The Optane memory controller chip & power supply controllers sit even lower & had no contact whatsoever. So not only did the TIM pads not provide the needed cooling for such a high-performance setup, they restricted airflow to the chips that needed it the most.

2. The fan noise, though not too loud, had a high-pitched whine to it that, though barely audible, was still a source of annoyance. This was with the fan running at half speed.

3. The utility program to control the fans does not work. This is true even on most of ASRock's own X299 motherboards it was designed for.

4. There is no drive activity light functionality. This is not the fault of the ASRock card, however, but of the Optane Memory module, as there is no connection to the traces that would allow it.

Because of these issues I had some thermal throttling of the Optane controller chip.

How I addressed these issues:

I tried running with the cover off but still had throttling. I then took the cover off, removed the heavy chunk of metal holding the TIM heat-transfer strips, & put the cover back on. This reduced the overall restriction to airflow, &, much like a vacuum cleaner with less restriction, the fan took on a heavier load, causing it to slow down some. Now all I can hear is a slight air-rushing sound, no more high-pitched whine. This improved the airflow around the chips & eliminated the thermal throttling. The heavy metal pad was held on by a huge, even stickier thermal pad, which made it somewhat difficult to remove, but I was able to get a screwdriver under it & pry it up enough to eventually get my fingers under it, at which point it came off fairly easily.

Reports of the cover bulging when M.2 drives are installed are utterly false in my experience, especially given that it wasn't even touching most of the chips. Even if they were making full contact, there is no flex in the heavy metal plate the thermal pads were attached to, so no bulging could have occurred. Correction: however unlikely, if the chips on the card were significantly thicker, they could force the metal tangs to bend, causing the cover not to sit flush against the add-in card as it should. These metal tangs that the screws thread into are not very strong & could bend if the chips were sufficiently thick, but I see that as unlikely if the chips were designed to the M.2 standard's specs. Unfortunately the Optane controller chip was not, but it went to the opposite end of the thickness spectrum, being too thin.
 


germanium

All in all, when connected to the CPU, 4K queue depth 1, thread 1 performance is definitely class leading, on the read side at least, even compared to the Optane 905P when the 905P is connected to the chipset. I haven't seen any tests yet of a single 905P connected to the CPU. It might beat this setup, but probably not by much, if at all. From tests I have seen, mine does beat 4 905P's connected to the CPU through VROC at 4K QD1 T1 performance by a wide margin. The 4 905P's win everywhere else, though, by a very wide margin. This says nothing about how a single 905P would perform on CPU-connected PCI-E lanes.
 

germanium

Found out that if I disable hyperthreading I get major improvements in 4K random reads at all queue depths & thread counts in CrystalDiskMark.

-----------------------------------------------------------------------
CrystalDiskMark 6.0.2 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
  • MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
  • KB = 1000 bytes, KiB = 1024 bytes
Sequential Read (Q= 32,T= 1) : 4024.741 MB/s
Sequential Write (Q= 32,T= 1) : 896.324 MB/s
Random Read 4KiB (Q= 8,T= 8) : 2278.841 MB/s [ 556357.7 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 841.099 MB/s [ 205346.4 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 476.613 MB/s [ 116360.6 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 419.810 MB/s [ 102492.7 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 190.184 MB/s [ 46431.6 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 248.821 MB/s [ 60747.3 IOPS]
Test : 1024 MiB [C: 42.6% (34.6/81.1 GiB)] (x5) [Interval=5 sec]
Date : 2019/03/06 19:28:59
OS : Windows 10 Professional [10.0 Build 17763] (x64)


This next run is without write-back caching; the one above was with write-back caching.

-----------------------------------------------------------------------
CrystalDiskMark 6.0.2 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
  • MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
  • KB = 1000 bytes, KiB = 1024 bytes
Sequential Read (Q= 32,T= 1) : 3964.629 MB/s
Sequential Write (Q= 32,T= 1) : 895.433 MB/s
Random Read 4KiB (Q= 8,T= 8) : 2298.167 MB/s [ 561075.9 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 832.123 MB/s [ 203155.0 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 494.378 MB/s [ 120697.8 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 432.189 MB/s [ 105514.9 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 190.319 MB/s [ 46464.6 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 131.205 MB/s [ 32032.5 IOPS]
Test : 1024 MiB [C: 42.6% (34.6/81.1 GiB)] (x5) [Interval=5 sec]
Date : 2019/03/06 19:49:34
OS : Windows 10 Professional [10.0 Build 17763] (x64)

Note that the slight improvement in reads (except sequential) without write-back caching still remains. Without hyperthreading, sequential reads suffer slightly. That part does not entirely make sense, though, as the sequential test only uses 1 thread, as do the 4K QD1 & QD32 tests, which actually improve.

Note also that the IOPS have now exceeded the rating for the Optane 900P, almost reaching the 905P rating. You rarely see any device reach or exceed its IOPS rating; it typically takes a very high queue depth & thread count to do so, higher than those used here.
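CrystalDiskMark's IOPS figures are just the throughput divided by the 4 KiB transfer size; here's the conversion it implies:

```python
# CrystalDiskMark reports MB/s as 1,000,000 bytes/s and 4K blocks as 4096 bytes
def iops(mb_per_s: float, block_bytes: int = 4096) -> float:
    return mb_per_s * 1_000_000 / block_bytes

print(f"{iops(2278.841):,.1f} IOPS")   # 556,357.7 -> matches the Q8 T8 read line above
```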

Found a test of the 905P, & with these settings my Optane RAID 0 beats the 905P in every area except 4K QD1. I'm not sure if that test was done before or after the Spectre fixes; if before, then the 905P's performance would drop some after the fixes. Test found on the HotHardware site. https://hothardware.com/reviews/intel-optane-ssd-905p-review?page=6
 
While you have that build working, you should test whether you can deploy a ton of VMs using a huge amount of swap on those drives. A lot of VMs are really memory hungry but not really resource intensive otherwise. I had been considering trying a 900P for this. I think DigitalOcean might even have a line of VPS's that use Optane memory.
 

germanium

I have no interest in running VMs at the moment. Turning on virtualization in the BIOS had a very strong negative impact, particularly on multithreaded IO according to the Anvil storage benchmark: 4K 16-thread dropped from over 1,700 to just over 1,000. Huge drop! The 1,700 was from before I disabled hyperthreading, which had a positive effect on threaded IO.

Interestingly, while CrystalDiskMark shows a huge improvement, Anvil does not at all. Weird.
 

germanium

Finally cracked 21,000 on the Anvil storage benchmark.


Did some more testing with CrystalDiskMark: top with write caching enabled, bottom with it disabled.

-----------------------------------------------------------------------
CrystalDiskMark 6.0.2 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
  • MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
  • KB = 1000 bytes, KiB = 1024 bytes
Sequential Read (Q= 32,T= 1) : 3937.344 MB/s
Sequential Write (Q= 32,T= 1) : 894.412 MB/s
Random Read 4KiB (Q= 8,T= 8) : 2299.167 MB/s [ 561320.1 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 845.087 MB/s [ 206320.1 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 484.475 MB/s [ 118280.0 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 422.177 MB/s [ 103070.6 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 203.110 MB/s [ 49587.4 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 249.304 MB/s [ 60865.2 IOPS]
Test : 1024 MiB [C: 43.9% (35.6/81.1 GiB)] (x5) [Interval=5 sec]
Date : 2019/03/19 1:20:06
OS : Windows 10 Professional [10.0 Build 17763] (x64)

-----------------------------------------------------------------------
CrystalDiskMark 6.0.2 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
  • MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
  • KB = 1000 bytes, KiB = 1024 bytes
Sequential Read (Q= 32,T= 1) : 4220.341 MB/s
Sequential Write (Q= 32,T= 1) : 894.483 MB/s
Random Read 4KiB (Q= 8,T= 8) : 2436.218 MB/s [ 594779.8 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 831.273 MB/s [ 202947.5 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 480.166 MB/s [ 117228.0 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 415.221 MB/s [ 101372.3 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 197.728 MB/s [ 48273.4 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 131.171 MB/s [ 32024.2 IOPS]
Test : 1024 MiB [C: 43.9% (35.6/81.1 GiB)] (x5) [Interval=5 sec]
Date : 2019/03/19 1:26:36
OS : Windows 10 Professional [10.0 Build 17763] (x64)

Peak read IOPS exceed the Optane 905P's rated IOPS on the run without write caching enabled.

Amazing read performance when you disable hyperthreading & write caching. Running without write caching indicates the true write capability of the controller & media, as well as the true read performance overall. With write caching, data is reported as written before it actually reaches the media. However, some read tests are negatively impacted by write caching, which adds slight overhead.
 

germanium

Upgraded Tuesday to 3 Optane 800P 118 GB drives in RAID 0. Read performance slightly improved; write performance improved up to 2x or more. It retains & adds to all the good characteristics of the Optane Memory RAID drive, plus I now have lots more storage on my C drive, about 4 times as much in fact, but I paid a huge price money-wise: almost $700. Write performance is on another level, though, & I get lots more storage to boot. It should easily match or exceed the 900P in most categories except total capacity, & even that's not far off for the price compared to the 900P M.2 drive, which I am sure this setup will outperform in most cases.
 

germanium

Had a weird occurrence today: sequential performance dropped substantially, by about 500+ MB/s. I tried almost everything to get it back. I had just installed a 3-drive RAID 0 setup attached to the PCH chipset & copied all my data from the D drive to it, about 1.5 TB, then did some comparison tests & noticed the huge drop on all sequential accesses.

I went into the BIOS & tried changing several settings to no avail, so I changed them back. I finally disabled the Hyper x16 card feature, rebooted, & immediately turned it back on, & that fixed it. The only problem is that I had already deleted the 3-drive RAID setup attached to the PCH chipset, so I had to recopy all that data back after re-forming the RAID drive, which I'm doing right now. At least I got the Optane RAID 0 performance back to where it was.

When the performance dropped, it was as if the array were limited to a PCI-E x4 connection rather than having all 6 PCI-E lanes at work. Now it is all back.

The drives I am using for the PCI-E RAID attached to the PCH chipset are Samsung 970 EVO Plus drives. These are quite fast in their own right but have substantially higher latency than my Optane RAID setup attached to the CPU, which will be staying where it is. Even though the Samsung drives are faster on a sequential basis, what the operating system needs is low latency, so the Optane RAID setup stays on the PCI-E x16 attached to the CPU. That means the speed of the Samsung drives will be limited to the bandwidth available on the PCH bus, which is only x4 wide. They are still very fast there, & this will not affect performance for their intended purpose.
 

germanium

Update on the cause of the performance drop: it was an overclock that was too high on the memory. I had it overclocked from 2400 to 3000 with the voltage at 1.350 V & a 100:100 bus ratio. I have since reduced the overclock to 2933 at 1.200 V with the bus ratio set to 100:133. This overclock works very well & gives a nice overall performance increase in all areas; not huge, mind you, but definitely there. Apparently this memory does not like voltages over the stock 1.200 V. It is very stable at 2933 with 1.2 V.
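The 100:133 ratio is what makes 2933 reachable at all; a sketch of the strap math (the multiplier values are inferred from the resulting rates, not read from the BIOS):

```python
# DDR4 rate (MT/s) = BCLK * strap * multiplier * 2 (double data rate)
bclk = 100.0
settings = [(1.0, 15, "100:100 strap"), (4 / 3, 11, "100:133 strap")]
for strap, mult, label in settings:
    print(f"{label}: {bclk * strap * mult * 2:.0f} MT/s")
# 100:100, mult 15 -> 3000 MT/s (the unstable setting)
# 100:133, mult 11 -> 2933 MT/s (the stable one)
```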
 

germanium

Concerning the fan noise of the ASRock Ultra Quad M.2 card: I was able to reduce it somewhat by soldering 4 one-ohm resistors in series into the power line going to the fan. This reduced the noise to a more tolerable level & still pumps quite a lot of air over the SSD modules, avoiding any issues with overheating or throttling.
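The voltage the fan loses to those resistors follows from Ohm's law; a rough first-order sketch (the 0.15 A fan current is an assumed typical figure, not measured):

```python
# First-order estimate only: the real current falls as the fan slows down.
resistance = 4 * 1.0          # ohms, four 1-ohm resistors in series
fan_current = 0.15            # amps, assumed typical small-fan draw (not measured)
supply = 12.0                 # volts

drop = fan_current * resistance               # V = I * R
print(f"fan sees ~{supply - drop:.1f} V (a {drop:.1f} V drop)")   # ~11.4 V
```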

All told, these drives draw less than 12 watts in total.