Question: SSD Linux Disks benchmark on boot drive with data?

FreeBee101

Reputable
Jul 20, 2020
Is it safe to run a benchmark on an SSD that is your boot drive, or will it corrupt data? I have a program on Linux called Disks. It shows most of the data for drives and has a benchmark tool. If I run it on the drive I'm booting from, can or will it destroy things, or does it use empty drive space? And can this hurt SSDs? I have an 850 Pro 256 GB drive.


Screenshot: https://i.imgur.com/QKHTy6a.png


It appears to just be a read test, but I'm still not sure. I'm assuming I don't want to hit the write button.
 

Lutfij

Titan
Moderator
You should use Samsung Magician's built-in benchmarking tool rather than the utility you're looking at, unless Samsung's app doesn't work on your platform. You should be fine as is, but for the sake of reference, it's always a good idea to back up your critical content in case something goes sideways.
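
If Magician isn't an option on Linux, a purely read-only check from the command line is another low-risk route. The device name below is an assumption, so point it at whatever the 850 Pro actually shows up as:

Code:
# Read-only throughput checks; neither of these writes to the drive.
sudo hdparm -t /dev/sda    # timed buffered (sequential) reads
sudo hdparm -T /dev/sda    # timed cached reads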
 

FreeBee101

Reputable
Jul 20, 2020
I ran the test. It was only 10 MB, and it's read-only by the looks of it. I'm assuming it doesn't put the test data somewhere that holds existing info. I think other benchmark tools on Linux create a file first. Hopefully it doesn't land in a critical spot. So far it hasn't done any harm.
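
For anyone wanting to double-check that a run like this really stayed read-only, comparing the drive's lifetime write counter before and after should show it. The attribute name below is what smartmontools typically reports for Samsung drives, so treat both it and the device name as assumptions:

Code:
# Snapshot the write counter, run the benchmark, then check it again.
# If the value doesn't move, nothing was written. /dev/sda assumed to be the 850 Pro.
sudo smartctl -A /dev/sda | grep -i total_lbas_written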
 

FreeBee101

Reputable
Jul 20, 2020
Does anyone know the benefits of RAID 100 over RAID 10? I have the minimum six disks. Would it provide, or could it be set up to provide, benefits over RAID 10? (There's a rough sketch of the RAID 100 layering after the output below.)

This is my current array. I'm not sure which RAID level it is anymore. I think it was set up as a straightforward RAID 10; I'm not sure what that means technically.

Code:
/dev/md10:
           Version : 1.2
     Creation Time : Thu Dec 18 09:28:15 2014
        Raid Level : raid10
        Array Size : 732199296 (698.28 GiB 749.77 GB)
     Used Dev Size : 244066432 (232.76 GiB 249.92 GB)
      Raid Devices : 6
     Total Devices : 6
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Jul 22 23:01:04 2020
             State : active
    Active Devices : 6
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 128K

Consistency Policy : bitmap

              Name : ****************
              UUID : ****************
            Events : 8795306

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync set-A   /dev/sda
       1       8       64        1      active sync set-B   /dev/sde
       2       8       32        2      active sync set-A   /dev/sdc
       7       8       16        3      active sync set-B   /dev/sdb
       4       8       80        4      active sync set-A   /dev/sdf
       6       8       48        5      active sync set-B   /dev/sdd
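
For reference, RAID 100 is usually described as a RAID 0 stripe laid over two (or more) RAID 10 arrays. A rough mdadm sketch of that layering with six disks is below; the device names are hypothetical and creating these arrays would wipe them, so it's only meant to show the structure, not something to run against the live array:

Code:
# Two 3-disk md RAID 10 arrays (md raid10 allows odd member counts with near=2)...
mdadm --create /dev/md1 --level=10 --raid-devices=3 --layout=n2 /dev/sda /dev/sdb /dev/sdc
mdadm --create /dev/md2 --level=10 --raid-devices=3 --layout=n2 /dev/sdd /dev/sde /dev/sdf
# ...striped together with RAID 0 on top.
mdadm --create /dev/md100 --level=0 --raid-devices=2 /dev/md1 /dev/md2

Whether that actually beats a single six-disk RAID 10 probably depends on the workload, since a single md raid10 already stripes reads across all its members.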

I also found something on RAID 14, but I couldn't find any details on how it works, or I wasn't understanding it. Still reading, actually. Trying to get my brain to absorb the info.

http://www.sabi.co.uk/blog/13-two.html#131213

Something also mentioned a RAID 10 that can take 3 failures and still keep running. I wonder if that is the one I'm using.

BTW, I have the RAID disks now, but I'm waiting for the backup drive to arrive so I can move the data off the RAID to start testing. It will be here tomorrow.

I think this RAID has had up to 3 or more disks "fail," but they resynced afterwards. Might have been a software issue at the time or something making them drop out of the array. The disks still worked and they all synced up afterwards. This RAID has a weird, long history.

Edit again: I've kind of figured out what RAID 4 and 14 are. Does anyone know the difference in performance between RAID 10 and its variants and RAID 100? I have the minimum 6 disks for it. (Not sure if it's 6 disks now; I'm reading conflicting info.)

Also, RAID 10 on mdadm can apparently combine the near and offset layouts. I wonder if that helps performance.

https://en.wikipedia.org/wiki/Non-standard_RAID_levels#LINUX-MD-RAID-10

It is also possible to combine "near" and "offset" layouts (but not "far" and "offset").[12]

https://linux.die.net/man/8/mdadm
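
For what it's worth, the layout is picked at creation time with mdadm's --layout option; n2, f2, and o2 are the documented near, far, and offset forms (the trailing number is the copy count). A hypothetical example, not the existing /dev/md10:

Code:
# Create a 6-disk RAID 10 with the "offset" layout, 2 copies of each block.
mdadm --create /dev/md0 --level=10 --raid-devices=6 --layout=o2 /dev/sd[b-g]
# How a combined near+offset layout would be spelled on the command line
# isn't obvious from the man page, so that part is left as an open question.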

Unless I missed something, as usual the crappy Linux documentation says endless words without saying anything useful. I couldn't find a single detail on it. Gotta love people without proper technical writing abilities, particularly when they think they should be working on technical subjects and can't see the problem with that.

Even the notation doesn't go into it: http://www.ilsistemista.net/index.p...ar-and-offset-benchmark-analysis.html?start=1

I'm looking at far and offset because that notation link says they get the full read speed. I want full read speed because it would match the max performance of my SSD, meaning I can transfer up to the next layer at full speed. This would allow potentially max-speed transfers when I want to move things from the RAID to my SSD, which could be nice.
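
To see whether the array's sequential reads actually keep up with the SSD, a quick read-only comparison is easy enough; the device names are assumptions again:

Code:
# Non-destructive sequential-read checks (reads only, nothing is written).
sudo hdparm -t /dev/md10                                         # the array
sudo dd if=/dev/md10 of=/dev/null bs=1M count=4096 iflag=direct status=progress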

https://serverfault.com/questions/3...lure-tolerance-of-md-raid10-with-n2-f2-layout

This says you can combine near and far...
 