[SOLVED] Bogus SMART Errors?

Jun 27, 2020
So I have 3 drives (Toshiba X300 8TB) in a RAIDZ1 array (the RAID 5-style ZFS layout).
Two of the disks are relatively new and have only been operating for ~500 hours.
The third drive is older, with 9,250 hours.

The two new drives are throwing SMART errors for attribute 0x07 (Seek_Error_Rate).
Sometimes during heavy server operations, the VALUE (on both) will slowly drop to 001, and then while idling (say, overnight) it will climb back up to 100.
These two drives keep flipping between FAILING_NOW and In_the_past for the WHEN_FAILED flag.

The older drive has gotten down to a VALUE of 54 and hasn't "failed".

Now, the part I find confusing is that the RAW_VALUE on all 3 drives still says 0...
As in, it hasn't recorded any seek errors?

Can anyone explain this?
How can it have a bad seek error rate when there are no recorded seek errors?
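For reference, here's roughly how I'm pulling these numbers: a minimal Python sketch that shells out to smartctl -A and grabs attribute 7's row. It assumes smartmontools is installed, and /dev/sda is just a placeholder for one of the Toshibas.

import subprocess

def seek_error_rate(device: str) -> dict:
    # Run smartctl's attribute dump for the given drive.
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with the attribute ID; 7 = Seek_Error_Rate.
        if fields and fields[0] == "7":
            return {
                "name": fields[1],
                "value": int(fields[3]),   # normalized VALUE (what's dropping to 001)
                "worst": int(fields[4]),   # worst VALUE ever recorded
                "raw": fields[9],          # RAW_VALUE (reading 0 on all three drives)
            }
    raise ValueError(f"attribute 7 not reported by {device}")

print(seek_error_rate("/dev/sda"))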

Also, the 9,000+ hour drive has been operating perfectly for 3 years, so I find it strange that all 3 drives are experiencing this simultaneously.

Could this be attributed to the multiple VMs all mounting the NFS pool simultaneously and fighting for read/write access?
 
Solution
Jun 27, 2020
A VALUE of 001 is fishy indeed.
A value of 100 = 1 seek error in 10 billion seeks; 90 = 1 error every 1 billion; 80 = 1 in 100 million; 70 = 1 in 10 million; ... 40 = 1 in 10 thousand; 30 = 1 in 1 thousand; etc.
Seek error rate has only a performance impact; there's no data-integrity or drive-going-bad issue. The drive just didn't seek to the right track on the first try and had to take one (or two) more spin(s) to find it.
The seek error rate gets recomputed at the 1-million-seek mark.
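To make that scale concrete, here's a back-of-envelope Python sketch of the decoding described above. The formula is just my reading of that scale (each 10-point drop in VALUE = a 10x worse error rate), not vendor-documented math.

import math

def normalized_value(errors: int, seeks: int) -> int:
    # No errors yet: the attribute sits at its best value.
    if errors == 0:
        return 100
    rate = errors / seeks
    # Each factor-of-10 increase in the error rate costs 10 points,
    # so VALUE = -10 * log10(rate), clamped to the attribute's range.
    return max(1, min(100, round(-10 * math.log10(rate))))

print(normalized_value(1, 10_000_000_000))  # 1 error in 10 billion seeks -> 100
print(normalized_value(1, 10_000))          # 1 error in 10 thousand seeks -> 40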

It's just weird that all 3 drives are doing it simultaneously (2 new, and 1 that's been functioning fine for 3 years).
And especially since the RAW_VALUE reports 0 on all of them, which I think indicates there hasn't been a single seek error, lol.

But I am glad to have it reinforced that there are no data-integrity issues.