Hello,
Introduction/edit: I'm not sure why I wrote this in a way that misleads people into believing they need to give me advice about some recurring corruption problem. That is not what my question is about. The question of this topic, as stated further down in the text, is:
Considering the context described in this post: does using a larger cluster size increase the risk of larger parts of files getting corrupted?
(And also, please read the whole thread before replying...)
I have very large archive drives with terabytes of data. Everything is in .rar files.
For information and context, these are archives, so my priorities are:
- Longevity/integrity/preservation (critical)
- Testability (critical)
- Cost (money)
- Performance (just nice to have, but tasks may take very long, so not to be completely ignored)
I think I observed a performance gain on drives using larger cluster sizes (most of my drives have the default 4K cluster size, but some use a 2M cluster size).
Actually, I think the performance gain was when copying files rather than when testing them.
Sometimes, though very rarely, a file appears corrupted even though the disk appears OK. (This is not the purpose of the topic; I get very few corrupted files, so I'm not really concerned about it.)
Currently, I assume that since I mostly have very large files, I can benefit from a large cluster size, but there may be a lot going on behind the scenes that I don't really know about.
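As a rough illustration of why large files make cluster overhead a non-issue, here is a back-of-the-envelope sketch (the 4K and 2M cluster sizes are the ones mentioned above; the 5 GiB file size is a hypothetical example):

```python
import math

def slack_waste(file_size: int, cluster_size: int) -> int:
    """Bytes wasted in the last, partially filled cluster of one file."""
    clusters = math.ceil(file_size / cluster_size)
    return clusters * cluster_size - file_size

GiB = 1024 ** 3
for cluster in (4 * 1024, 2 * 1024 * 1024):  # 4K vs 2M clusters
    # Worst case, a file wastes just under one cluster of slack space,
    # so for multi-GiB archives the relative waste is negligible either way.
    waste = slack_waste(5 * GiB + 1, cluster)
    print(f"cluster {cluster:>9} B -> slack waste {waste} B")
```

With a handful of huge files per drive, even a 2M cluster wastes at most about 2 MB per file, which is why large clusters are usually considered harmless for this kind of workload.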
But here is something I'm wondering:
When using a larger cluster size, is there a risk that more bytes get corrupted at the same time?
Because WinRAR has a "recovery record" (equivalent to parity files/data) which helps repair a file when it is damaged; but it can only repair a file up to a certain amount of corrupted data, depending on the size allotted to the recovery record. If too much data is corrupted, the repair won't work.
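As an aside, that capacity limit can be illustrated with the simplest possible parity scheme, a single XOR block (WinRAR's recovery record reportedly uses Reed-Solomon codes, which are far more capable, but the principle is the same: a fixed amount of parity can only repair a bounded amount of damaged data):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]  # three data blocks
parity = xor_blocks(data)           # one parity block

# One known-bad block is recoverable: XOR the survivors with the parity.
lost_index = 1
survivors = [b for i, b in enumerate(data) if i != lost_index]
recovered = xor_blocks(survivors + [parity])
assert recovered == data[lost_index]

# With TWO damaged blocks, one parity block is not enough: XORing the
# survivors with the parity yields only the XOR of the two lost blocks,
# which cannot be split back into its parts.
```

One parity block repairs one lost block; more damage needs proportionally more parity, which is exactly why a larger recovery record tolerates more corruption.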
So when using a large cluster size, do I increase the risk of having larger parts of files getting corrupted? (In case of a "bad cluster" or similar.)
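To put rough numbers on the worry: a single bad cluster destroys at most one cluster's worth of data, so the cluster size bounds the damage per bad spot. A crude sketch (the 10 GiB archive and 3% recovery-record size are hypothetical examples; real repair capacity also depends on how the recovery record is structured internally):

```python
GiB = 1024 ** 3
archive_size = 10 * GiB
recovery_pct = 3  # hypothetical 3% recovery record
repair_budget = archive_size * recovery_pct // 100  # bytes of parity, roughly

for cluster in (4 * 1024, 2 * 1024 * 1024):  # 4K vs 2M clusters
    # Roughly how many whole bad clusters fit inside the repair budget.
    tolerable = repair_budget // cluster
    print(f"{cluster:>9} B clusters: ~{tolerable} bad clusters within budget")
```

The same parity budget covers far fewer 2M clusters than 4K clusters, but each bad spot on the platter still only takes out one cluster, so what changes is the granularity of the loss, not (by itself) the probability of loss.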
Thank you