Is defrag optimized?

labdog
Distinguished · Feb 17, 2001
Hi,

I defrag a 40GB HD each week during the night. I have to run and rerun it 5, 6, or even 7 times to get a really completely defragmented HD, which takes around 24 to 36 hours of processing. I'm wondering if it's really optimized?

Moreover, <i>defrag</i> tends to stop (after around 2 or 3 hours of processing) before completing its work, and reports that my HD is still 25% to 30% fragmented, so I have to rerun it again and again... What's the matter? A <i>defrag</i> bug or a <i>Lab.</i> bug? :)

I should point out that my OS is XP Pro with NTFS, and that background tasks and services are stopped beforehand.

<i>Note: I regularly create/delete big files (between 1GB and 9GB) for video capturing.</i>

Comments? Opinions?

Thanks.
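For what it's worth, those reruns can be scripted. Below is a rough, untested sketch that loops XP's command-line defragmenter until an analysis pass stops recommending a defrag. The -a (analyze only) and -f (force) switches do exist on XP's defrag.exe, but the exact report wording searched for here is an assumption, so verify it on your own machine first.

import subprocess

VOLUME = "c:"
MAX_PASSES = 7   # labdog reports needing 5-7 passes

def still_fragmented(volume):
    # "defrag <volume> -a" analyzes without defragmenting; the report
    # phrase matched below is an assumption about defrag.exe's output.
    report = subprocess.run(["defrag", volume, "-a"],
                            capture_output=True, text=True).stdout
    return "You should defragment this volume" in report

for i in range(1, MAX_PASSES + 1):
    if not still_fragmented(VOLUME):
        print(f"Analysis is satisfied after {i - 1} pass(es).")
        break
    print(f"Defrag pass {i}...")
    subprocess.run(["defrag", VOLUME, "-f"])   # -f forces a run even with low free space
else:
    print("Still fragmented after the maximum number of passes.")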


if you know you don't know, the way could be more easy ...
 
I usually defrag my drive once and then it still has fragmented files, so I reboot and do it again and it seems to defragment everything.

I used that Vopt defrag program on my other system and it didn't seem to have the same problem, so it could just be the Windows defragmenter software?

People and hard drives are like bandwagon fans and sports!
 
I have the same defrag issues, labdog... you aren't alone!

In WinXP with NTFS, if my drive gets decently fragmented (which isn't hard for me to do), it will start defragging and then just stop at a certain point... like it gave up...

I have gotten a drive so badly fragmented that I could have run defrag continuously forever and I don't think it would have been able to defrag it with the method/algorithm it uses. (I had to copy all of the files off the partition to another partition and then copy them back to defragment the files)

There are other defrag programs out there you can try... but I don't know if they will perform much better?
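That copy-off-and-back trick can be sketched in a few lines. This is a purely hypothetical illustration (the paths are made up, and it deletes data, so you'd want backups first), not a recommended tool:

# Brute-force "copy off and back" approach described above.
import shutil

src = r"C:\capture"          # badly fragmented folder (hypothetical)
stash = r"D:\capture_stash"  # spare partition with enough free space

shutil.copytree(src, stash)  # 1. copy everything off the fragmented partition
shutil.rmtree(src)           # 2. delete the fragmented originals
shutil.copytree(stash, src)  # 3. copy back; files tend to land contiguously in the freed space
shutil.rmtree(stash)         # 4. remove the temporary stash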

<b>More salt than just a grain you will need with posts of mine. - Yoda©®</b>
 
I'm running W2k and I use Diskeeper 7.0, which is fast and does a good job. It even does a better job with its boot-time defragmenter, which runs chkdsk first, then defragments directories, page files, and the MFT. I think it is much better than the default defragmenter in Windows.

<b>Cast your vote with your $, shed your pride with your opinion.</b>
 
LabDog, I also use Diskeeper 7.0 and am happy with it. It usually requires only one pass.
It is from Executive Software, and the lite version is the one that comes with Win2k
and WinXP. It works faster on NTFS than any other defragmenter I have tried.


:smile: <b>You get what you pay for...all advice here is free.</b> :smile:
 
I use Diskeeper 5.0 on 98SE/FAT32 (one 80GB drive and two 40GB drives = 160GB), and Diskeeper 7.0 on 2k/NTFS on all my servers, up to 320GB... very fast...


Join THG UD TEAM WE NEED YOUR HELP NOW
 
A few things I've found can decrease fragmentation.

Set a minimum size on your swapfile. This prevents minor resizing.

Don't use the "check drives for errors" option. Do a scandisk instead, before you defrag.

Don't use the "programs start faster" option. Although this may speed up one or two often-used apps, it increases fragmentation.

Always do a disk cleanup and delete any temp files in the TEMP folder. Don't trust Disk Cleanup to get all the temp files.

Partitioning your drives into smaller partitions with PT Magic changes the size of the clusters on the partitions, which can improve overall storage of files and minimise cluster wastage (see the sketch below). Partition size magic numbers are:

1MB, 2MB, 4MB, 8MB etc.
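To put a number on the cluster-wastage point, here is a small illustrative calculation with made-up file sizes; each file wastes whatever is left of its final, partly-filled cluster, so roughly half a cluster per file on average:

# Illustrative only: made-up file sizes, in bytes.
def slack_bytes(file_sizes, cluster_size):
    # A file's last cluster is only partly used; (-size) % cluster_size
    # is exactly the number of padding bytes left in that final cluster.
    return sum((-size) % cluster_size for size in file_sizes)

file_sizes = [1_234, 56_789, 4_096, 3_500_000]   # hypothetical files
for kib in (4, 8, 16, 32):
    cluster = kib * 1024
    print(f"{kib:>2}K clusters: {slack_bytes(file_sizes, cluster):,} bytes of slack")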

<b>~ Whew! Finished...Now all I need is a Cyrix badge ~</b> :wink:
 
In fact, for cluster size, I think it is:

too large with FAT32
too small with NTFS

I think a 16K size is a good average to fit both small file sizes and huge file sizes: not a lot of wasted HD space, but also fewer fragmented files.

What do you think?
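A quick back-of-the-envelope sketch of that trade-off (the file sizes here are hypothetical): larger clusters mean far fewer clusters per big capture file, at the cost of more slack on each small file.

def clusters(size, cluster):        # clusters needed to hold the file
    return -(-size // cluster)      # ceiling division

def slack(size, cluster):           # unused bytes in the final cluster
    return clusters(size, cluster) * cluster - size

big = 9 * 1024**3    # a 9GB capture file, like labdog's
small = 1_000        # a small 1KB file
for kib in (4, 16, 32):
    c = kib * 1024
    print(f"{kib:>2}K: {clusters(big, c):>9,} clusters for the 9GB file, "
          f"{slack(small, c):>6,} bytes wasted on the 1KB file")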


if <b>you know</b> you don't know, the way could be more easy ...
 
I'll just correct my last post. I stated 2MB, 4MB etc. It's gigabytes!!! Not megabytes. :smile: You all knew what I meant.

I tend to let PT Magic assign the cluster size and leave it at that. If I have loads of free space on a partition which I don't intend to add much data to, I resize it.

PT Magic sets 4K clusters on partitions up to 8205MB (8.2GB). I have my C: set at 5.6GB (I merged two partitions to get this, and haven't gotten round to dealing with it). 2.7GB used, 67MB of cluster waste. Around 3%, which isn't too bad.

Not too sure about the speed issues with cluster size. I find smaller partitions = faster drive operation.
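Checking the arithmetic above: 67MB of slack on 2.7GB used comes out around 2.4%, so "around 3%" holds up. For reference, the commonly cited FAT32 default cluster sizes by partition size (these defaults are my addition, not from the post) line up with PT Magic's 4K-up-to-8.2GB behaviour:

# Quick check of the waste figure quoted above.
used_mb = 2.7 * 1024          # ~2.7GB in use
waste_mb = 67                 # reported cluster slack
print(f"slack: {waste_mb / used_mb:.1%} of used space")   # -> 2.4%

# Commonly cited FAT32 default cluster sizes (assumed, not from the post).
fat32_defaults = [
    ("256MB-8GB",  "4K"),
    ("8GB-16GB",   "8K"),
    ("16GB-32GB",  "16K"),
    ("over 32GB",  "32K"),
]
for size_range, cluster in fat32_defaults:
    print(f"{size_range:>10}: {cluster} clusters by default")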

<b><font color=blue>~ Whew! Finished...Now all I need is a Cyrix badge ~ </font color=blue> :wink: </b>
 
I think a larger cluster size means fewer clusters per file, so it's faster.

This could partly explain why FAT32 is rather faster than NTFS (NTFS features aside).


if <b>you know</b> you don't know, the way could be more easy ...
 
In reply to:
labdog

labdog = Dudde. lol.

"Thanks to visit this World Wide Website"... wtf / roflmao / get a job

Owl

Join THG UD TEAM WE NEED YOUR HELP NOW
 
Sorry if you didn't understand the summary. lol

But I can't do anything for you. :)

Lab-

if <b>you know</b> you don't know, the way could be more easy ...