Questions about short stroking

If I bought a Samsung F1 HD753LJ, what would short stroking look like? I've read that it has 334GB platters. Would I end up with ~70GB or what? What kind of performance would this give? What type of applications does short stroking shine at? Is it mainly for OS and programs?
 
It will reduce latency / access time / seek time by placing files closer together. The beginning of your hard drive is very fast, but the end delivers only about half the performance. So making one partition of 20% of the disk's capacity would give you these benefits.
 
"Short stroking" doesn't refer to using just one platter, it refers to using the outermost tracks of ALL the platters.

There are some posts about how to short-stroke the drive by modifying the drive's firmware - but that's totally unnecessary. All you need to do is to create a partition that's smaller than the whole disk. The first partition you create will be located on the outermost tracks, and the smaller you make the partition, the better performance you'll get. It's better because (a) you're using the outermost tracks, which have higher transfer rates, and (b) by putting everything in a small partition you're preventing the heads from having to move across very many tracks to access data.

So what you want to do is to create a partition that's just large enough to hold what you want and not very much larger. But of course there's a trade-off between getting the partition as small as possible, and having some extra space available just in case you need to install more programs or store more data.

Short stroking will benefit most types of programs, but it won't work miracles. It doesn't reduce the disk's rotational latency, which is usually the biggest portion of access time. You need to go to a disk that spins faster (e.g., a VelociRaptor) for significant improvements, or to an SSD, which has no mechanical delay, if you really want to rip.
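To put rough numbers on that, here's a back-of-the-envelope sketch. The 8.9 ms full-stroke seek figure and the assumption that average seek shrinks linearly with the span in use are illustrative assumptions, not from any spec:

```python
# A crude model of why short stroking helps only partly: the seek component
# shrinks with the partition, but the rotational component does not.
RPM = 7200
FULL_STROKE_SEEK_MS = 8.9                    # assumed average seek, whole disk
rotational_ms = 0.5 * 60_000 / RPM           # avg wait = half a revolution = 4.17 ms

for fraction in (1.0, 0.5, 0.2, 0.1):
    seek_ms = FULL_STROKE_SEEK_MS * fraction  # linear shrinkage: an upper bound
    total = seek_ms + rotational_ms
    print(f"using {fraction:4.0%} of the disk: ~{total:4.1f} ms access "
          f"({seek_ms:.1f} seek + {rotational_ms:.2f} rotation)")
```

Even with a tiny 10% partition, the ~4.17 ms of rotational latency is still there, which is why the gains level off.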
 
If I have my 320GB drive separated into 60GB on C: and the rest in a second partition, is that what you mean? Or do I have to leave the rest of it blank?

Would I see more performance from a 750GB short stroked down to 75GB, or from a VelociRaptor drive?
 
You can have a second partition, yes, but to benefit from short stroking, you shouldn't use both the first and second partitions at the same time.

For example, BAD situation:
C (60GB) -> Windows
D (the rest) -> main storage disk

This will not help if you're using both C and D at the same time; the head would have to seek between them and you lose any benefit. But:

C (60GB) -> Windows
D (the rest) -> long-term backups

This would be good. Normally D is not used, except to store some backups, so it's like 'cold storage'. C is the only active partition here. This benefits you because the hard drive only has to work at the beginning of the disk (the first 60GB), where everything is pretty close together.
 
So, it's okay to have data on the second partition as long as I don't access it frequently? Should I move my most frequently accessed data to my OS/Programs partition?

I've seen a lot of benchmarks about short stroking, but none of them seem representative of what I'd actually have or be doing. Would it make sense to get the 750GB disk and use 75GB for OS/Programs and use the 320GB for data? Would that give me fast load times and such?
 
The key is to keep the fastest stuff as close to the outside as possible. If you don't want to waste any of the space, that might mean, depending on your preferences:

C:\OS
D:\Swap file and temp files (FAT32, as NTFS protections and overhead aren't needed)
E:\Active Games
F:\Programs
G:\Data (and inactive games)
H:\Backups

On XP, for example, that might be C=16 GB / D=8 GB / E=64 GB / F=64 GB / G=512 GB or so / H=what's left

Games can be swapped back and forth between E and G.
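Here's a quick sketch of where those partitions would land on, say, a 750 GB drive, and roughly how fast each region is. Both the disk size and the linear speed falloff (100% at the rim down to 50% at the hub) are assumptions for illustration:

```python
# Maps the XP example layout above onto a 750 GB disk and estimates the
# relative transfer rate at each partition's midpoint (assumed linear model).
DISK_GB = 750
layout = [("C: OS", 16), ("D: swap/temp", 8), ("E: active games", 64),
          ("F: programs", 64), ("G: data", 512)]
layout.append(("H: backups", DISK_GB - sum(size for _, size in layout)))

start = 0
for name, size in layout:
    end = start + size
    midpoint = (start + end) / 2
    speed = 1.0 - 0.5 * (midpoint / DISK_GB)  # assumed outer-to-inner falloff
    print(f"{name:<16} {start:>4}-{end:<4} GB   ~{speed:4.0%} of outer-track speed")
    start = end
```

The point of the ordering is visible in the output: everything you touch constantly sits in the fastest region, and only backups live in the slow inner zone.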

 
Do you have to make 6 different partitions? Won't the OS automatically go to the first spot and everything else follow in the order that you install it? So, unless I start installing programs after I upload my data, there shouldn't be a problem.
 
Oh please no, multiple partitions force the head to move over longer distances because the data is forced apart. Besides, 6 partitions is going to be a mess; there's never enough space everywhere, so soon you'll be breaking your own rules and putting data on other partitions. I find this a very inflexible setup.

Defragmenting apps can place the most-used data at the beginning of the disk. Using many partitions only complicates that.
 
Is there a program that can benchmark the speeds of individual partitions on the same disk? I've heard that there is one, but I can't find it. That way I could see the effect short stroking would have before I try it.
 
Well, most benchmarks I've seen are for RAID, Raptors, servers, etc. Is there any way to tell what kind of performance I'd get from a 750GB with a single 75GB partition? I'm just trying to weigh my options between a Samsung 750GB short stroked, an 80GB VelociRaptor, or two 500GB drives in RAID 0. Performance is the important thing.
 
It depends on what kind of performance you want. If you're talking about boot and program load times, you'll be looking for good random I/O performance. In that case, the Velociraptor will be your best bet.

If you're looking at high transfer rates (useful for bulk copying or editing large files like video or RAW pictures from a digital camera), then RAID 0 will probably be best.

Whether you use one partition or six (which is way too much effort for too little gain, IMHO), don't expect that much difference from short stroking a drive.
 
You can make as many as is practical for the way you work. Yes, the OS will automatically go where you said... so as long as you never let Windows Update do its thing, your logic works... but when a 1.1 MB Windows file has to be replaced with a 1.2 MB file a year from now, where does the extra 0.1 MB go? It goes at the end of your drive, after everything else.
 
I have used it on over 120 builds in the last 17 years, most of them high-end CAD boxes where time is money in the real sense of the word. Inflexible? Partition sizes can be changed anytime you want (see programs like Partition Magic, Acronis, etc.). Haven't broken the rule on a single box yet.

And heads don't move over longer distances... it's single partitions that cause the most head movement, most of it from a fragmented page file that has pieces all over the disk. Let's examine what happens in the course of part of one's day.

-OS boots... all files are in the first 5% of the disk and always will be. No chance of Windows Update placing all those new files at the 867 GB mark of your 1 TB drive.
-The swap file also remains in the first 5% of the disk, where performance is and always will be in the 100+ MB/sec category.
-Just installed the new version of Photoshop... mine's in the first 20% of the HD and always will be. Yours is, well, all over the place... the old version was in the early part of the drive, but after you removed and reinstalled it, half is now at the beginning and half at the end, so your heads are moving like a tennis ball at Wimbledon just trying to open the program.
-I open that 10 MB file of Jessica Biel. It's at the end of the hard drive, so it takes longer than it would at the front. But as I work on the file retouching it for the next hour, my heads will never revisit that slow end of the disk. As I manipulate it and it uses swap and temp files, mine are on the fastest 5% of the disk, whereas yours are all over the place... not only dealing with the slow parts of the disk but also with the extra head movement. Not only that, having the swap and temp files on a small FAT32 partition eliminates the NTFS overhead... NTFS file protections are invaluable, but not for swap and temp files.
-Smaller, sequentially read MFTs lead to better overall performance. What can the OS search faster: 1 TB or 100 MB?
 
What sub mesa said is absolutely correct. It's got absolutely nothing to do with your experience. It is simply a fact that with an HDD (SSDs are slightly different) you do not separate the swap file, temp files, and program installations onto other partitions; keeping the stroking distance minimal reduces latency and thereby increases overall throughput and IOPS.

You can, however, divide the infrequently accessed storage portion of the same HDD into whatever partitions you like, with the benefit of mainly easier defragmentation, as you said.
 
Lots of good observations: my 2 cents ...

There are 2 electro-mechanical factors which cause a rotating HDD to slow down measurably whenever the read/write armature must travel far from its resting position outside the outermost track:

(1) The sheer distance of the "stroke" directly affects the amount of clock time it takes for the armature to accelerate from its current position and then decelerate into its new position: just view the manufacturer's specs comparing "track-to-track" and "outer-to-inner" (full-stroke) seek times.

(2) Also, because the linear recording density is kept at or near the same value across the platter, the amount of data per track decreases from outer tracks to inner tracks. At a constant rotational speed, the transfer rate therefore falls in direct proportion to track circumference = PI x diameter (PI = ~3.14159), and inner tracks are necessarily shorter in circumference.

This drop in transfer rate can be as much as 50%, depending on overall platter geometry and the number of tracks per platter.

By comparison, platters with smaller diameters, like those used in 10,000 and 15,000 rpm HDDs, don't show as much of a decrease in raw speed from outer to inner tracks, simply because there is less of a difference between the outer and inner circumferences.
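A quick worked example of factor (2), using assumed usable platter radii rather than any particular drive's specs:

```python
# At constant rpm and roughly constant linear bit density, transfer rate
# scales with track circumference = pi x diameter, so the inner/outer speed
# ratio reduces to the ratio of the radii. Radii below are assumptions.
import math

def rate_ratio(inner_r, outer_r):
    return (2 * math.pi * inner_r) / (2 * math.pi * outer_r)  # = inner_r / outer_r

# 3.5" platter, assumed usable radii in inches:
print(f"3.5-inch: inner track at ~{rate_ratio(0.75, 1.75):.0%} of outer speed")
# Smaller platter, as in 10k/15k rpm drives (assumed radii):
print(f"2.5-inch: inner track at ~{rate_ratio(0.75, 1.25):.0%} of outer speed")
```

With these assumed radii, the 3.5" platter's innermost track runs at roughly 43% of the outer speed (the "as much as 50%" drop above), while the smaller platter only falls to about 60%.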


Both of these factors therefore recommend that the most recently used files be stored on the outermost tracks whenever possible, and the least recently used files on the innermost tracks.

For pure performance reasons, the first partition on a RAID 0 array should be sized to something like 10-20% of overall capacity. Thus, on a RAID 0 array with 2 HDDs, a 50GB partition actually occupies a 25GB slice of each physical HDD, resulting in short strokes on each drive. Similarly, on a RAID 0 array with 4 HDDs, a 40GB partition occupies a 10GB slice of each physical HDD, resulting in even shorter strokes on each drive.
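The arithmetic behind that, as a small sketch (the 750 GB per-drive size is an assumed example):

```python
# A partition striped evenly across N drives occupies size/N of each one,
# so the per-drive stroke shrinks with the number of drives in the array.
def per_drive_slice(partition_gb, n_drives, drive_gb=750):
    slice_gb = partition_gb / n_drives
    return slice_gb, slice_gb / drive_gb

for partition_gb, n_drives in ((50, 2), (40, 4)):
    slice_gb, frac = per_drive_slice(partition_gb, n_drives)
    print(f"{partition_gb} GB partition over {n_drives} drives -> "
          f"{slice_gb:.0f} GB per drive ({frac:.1%} of each)")
```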

Coupled with a large cache on each physical HDD, those caches are effectively multiplied by the number of HDDs in each RAID array, e.g.:
2 x HDDs @ 16MB cache = ~32MB of total cache;
2 x HDDs @ 32MB cache = ~64MB of total cache;
4 x HDDs @ 16MB cache = ~64MB of total cache; etc.


MRFS
 
Seriously, you gave us a good read. 😀

Aside from these issues: if there's a difference between option A and option B, how large does the performance difference have to be for us to care? If you're making your own life difficult by trying to separate data because you heard it's faster, but it only adds marginally to performance, I'd rather just have the convenience without the small performance gain.

I'd say you need at least a 20-50% performance increase before it becomes humanly noticeable. I guess my point is that people should balance their "micro-managing" of performance against convenience. If an optimization costs you a lot of convenience and doesn't yield any significant performance benefit, I'd generally opt against it.

So if people are desperately trying to use 6 partitions, one for each purpose, thinking it's faster: even if it were, the gain probably wouldn't be enough to warrant investing so much time and trouble in an inflexible setup.
 
The number of partitions is solely a personal choice. I just used 6 to make a case at the far end of the spectrum. Each user should pick what works best for them. Only unfamiliarity makes it inflexible; it takes about two minutes to change partition sizes. But by far the main reason people partition is organization AND convenience. A single giant space does nothing for organization and maintenance.

As an analogy, I might ask: would it be more convenient to store all your clothes in one big drawer? Or will you find things faster if you organize them? Here are some other convenience issues to consider.

Do you want to schedule a 2 TB partition to defrag during your lunch break? Or do you have a better chance of getting it done in 500 GB chunks?

Backup is probably the biggest organizational factor. Is there any reason to back up your programs daily? It seems like a lot of network traffic and/or tape wear and tear. So isn't it a better choice to back up the data partition daily, and the programs partition, say, monthly or after a new install/update? If you make image backups, these get made by partition.

Got your OS fudged? What takes longer and is more effort: reformatting your entire drive and reinstalling everything? With a small OS partition you can use your imaging program to restore C and, more than likely, have no problem with anything on the other partitions... and even if that isn't so, restoring from your backups in Windows is a lot easier than boot-disk-based solutions.

Dual booting is far more useful when you can isolate that 2nd OS on its own partition. Have a family PC? It's nice to have your OS, programs, and/or data on a partition the kiddies have no access to. Have a small office network and wanna share files? Do you pick a series of files and folders on each box, or just give access to different classes of users by partition? On a NAS this is normally done by allowing access to users by "volumes".

Yes, the concept does take some initial thought and planning, and you are bound to make some wrong assumptions when doing it for the first time. But making partitions is no more complicated than buying those three-ring-punched sheets with the colored tabs for your school notebooks that kept you from mixing up your social studies notes w/ your math homework.


 
One question that was never quite answered: How much of the hard drive needs to be short stroked to get the advantage? If I have an 80GB hard drive, would short stroking it to about 35GB help? Does it make a difference if the rest of the drive is blank anyway? Also, do I leave the rest unpartitioned or in a different partition?
 
The smaller the partition, the bigger the performance advantage. But again, don't expect miracles - the biggest part of access time is rotational latency and short stroking doesn't alter that at all.

If you put additional partitions onto the same drive, your performance will be poorer the more you access files in those partitions. In fact, forcing files into different partitions can result in worse performance if they get accessed a lot.

If you use only one partition, or if you use multiple partitions that don't fill up the entire drive, it's easiest to just leave the rest of the drive unpartitioned.
 
Remember that ever since Windows 98, the built-in defragmenter keeps together the files used at the same time, and XP improves on that. So the effect of forcing such files onto one single volume isn't that radical.

Yes, if accesses were spread randomly within the whole volume, a small volume would improve access time an awful lot. You can check this, for instance, with WinBench99:
http://www.benchmarkhq.ru/english.html?/be_common.html
or with IOMeter (defragment after creating the test file!):
http://sourceforge.net/projects/iometer
This advantage improves until the rotational latency (4.17ms at 7200rpm) dominates - but NCQ reduces this latency, provided your programs can take advantage of NCQ, possibly through XP's prefetcher.
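For a rough do-it-yourself check along the same lines, here's a sketch that times sequential reads at a few offsets of a raw disk (Linux-only, needs root; the device path and sample size are assumptions, so adjust them to your system; it only reads, never writes):

```python
# Times a 64 MiB sequential read at several positions across a raw disk to
# visualise the outer-to-inner transfer-rate falloff.
import os, time

DEV = "/dev/sda"                 # assumed device path; change for your drive
CHUNK = 64 * 1024 * 1024         # sample 64 MiB at each position

fd = os.open(DEV, os.O_RDONLY)
disk_size = os.lseek(fd, 0, os.SEEK_END)

for fraction in (0.0, 0.25, 0.5, 0.75, 0.95):
    offset = int(disk_size * fraction) & ~4095          # keep the offset aligned
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)  # drop cached pages first
    os.lseek(fd, offset, os.SEEK_SET)
    t0, remaining = time.time(), CHUNK
    while remaining > 0:
        buf = os.read(fd, min(remaining, 1 << 20))
        if not buf:                                     # hit end of device
            break
        remaining -= len(buf)
    rate = (CHUNK - remaining) / (time.time() - t0) / 1e6
    print(f"{fraction:>4.0%} into the disk: {rate:6.1f} MB/s")

os.close(fd)
```

On a healthy 7200rpm drive you should see the MB/s figure fall steadily from the 0% sample to the 95% one, which is exactly the region a short-stroked partition avoids.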
 
The MFT (Master File Table) is located in the middle of the file system, so even in a fully defragmented file system the head will have to move further in a large partition than in a small one, at least some of the time.

But I do agree that the real performance differences are probably smaller than a lot of people are imagining...
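On the head-travel point, a tiny simulation: for accesses spread uniformly over a span S, the average distance between consecutive positions works out to S/3, so travel scales directly with partition size (positions in GB stand in for track positions, a simplification):

```python
# Estimates average head travel for uniformly random accesses within a span.
import random

def avg_travel(span_gb, samples=100_000):
    positions = [random.uniform(0, span_gb) for _ in range(samples)]
    gaps = (abs(b - a) for a, b in zip(positions, positions[1:]))
    return sum(gaps) / (samples - 1)

print(f"750 GB span: ~{avg_travel(750):.0f} GB average travel")  # ~250
print(f" 75 GB span: ~{avg_travel(75):.0f} GB average travel")   # ~25
```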
 
What defragmenter? A 3rd-party solution generally has an effect, as it's scheduled to run regularly, but manual defragmenting just doesn't get done that often.

I know what the marketing speak says, but this "together" thing doesn't really happen to any great extent. It might "try" to do that, but generally only succeeds when space is available next to a related file. If you want to test this, uninstall several large programs or games from your system (one at a time) that have been sitting there for a long time, and analyze the drive afterwards. You will see empty spaces all over the place, indicating that these files were not next to each other.
 
Avoid third-party defragmenters, as most are bad: they only keep single files in one fragment, but don't group files that are required at the same time, as Windows does.

Putting help files in one location and executables in another would be perfectly acceptable, for instance. I've even read that XP's defragmenter may in fact cut single files into pieces and interleave the pieces of several files, if those fragments are requested in an order that fits better. No idea if this is true.

-----

MFT at the centre of a volume: yes. XP improves on it a bit by placing it at 1/3 of the volume, nearer to the files when the volume isn't full, and by filling a volume beginning near the MFT instead of at the outer tracks. Well done.

The distance to the MFT isn't so tragic either, because the whole MFT soon sits in RAM. Avoiding unnecessary writes to the MFT, by setting NtfsDisableLastAccessUpdate, should improve things - I still haven't tried it. It should be even more useful on flash disks and CF cards, since write accesses are so slow there.

-----

An interesting effect of small specialized volumes is that Windows puts its mess in its own volume while the applications volume stays neatly ordered for a long time, as little happens there. And then, defragging 5GB for the Windows volume is quickly done.