bmxjumperc

Distinguished
Jul 9, 2009
104
0
18,690
What is everyone's opinion on upgrading my hard drive, given the new SSDs now in play and the fact that Windows 7 is optimized for SSDs? Should I just go inexpensive and get one or a few VelociRaptors, or is it a really good idea to wait and save for an SSD?
 

505090

Distinguished
Sep 22, 2008
1,575
0
19,860
IMO,
SSDs aren't worth the money yet. A VelociRaptor for the OS and programs, paired with a Caviar Black for storage, is the best performance without wasting money.
 

klezmer41

Distinguished
Aug 3, 2006
49
0
18,530
What's the heat/loudness and lifetime of the Velociraptor? Would I be sacrificing in those areas for a bit of extra speed?
 


I would still say go for two Western Digital Black 640GB drives; that would be plenty fast.
 

sub mesa

Distinguished
Well, I would say heat is an issue. That's why Western Digital doesn't want to sell these VelociRaptors to retail channels without the 3.5" heatsink caddy. Only server markets like the blade market get bare disks, since those enclosures are cooled properly and the size of the heatsink is an issue there.

Its power usage isn't that high, though, only about 4W at idle, but 4W concentrated in a tiny spot without cooling adds up and could overheat the drive. That's why they come with the heatsink. Regular 2.5" disks use only about 0.7W at idle. They are quite silent, though.
 

505090

Distinguished
Sep 22, 2008
1,575
0
19,860
Actually, they come with a "heatsink" because they are 2.5" drives, desktops typically use 3.5" bays, and a mounting bracket was required. If heat were an issue, they wouldn't be used in server environments. Further, the VelociRaptor uses less power and generates less heat than a Caviar Black, which is one of the two most recommended drives, the other being the Seagate 7200.12.
 

klezmer41

Distinguished
Aug 3, 2006
49
0
18,530
I just bought new parts for a new computer, and I have a 1TB Caviar Black. But I was planning on using that for storage and putting the OS on another 250GB drive that I already have. Will I see much better performance if I get a VelociRaptor or Caviar Black? Or even... an SSD???
 

brendano257

Distinguished
Apr 18, 2008
899
0
18,990
Think of it this way:

In graphics, you can spend very little and get the job done. You can spend a "midrange" price and get good performance for the money, OR you can spend an outrageous amount of money for that little bit of extra performance over the midrange (in the grand scheme, and slightly exaggerated, but you get the point).

So, you can get a 7200 RPM Caviar Black and it will get the job done just fine, you can opt for the VelociRaptor and get better performance for more money, or you can go all out and get an SSD; it's your choice. Always think about what you need more, speed or capacity, because you can't really have both. The VelociRaptor is the balance, the SSD is high performance at low capacity, and the 640GB Caviar Black is high capacity with "low" performance, although you won't see a huge difference between a Caviar Black and a VelociRaptor. Honestly, either a single 640GB Caviar Black, or two of them, is your best price/performance option.
 
G

Guest

Guest
Just remember, all the reviews show the V-Raptor is just a few ms (milliseconds) faster than a 640 Black or 640 AAKS.

A ms is a blink of an eye, so blink your eyes 8-10 times; that's how much quicker a V-Raptor is than a Black
or AAKS.

1 second = 1000 ms
 

First, an eyeblink is a lot slower than a millisecond (from what I can find, a typical blink is 300ms or so). Second, the VelociRaptors have roughly a 7ms seek time compared to the Caviar Black's 12ms. That is only 5ms, but keep in mind that your hard drive is doing thousands of operations for even a relatively simple task. So, the better way to look at it is that the VelociRaptor will do more than 1.5 times as many operations per second if they are of a somewhat random nature (such as loading an OS). It is definitely a noticeable difference. On the other hand, large sequential transfers (such as moving a single large file) will not be noticeably faster. This is also why SSDs can feel so fast and responsive - they do not have significantly higher transfer rates than a typical high-performance hard drive setup, but they have access times of under 1ms compared to a VelociRaptor's 7ms.
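To put rough numbers on that reasoning, here is a back-of-the-envelope sketch in Python (the 7ms and 12ms figures are the seek times quoted above; rotational latency and transfer time are ignored, so real-world numbers would be lower for both drives):

# Rough random ops/second from average seek time alone (illustrative only).
drives = {
    "VelociRaptor (~7 ms seek)": 0.007,
    "Caviar Black (~12 ms seek)": 0.012,
    "Typical SSD (<1 ms access)": 0.0001,
}
for name, seconds_per_op in drives.items():
    print(name, "-> roughly", round(1 / seconds_per_op), "random ops/second")
# ~143 vs ~83 ops/second is a bit over 1.7x, which is why random-heavy work
# like booting an OS feels noticeably faster on the faster-seeking drive.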
 

One thing I'd be very interested to see is a performance comparison of a pair of 640GB Blacks in RAID-1 vs. a single VelociRaptor. It seems to me you might be able to get similar or possibly even better performance at a lower price by taking advantage of motherboard RAID.
 

apollyon0810

Distinguished
Jul 11, 2009
13
0
18,510
It all comes down to how much money you want to spend. You should get four SSDs and put them in RAID 0. 128GB SSDs are pretty reasonably priced nowadays. The Intel drives are still crazy expensive for the little you're getting over other makes. With four drives, capacity shouldn't be an issue. You should still keep a couple of internal or external 1TB drives for storage.
 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
To save some money for now, buy the Caviar Blacks or RE3 series and do some cost-effective optimizations:

Even with lots of RAM, Windows XP will still swap out a program, e.g. when the minimize button is clicked on an open window.

To shorten a HDD's armature strokes as much as possible, XP's pagefile.sys should be assigned to the lowest-numbered sectors on its own hard drive, NOT on the C: system partition.

This can be accomplished by running the Contig program on a newly formatted primary partition: the first command below creates a fully contiguous ~2GB pagefile.sys on D:, and the second marks it as a system/hidden file. Remember also to switch OFF XP's Indexing Service on that dedicated partition to keep the swap file completely contiguous:

contig -v -n D:\pagefile.sys 2048000000

attrib D:\pagefile.sys +A +S +H

Then move this swap file off C: using the proper sequence in "My Computer".

The Contig software is freeware from Sysinternals, available on the Internet.


A related freeware program is PageDefrag, but it does not always defrag pagefile.sys (for reasons explained in the online documentation); hence the need to resort to Contig instead of, or in addition to, PageDefrag.


Another very cost-effective optimization is to download and install RamDisk Plus 9.0.4.0 from www.superspeed.com.

This version allows ramdisks to be created in unmanaged Windows memory with the 32-bit version of XP/Pro.

After creating one or more ramdisks, move your browser cache(s) to that memory-resident partition and enjoy the incredible speed-up that results.

Off-loading your spinning platters will also help them last longer, particularly laptop HDDs spinning at 5,400 rpm.


In the long run, it makes more sense to conserve money now, because SATA/6G is the newest standard, and flash SSDs are the only file-system devices which come even close to saturating a SATA/3G interface (with the exception of SDRAM-based storage).

That's because THE FASTEST SPINNING PLATTER is still moving data past the read/write heads at no more than about 150MB/second, e.g. SAS HDDs spinning at 15,000 rpm with perpendicular magnetic recording ("PMR").

Look for SSD storage that will exceed the older SATA/3G bandwidth -- probably in 3-6 months, when SATA/6G motherboards and RAID controllers become more widely available.

The SSD manufacturers are in a heated race for market supremacy right now, and the next leaders will be those who offer real throughput exceeding 300MB/second over SATA/6G channels.

Similarly, the USB 3.0 standard is 5Gbps, which is right behind SATA/6G in raw speed.

To catch a glimpse of this future:

Google i-RAM +4X +"RAID 0"


MRFS
 

08nwsula

Distinguished
Mar 23, 2007
218
0
18,680


Do you mean RAID 0?
 
How much space do you need, and for what type of data?

For the OS and program drive, you need 32GB plus room for programs: 64GB-80GB as a starter, and up to 160GB.
For this, you want primarily fast random reads and writes of small blocks, since that is much of what the OS does.

The SSD is matchless in the read department, but most SSDs struggle when overloaded with random writes. Among the MLC (much cheaper than SLC) drives, the Intel X25-M is the only one that seems to be issue-free. That is changing as cache is added to buffer writes and SSDs become cheaper. We might see major advances by the end of the year, so waiting on an SSD would be good for the value-conscious. As an early adopter, I have two X25-Ms in RAID-0, and they are performing well. I initially got one, but it was not big enough. I tried to sell it on eBay and go back to my VelociRaptor, but could not get my price. Then I came across a second one at a good price, so I got it, primarily so I could get a single 160GB image for my OS, applications, and data.
At $300 each, they are pricey.

The VelociRaptor is a very good drive. From experience, it is quiet, cool, and fast. Here is a link to some benchmarks comparing it to other drives, including 15K server drives; it is second only to a fast SSD:
http://www.storagereview.com/php/benchmark/bench_sort.php
It is priced at about $200.

The WD Caviar Black 1TB drive is also a very good solution. The fastest 10% of the drive is close to the VelociRaptor in performance.
At $100 it is a bargain performance drive, particularly if you only use the fastest 10% for high-performance needs.

What to get?
If your OS and all your live data will fit on a single fast drive, go that way if you can. I find managing multiple drives to be a pain, except for backup.
If you will have large amounts of data, like video files, then get a fast OS drive and one or more 1TB drives for storage.

There is generally no real-world (as opposed to synthetic transfer-rate benchmark) performance advantage to RAID of any kind.
Go to www.storagereview.com at this link: http://faq.storagereview.com/tiki-index.php?page=SingleDriveVsRaid0
There are some specific applications that will benefit, but gaming is not one of them. Even if you have an application which reads one input file sequentially and writes it out, you will do about as well by putting the input on one drive and the output on the other.

If you have the funds, remember a saying:
"the bitterness of the product is remembered long after the sweetness of the price is forgotten"

Hope this helps.
 



No, I mean RAID-1. I wouldn't be interested in the reliability tradeoff you have with RAID 0.

Both RAID-0 and RAID-1 benefit from being able to handle nearly twice as many read I/O requests per second, which is the most important thing for shortening load times.
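As a rough illustration of that claim, here is a toy model in Python (not a benchmark: it assumes a fixed service time per random read and an ideal controller that load-balances a deep queue across the two mirrors, which real mobo RAID may or may not do):

# Toy model: random reads served by one disk vs. two mirrored disks.
SERVICE_TIME_MS = 12.0   # assumed random access time of a single 640GB Black
NUM_REQUESTS = 10_000    # a deep queue of independent random reads

def reads_per_second(num_disks):
    # Each disk works in parallel on its share of the queued requests.
    total_ms = (NUM_REQUESTS / num_disks) * SERVICE_TIME_MS
    return NUM_REQUESTS / (total_ms / 1000.0)

for disks in (1, 2):
    print(disks, "disk(s):", round(reads_per_second(disks)), "reads/second")
# Two mirrors roughly double random read throughput, but only when there are
# many outstanding reads; a single dependent read sees no benefit.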
 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
Even though this factor may sound overly simplistic, we also pay attention to the ratio of cost to warranty length (dollars per year of warranty).

In most instances we have examined, that ratio is LOWER for HDDs with 5-year warranties.

If a HDD with a 3-year warranty fails in year 4, that is a very different problem from a failed HDD that enjoys a 5-year warranty.


MRFS
 

To get some of the benefit of multiple concurrent reads with RAID, you will need a separate hardware-based RAID controller, not the mobo type.

Fortunately, hard drives do not fail often; mean time to failure is claimed to be on the order of 1,000,000 hours (over 100 years).
RAID-1 does not protect you from other types of losses such as viruses,
software errors, RAID controller failure, operator error, fire, etc.
For that, you need EXTERNAL backup.
If you have external backup and can afford some recovery time, then you don't need RAID-1 and you don't have to worry about the increased risk of RAID-0.
 
I keep seeing references to motherboard RAID not being "hardware". This is something I've been trying to research and understand, but I haven't found what I consider to be a definitive answer yet.

I did find this page which suggests the ICH10R chipset itself doesn't perform the RAID functions but merely acts as a repository for configuration information which is then used by the ICH10R OS drivers to do the actual work. The fact that motherboard RAID is not widely available in most Unix flavours supports this hypothesis. But it would be really nice to see a good, technical description of the ICH10R RAID implementation...

But that having been said, even a fully OS-level implementation of RAID is certainly capable of issuing simultaneous I/O requests to multiple drives. There's nothing in "software RAID" to preclude this. Indeed, Windows can issue multiple simultaneous I/O requests not only to multiple drives, but to each individual drive. This is why the "queue length" of a drive climbs above 1 on busy disks.
 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
RAID parity computations are performed more efficiently on dedicated hardware RAID controllers, e.g. Areca cards or Intel's IOP348.

With the dawn of multi-core CPUs, however, the idle cores end up doing the same work at almost identical speed, and without frequent scheduling interrupts.

Some of the newest RAID controllers have dual cores too:

http://www.intel.com/design/iio/iop348.htm

Two Intel XScale® processors for optimized performance

* High-performance RAID system-on-a-chip with an integrated 3 Gb/s SAS/SATA II controller
* Fourth generation Intel XScale® processor with core speeds up to 1200 MHz and 512 KB L2 cache
* 8 port, 3 Gb/s SAS/SATA engine supporting industry standard SSP, STP, SMP, and direct attached SATA
* Hardware RAID 6 acceleration with near RAID 5 performance
* Pin compatibility with Intel® IOP341 I/O processor, Intel® IOP342 I/O processor, Intel® IOC340 I/O Controller, Emulex IOP 504 I/O processor*, Emulex IOP 502M I/O processor*, and Emulex IOC 504 I/O Controller*
* Emulex's Service Level Interface (SLI*) technology providing a driver compatible API
* Multi-ported 400/533 MHz DDR2 memory controller supporting up to 2 GB of 64-bit ECC protected memory
* Three application DMA units with XOR, RAID 6 P+Q, CRC32C
* Dual- or single-interface PCI-X* and PCI-Express* host bus interface options
* Dual 128-bit/400 MHz internal buses, providing over 12 GB/s internal bandwidth



MRFS
 

sub mesa

Distinguished
Yes, but these IOP processors aren't really comparable to general-purpose CPUs like the usual AMD/Intel chips.

Everybody thinks calculating parity is slow, but even a 25-dollar single-core CPU can do about 3-5GB/s of parity calculations, so that argument doesn't hold much weight.
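For reference, RAID-5-style parity is just a byte-wise XOR across the data blocks in a stripe. A minimal sketch of timing that on the host CPU (Python with numpy purely for convenience; the block and stripe sizes are arbitrary) might look like this:

# Minimal sketch: XOR parity over one stripe, timed on the host CPU.
import time
import numpy as np

STRIPE_BLOCKS = 4                # number of data disks in the stripe
BLOCK_SIZE = 64 * 1024 * 1024    # 64 MiB per data block

blocks = [np.random.randint(0, 256, BLOCK_SIZE, dtype=np.uint8)
          for _ in range(STRIPE_BLOCKS)]

start = time.perf_counter()
parity = blocks[0].copy()
for b in blocks[1:]:
    np.bitwise_xor(parity, b, out=parity)   # parity ^= b, in place
elapsed = time.perf_counter() - start

gb = STRIPE_BLOCKS * BLOCK_SIZE / 1e9
print("XORed", round(gb, 2), "GB in", round(elapsed * 1000, 1), "ms,",
      "about", round(gb / elapsed, 1), "GB/s")
# Even a modest modern core sustains multiple GB/s here, which is the point
# about parity computation not being the bottleneck.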

1) The CPU is much more powerful than the IOP in an Areca hardware controller. If you have good software, there is no way for the Areca to beat the host system, because the host system is a lot faster.

2) Although the IOP isn't as fast, it is dedicated to its task and has smart firmware tuned for performance.

3) Since no advanced RAID functionality exists for Windows, you are basically stuck with hardware RAID there.

4) There is nothing hardware RAID can do that software can't, performance-wise.

In theory, software RAID is even superior, because it's more flexible and 100% hardware-independent.
 

sub mesa

Distinguished