SCSI vs SATA High-Perf

Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

Hello all,

Which of the two following architectures would you choose for a
high-perf NFS server in a cluster env? Most of our data ( 80% ) is
small ( < 64 KB ) files. Reads and writes are similar and mostly random
in nature:

Architecture 1:
Tyan 2882
2xOpteron 246
4 GB RAM
2x80GB SATA ( System )
2x12-Way 3Ware Cards
24x73GB 10k rpm Western Digital Raptors
Software RAID 10 on Linux 2.6.x
XFS

Architecture 2:
Tyan 2881 with Dual U-320 SCSI
2xOpteron 246
4 GB RAM
2x80GB SATA ( System )
12x146GB Fujitsu 10k SCSI
Software RAID 10 on Linux
XFS

The price for both systems is almost the same. Considerations:

- Number of Spindles: Solution 1 looks like it might have an edge here
for small sequential reads and writes since there are just twice as
many spindles.

- PCI Bus Saturation: Solution 1 also appears to have an edge in case
we use large sequential reads. Solution 2 would be limited by the dual
SCSI bus bandwidth of 640MB/s. I doubt we would ever reach that level of
bandwidth in any random-read or random-write situation, and in our small
random file scenario I think both systems would perform equally. Any
comments ?

- MTBF: Solution 2 has a definite edge. Some numbers:

MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours

Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours

MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours
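
For reference, the arithmetic above assumes a series model in which any
single failure takes the whole array down, i.e. MTBF_sys = 1 / sum(1/MTBF_i).
A quick Python sketch of the same numbers:

    drives = 24 / 1.2e6   # 24 Raptors at 1.2 million hours MTBF each
    cards = 2 / 1.0e6     # two 3Ware cards at 1 million hours MTBF each
    print(1 / (drives + cards))   # ~45454.5 hours (Architecture 1)
    print(1 / (12 / 1.2e6))       # 100000.0 hours (Architecture 2)

(A later reply points out that this is really the RAID0 model, not RAID10.)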

Not surprisingly, Solution 2 is twice as reliable. This doesn't take
into account the novelty of the SATA Raptor drive and the proven track
record of the SCSI solution. In any case comments on this MTBF point
are welcomed.

- RAID Performance: I am not sure about this. In principle both
solutions should behave the same since we are using SW RAID, but I don't
know how the fact that SCSI is a shared bus with overhead would affect RAID
performance. What do you think ? Any ideas as to how to spread the
RAID 10 in a dual U-320 SCSI scenario ?
SATA being point-to-point appears to have an edge again, but your
thoughts are welcomed.

- Would I get a considerable edge if I used 15k SCSI Drives ? I am not
totally convinced that SATA is our best choice. Any help is greatly
appreciated.

Many thanks,

Parsifal
 

peter

Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

> Hello all,
>
> Which of the two following architectures would you choose for a
> high-perf NFS server in a cluster env? Most of our data ( 80% ) is
> small ( < 64 KB ) files. Reads and writes are similar and mostly random
> in nature:
>
> Architecture 1:
> Tyan 2882
> 2xOpteron 246
> 4 GB RAM
> 2x80GB SATA ( System )
> 2x12-Way 3Ware Cards
> 24x73GB 10k rpm Western Digital Raptors
> Software RAID 10 on Linux 2.6.x
> XFS
>
> Architecture 2:
> Tyan 2881 with Dual U-320 SCSI
> 2xOpteron 246
> 4 GB RAM
> 2x80GB SATA ( System )
> 12x146GB Fujitsu 10k SCSI
> Software RAID 10 on Linux
> XFS
>
> The price for both systems is almost the same. Considerations:
>
> - Number of Spindles: Solution 1 looks like it might have an edge here
> for small sequential reads and writes since there are just twice as
> many spindles.

Yes, but Raptors have 226 IO/s vs. Fujitsu 269 IO/s.

> - PCI Bus Saturation: Solution 1 also appears to have an edge in case
> we use large sequential reads. Solution 2 would be limited by the dual
> SCSI bus bandwidth of 640MB/s. I doubt we would ever reach that level of
> bandwidth in any random-read or random-write situation, and in our small
> random file scenario I think both systems would perform equally. Any
> comments ?

You are designing for NFS, right? Don't forget that network IO and
SCSI IO are on the same PCI-X 64bit 100MHz bus, so every byte served
crosses that bus twice. Therefore available
throughput will be 800MB/s * 0.5 = 400MB/s.

In random operations, if you get 200 IO/s from each SCSI disk,
you will have 12 disks * 200 IO/s * 64KB = 154MB/s.
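
A quick sanity check of both estimates in Python (the 0.5 bus-sharing
factor and the 200 IO/s per disk are the assumptions stated above):

    BUS_MB_S = 800                  # PCI-X 64bit 100MHz peak
    print(BUS_MB_S * 0.5)           # 400 MB/s once network IO shares the bus
    disks, iops, io_kb = 12, 200, 64
    print(disks * iops * io_kb / 1000.0)   # ~154 MB/s of random IO, well under 400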

> - MTBF: Solution 2 has a definite edge. Some numbers:
>
> MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours
>
> Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours
>
> MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours

How did you calculate your total MTBF???
Your calcs may be good for RAID0 but not for RAID10.

Assuming a 5-year period, for a 1,200,000 hour MTBF disk
reliability is about 0.964.

For RAID10 (stripe of mirrored drives) in 6x2 configuration
equivalent MTBF will be 5,680,000 hours

Assuming a 5-year period, for a 1,000,000 hour MTBF disk
reliability is about 0.957.

For RAID10 (stripe of mirrored drives) in 12x2 configuration
equivalent MTBF will be 2,000,000 hours

For a single RAID1 of the 1,000,000 hr MTBF drives
equivalent MTBF will be 23,800,000 hours
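
The model behind these figures appears to be a 5-year mission with
exponentially distributed failures and no repair; a sketch along those
lines reproduces them (the 5 x 8766 hour mission time is an assumption
that happens to match):

    import math

    def raid10_mtbf(disk_mtbf, pairs, mission=5 * 8766):
        r = math.exp(-mission / disk_mtbf)  # one disk survives the mission
        r_pair = 1 - (1 - r) ** 2           # a mirror survives if either disk does
        r_sys = r_pair ** pairs             # the stripe needs every mirror alive
        return mission / -math.log(r_sys)   # convert back to an equivalent MTBF

    print(raid10_mtbf(1.2e6, 6))    # ~5,680,000 hours
    print(raid10_mtbf(1.0e6, 12))   # ~2,000,000 hours
    print(raid10_mtbf(1.0e6, 1))    # ~23,800,000 hours (single RAID1)

Note this ignores rebuild windows and hot spares, which matter in practice.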

BTW, 3Ware controllers are PCI 2.2 64bit 66MHz.
I can't believe that their MTBF is so low (1,000,000 hr).
If you lose one, your RAID will probably go down too.

> Not surprisingly, Solution 2 is twice as reliable. This doesn't take
> into account the novelty of the SATA Raptor drive and the proven track
> record of the SCSI solution. In any case comments on this MTBF point
> are welcomed.
>
> - RAID Performance: I am not sure about this. In principle both
> solutions should behave the same since we are using SW RAID, but I don't
> know how the fact that SCSI is a shared bus with overhead would affect RAID
> performance. What do you think ? Any ideas as to how to spread the
> RAID 10 in a dual U-320 SCSI scenario ?
> SATA being point-to-point appears to have an edge again, but your
> thoughts are welcomed.
>
> - Would I get a considerable edge if I used 15k SCSI Drives ?

In theory up to 40%.
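
Rough justification, using assumed ballpark seek times rather than
vendor specs:

    def iops(rpm, seek_ms):
        half_rot_ms = 0.5 * 60000.0 / rpm   # average rotational latency
        return 1000.0 / (seek_ms + half_rot_ms)

    print(iops(10000, 4.9))   # ~127 IO/s for a 10k drive
    print(iops(15000, 3.8))   # ~172 IO/s for a 15k drive, ~36% more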

> I am not
> totally convinced that SATA is our best choice.

Agree.

> Any help is greatly
> appreciated.
>
> Many thanks,
>
> Parsifal
>
 
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

Arno Wagner wrote:
> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:

>
> One thing you can be relatively sure of is that the SCSI controller
> will work well with the mainboard. Also Linux has a long history of
> supporting SCSI, while SATA support is new and still being worked on.
>
> For your access scenario, SCSI will also be superior, since SCSI
> has supported command queuing for a long time.
>
> I also would not trust the Raptors as I would trust SCSI drives.
> The SCSI manufacturers know that SCSI customers expect high
> reliability, while the Raptor is more a poor man's race car.


My main concern is their novelty, rather than their performance. Call
it a hunch but it just doesn't feel right to risk it while there's a
proven solid SCSI solution for the same price.

>
> One more argument: You can put Config 2 on a 550W (redundant)
> PSU, while Config 1 will need something significantly larger,

Thanks for your comments. I forgot about the Power. Definitely worth
considering since we're getting 3 of these servers and UPS sizing
should also play in the cost equation.


> also because SATA does not support staggered start-up, while
> SCSI does. Is that already factored into the cost?

This I don't follow, what's staggered start-up ?

Parsifal



>
> Arno
 
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

Peter wrote:
[ Stuff Deleted ]
> > - Number of Spindles: Solution 1 looks like it might have an edge here
> > for small sequential reads and writes since there are just twice as
> > many spindles.
>
> Yes, but Raptors have 226 IO/s vs. Fujitsu 269 IO/s.

Yeap ! I like those Fujitsus and they are cheaper than the Cheetahs.

>
> > - PCI Bus Saturation: Solution 1 also appears to have an edge in case
> > we use large sequential reads. Solution 2 would be limited by the dual
> > SCSI bus bandwidth of 640MB/s. I doubt we would ever reach that level of
> > bandwidth in any random-read or random-write situation, and in our small
> > random file scenario I think both systems would perform equally. Any
> > comments ?
>
> You are designing for NFS, right? Don't forget that network IO and
> SCSI IO are on the same PCI-X 64bit 100MHz bus, so every byte served
> crosses that bus twice. Therefore available
> throughput will be 800MB/s * 0.5 = 400MB/s.

Uhmm .. you're right. I guess I'll place a dual e1000 on the other PCI-X
channel. See:

ftp://ftp.tyan.com/datasheets/d_s2881_100.pdf


>
> In random operations, if you get 200 IO/s from each SCSI disk,
> you will have 12 disks * 200 IO/s * 64KB = 154MB/s.
>
> > - MTBF: Solution 2 has a definite edge. Some numbers:
> >
> > MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours
> >
> > Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours
> >
> > MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours
>
> How did you calculate your total MTBF???
> Your calcs may be good for RAID0 but not for RAID10.

Thanks for the correction. You're right again.

>
> Assuming a 5-year period, for a 1,200,000 hour MTBF disk
> reliability is about 0.964.
>
> For RAID10 (stripe of mirrored drives) in 6x2 configuration
> equivalent MTBF will be 5,680,000 hours
>
> Assuming a 5-year period, for a 1,000,000 hour MTBF disk
> reliability is about 0.957.
>
> For RAID10 (stripe of mirrored drives) in 12x2 configuration
> equivalent MTBF will be 2,000,000 hours
>
> For a single RAID1 of the 1,000,000 hr MTBF drives
> equivalent MTBF will be 23,800,000 hours

Excuse my ignorance but how did you get these numbers ? In any case
your numbers show that the MTBF of solution 1 is about half that of
solution 2.

>
> BTW, 3Ware controllers are PCI 2.2 64bit 66MHz.
> I can't believe that their MTBF is so low (1,000,000 hr)
> If you lose one, your RAID will probably go down too.

I thought it was a bit too low too but there was no info on the 3ware
site.

>
> > Not surprisingly, Solution 2 is twice as reliable. This doesn't take
> > into account the novelty of the SATA Raptor drive and the proven track
> > record of the SCSI solution. In any case comments on this MTBF point
> > are welcomed.
> >
> > - RAID Performance: I am not sure about this. In principle both
> > solutions should behave the same since we are using SW RAID, but I don't
> > know how the fact that SCSI is a shared bus with overhead would affect RAID
> > performance. What do you think ? Any ideas as to how to spread the
> > RAID 10 in a dual U-320 SCSI scenario ?
> > SATA being point-to-point appears to have an edge again, but your
> > thoughts are welcomed.
> >
> > - Would I get a considerable edge if I used 15k SCSI Drives ?
>
> In theory up to 40%.

In reality, though, I would say 25-35%.

>
> > I am not
> > totally convinced that SATA is our best choice.
>
> Agree.

Thanks !

>
> > Any help is greatly
> > appreciated.
> >
> > Many thanks,
> >
> > Parsifal
> >
 
Archived from groups: comp.sys.ibm.pc.hardware.storage

J. Clarke wrote:
> Arno Wagner wrote:
>
> > In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
> >> Hello all,
> >
> >> Which of the two following architectures would you choose for a
> >> high-perf NFS server in a cluster env? Most of our data ( 80% ) is
> >> small ( < 64 KB ) files. Reads and writes are similar and mostly
> >> random in nature:
> >
> >> Architecture 1:
> >> Tyan 2882
> >> 2xOpteron 246
> >> 4 GB RAM
> >> 2x80GB SATA ( System )
> >> 2x12-Way 3Ware Cards
> >> 24x73GB 10k rpm Western Digital Raptors
> >> Software RAID 10 on Linux 2.6.x
> >> XFS
> >
> >> Architecture 2:
> >> Tyan 2881 with Dual U-320 SCSI
> >> 2xOpteron 246
> >> 4 GB RAM
> >> 2x80GB SATA ( System )
> >> 12x146GB Fujitsu 10k SCSI
> >> Software RAID 10 on Linux
> >> XFS
> >
> >> The price for both systems is almost the same. Considerations:
> >
> >> - Number of Spindles: Solution 1 looks like it might have an edge here
> >> for small sequential reads and writes since there are just twice as
> >> many spindles.
> >
> >> - PCI Bus Saturation: Solution 1 also appears to have an edge in case
> >> we use large sequential reads. Solution 2 would be limited by the dual
> >> SCSI bus bandwidth of 640MB/s. I doubt we would ever reach that level of
> >> bandwidth in any random-read or random-write situation, and in our small
> >> random file scenario I think both systems would perform equally. Any
> >> comments ?
> >
> >> - MTBF: Solution 2 has a definite edge. Some numbers:
> >
> >> MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours
> >
> >> Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours
> >
> >> MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours
> >
> >> Not surprisingly, Solution 2 is twice as reliable. This doesn't take
> >> into account the novelty of the SATA Raptor drive and the proven track
> >> record of the SCSI solution. In any case comments on this MTBF point
> >> are welcomed.
> >
> >> - RAID Performance: I am not sure about this. In principle both
> >> solutions should behave the same since we are using SW RAID, but I don't
> >> know how the fact that SCSI is a shared bus with overhead would affect RAID
> >> performance. What do you think ? Any ideas as to how to spread the
> >> RAID 10 in a dual U-320 SCSI scenario ?
> >> SATA being point-to-point appears to have an edge again, but your
> >> thoughts are welcomed.
> >
> >> - Would I get a considerable edge if I used 15k SCSI Drives ? I am not
> >> totally convinced that SATA is our best choice. Any help is greatly
> >> appreciated.
> >
> > One thing you can be relatively sure of is that the SCSI controller
> > will work well with the mainboard. Also Linux has a long history of
> > supporting SCSI, while SATA support is new and still being worked on.
>
> If he's using 3ware host adapters then "SATA support" is not an
> issue--that's handled by the processor on the host adapter and all that the
> Linux driver does is give commands to that processor.
>
> Do you have any evidence to present that suggests that 3ware RAID
> controllers have problems with any known mainboard?
>
> > For your access scenario, SCSI will also be superior, since SCSI
> > has supported command queuing for a long time.
>
> I'm sorry, but it doesn't follow that because SCSI has supported command
> queuing for a long time that the performance will be superior.
>
> > I also would not trust the Raptors as I would trust SCSI drives.
> > The SCSI manufacturers know that SCSI customers expect high
> > reliability, while the Raptor is more a poor man's race car.
>
> Actually a Raptor is an enterprise SCSI drive with an SATA chip on it
> instead of a SCSI chip on it. The Raptors aren't "poor man's" _anything_,
> they're Western Digital's enterprise drive. WD has chosen to take a risk
> and make their enterprise line with SATA instead of SCSI. Are you
> suggesting that WD is incapable of producing a reliable drive?
>
> If it was a Seagate Cheetah with an SATA chip would you say that it was
> going to be unreliable?
>
> > One more argument: You can put Config 2 on a 550W (redundant)
> > PSU, while Config 1 will need something significantly larger,
> > also because SATA does not support staggered start-up, while
> > SCSI does. Is that already factored into the cost?
>
> Uh, SATA requires one host interface for each drive. Whatever processor is
> controlling those host interfaces can most assuredly stagger the startup if
> that is an issue.
>
> Not saying that SCSI is not the superior solution but the reasons given seem
> to be ignoring the fact that a "smart" SATA RAID controller is being
> compared with a "dumb" SCSI setup.


Good point. Would the SCSI performance improve if I used a dual U-320
super duper SCSI RAID card ? Since the RAID was going to be in SW
anyway, I didn't see the reason for getting such a card. I had no other
choice with the SATA solution though.

Parsifal

>
> > Arno
>
> --
> --John
> to email, dial "usenet" and validate
> (was jclarke at eye bee em dot net)
 
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
> Hello all,

> Which of the two following architectures would you choose for a
> high-perf NFS server in a cluster env? Most of our data ( 80% ) is
> small ( < 64 KB ) files. Reads and writes are similar and mostly random
> in nature:

> Architecture 1:
> Tyan 2882
> 2xOpteron 246
> 4 GB RAM
> 2x80GB SATA ( System )
> 2x12-Way 3Ware Cards
> 24x73GB 10k rpm Western Digital Raptors
> Software RAID 10 on Linux 2.6.x
> XFS

> Architecture 2:
> Tyan 2881 with Dual U-320 SCSI
> 2xOpteron 246
> 4 GB RAM
> 2x80GB SATA ( System )
> 12x146GB Fujitsu 10k SCSI
> Software RAID 10 on Linux
> XFS

> The price for both systems is almost the same. Considerations:

> - Number of Spindles: Solution 1 looks like it might have an edge here
> for small sequential reads and writes since there are just twice as
> many spindles.

> - PCI Bus Saturation: Solution 1 also appears to have an edge in case
> we use large sequential reads. Solution 2 would be limited by the dual
> SCSI bus bandwidth of 640MB/s. I doubt we would ever reach that level of
> bandwidth in any random-read or random-write situation, and in our small
> random file scenario I think both systems would perform equally. Any
> comments ?

> - MTBF: Solution 2 has a definite edge. Some numbers:

> MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours

> Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours

> MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours

> Not surprisingly, Solution 2 is twice as reliable. This doesn't take
> into account the novelty of the SATA Raptor drive and the proven track
> record of the SCSI solution. In any case comments on this MTBF point
> are welcomed.

> - RAID Performance: I am not sure about this. In principle both
> solutions should behave the same since we are using SW RAID, but I don't
> know how the fact that SCSI is a shared bus with overhead would affect RAID
> performance. What do you think ? Any ideas as to how to spread the
> RAID 10 in a dual U-320 SCSI scenario ?
> SATA being point-to-point appears to have an edge again, but your
> thoughts are welcomed.

> - Would I get a considerable edge if I used 15k SCSI Drives ? I am not
> totally convinced that SATA is our best choice. Any help is greatly
> appreciated.

One thing you can be relatively sure of is that the SCSI controller
will work well with the mainboard. Also Linux has a long history of
supporting SCSI, while SATA support is new and still being worked on.

For your access scenario, SCSI will also be superior, since SCSI
has supported command queuing for a long time.

I also would not trust the Raptors as I would trust SCSI drives.
The SCSI manufacturers know that SCSI customers expect high
reliability, while the Raptor is more a poor man's race car.

One more argument: You can put Config 2 on a 550W (redundant)
PSU, while Config 1 will need something significantly larger,
also because SATA does not support staggered start-up, while
SCSI does. Is that already factored into the cost?

Arno
 
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

Arno Wagner wrote:

> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
>> Hello all,
>
>> Which of the two following architectures would you choose for a
>> high-perf NFS server in a cluster env? Most of our data ( 80% ) is
>> small ( < 64 KB ) files. Reads and writes are similar and mostly random
>> in nature:
>
>> Architecture 1:
>> Tyan 2882
>> 2xOpteron 246
>> 4 GB RAM
>> 2x80GB SATA ( System )
>> 2x12-Way 3Ware Cards
>> 24x73GB 10k rpm Western Digital Raptors
>> Software RAID 10 on Linux 2.6.x
>> XFS
>
>> Architecture 2:
>> Tyan 2881 with Dual U-320 SCSI
>> 2xOpteron 246
>> 4 GB RAM
>> 2x80GB SATA ( System )
>> 12x146GB Fujitsu 10k SCSI
>> Software RAID 10 on Linux
>> XFS
>
>> The price for both systems is almost the same. Considerations:
>
>> - Number of Spindles: Solution 1 looks like it might have an edge here
>> for small sequential reads and writes since there are just twice as
>> many spindles.
>
>> - PCI Bus Saturation: Solution 1 also appears to have an edge in case
>> we use large sequential reads. Solution 2 would be limited by the dual
>> SCSI bus bandwidth of 640MB/s. I doubt we would ever reach that level of
>> bandwidth in any random-read or random-write situation, and in our small
>> random file scenario I think both systems would perform equally. Any
>> comments ?
>
>> - MTBF: Solution 2 has a definite edge. Some numbers:
>
>> MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours
>
>> Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours
>
>> MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours
>
>> Not surprisingly, Solution 2 is twice as reliable. This doesn't take
>> into account the novelty of the SATA Raptor drive and the proven track
>> record of the SCSI solution. In any case comments on this MTBF point
>> are welcomed.
>
>> - RAID Performance: I am not sure about this. In principle both
>> solutions should behave the same since we are using SW RAID, but I don't
>> know how the fact that SCSI is a shared bus with overhead would affect RAID
>> performance. What do you think ? Any ideas as to how to spread the
>> RAID 10 in a dual U-320 SCSI scenario ?
>> SATA being point-to-point appears to have an edge again, but your
>> thoughts are welcomed.
>
>> - Would I get a considerable edge if I used 15k SCSI Drives ? I am not
>> totally convinced that SATA is our best choice. Any help is greatly
>> appreciated.
>
> One thing you can be relatively sure of is that the SCSI controller
> will work well with the mainboard. Also Linux has a long history of
> supporting SCSI, while SATA support is new and still being worked on.

If he's using 3ware host adapters then "SATA support" is not an
issue--that's handled by the processor on the host adapter and all that the
Linux driver does is give commands to that processor.

Do you have any evidence to present that suggests that 3ware RAID
controllers have problems with any known mainboard?

> For your access scenario, SCSI will also be superior, since SCSI
> has supported command queuing for a long time.

I'm sorry, but it doesn't follow that because SCSI has supported command
queuing for a long time that the performance will be superior.

> I also would not trust the Raptors as I would trust SCSI drives.
> The SCSI manufacturers know that SCSI customers expect high
> reliability, while the Raptor is more a poor man's race car.

Actually a Raptor is an enterprise SCSI drive with an SATA chip on it
instead of a SCSI chip on it. The Raptors aren't "poor man's" _anything_,
they're Western Digital's enterprise drive. WD has chosen to take a risk
and make their enterprise line with SATA instead of SCSI. Are you
suggesting that WD is incapable of producing a reliable drive?

If it was a Seagate Cheetah with an SATA chip would you say that it was
going to be unreliable?

> One more argument: You can put Config 2 on a 550W (redundant)
> PSU, while Config 1 will need something significantly larger,
> also because SATA does not support staggered start-up, while
> SCSI does. Is that already factored into the cost?

Uh, SATA requires one host interface for each drive. Whatever processor is
controlling those host interfaces can most assuredly stagger the startup if
that is an issue.

Not saying that SCSI is not the superior solution but the reasons given seem
to be ignoring the fact that a "smart" SATA RAID controller is being
compared with a "dumb" SCSI setup.

> Arno

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
 
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

lmanna@gmail.com wrote:
> Hello all,
>
> Which of the two following architectures would you choose for a
> high-perf NFS server in a cluster env? Most of our data ( 80% ) is
> small ( < 64 KB ) files. Reads and writes are similar and mostly
> random in nature:

I wouldn't use either one of them since your major flaw would be using an
Opteron when you should only be using Xeon or Itanium2 processors. Now, if
you are just putting an MP3 server in the basement of your home for
light-duty work you can squeak by with the Opterons. As for the drives, I
would only use SCSI in the system you mention.



Rita
 
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

On 26 Mar 2005 01:01:12 -0800, lmanna@gmail.com wrote:


>> also because SATA does not support staggered start-up, while
>> SCSI does. Is that already factored into the cost?
>
> This I don't follow, what's staggered start-up ?
>

It is a feature that staggers the spinup of each disk sequentially
leaving enough time between disk starts to prevent overloading the
power supply. I think he meant that because he believed SATA does not
do this you would need a beefier power supply than you would with the
scsi setup to avoid problems on powerup.

AFAIK delay start or staggered spinup (whatever you want to call it)
is available on SATA but it is controller specific (& most don't
support it) and it is not a standard feature like the delay start &
remote start jumpers on scsi drives & backplanes.
 
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

Opteron is not a processor to be taken seriously ???? Any backing
with hard numbers for what you're saying ? We have a whole 64-node dual
Opteron cluster running 64-bit applications for more than a year and
it's been not only reliable but, given the nature of our applications,
crucial at a time when Intel was sleeping on their 32-bit laurels and
convincing the industry and neophytes that 64-bit equals Itanium only.
I applaud AMD for their screw-Intel approach, giving folks like us a
great cost-effective 64-bit option. If the Opteron wasn't successful,
Intel would never have come up with the 64-bit Xeon; their mantra would
have been "Buy Itanium". Have you tried to cost out a 64-node dual
Itanic lately ?? Moreover, our current file-servers are Xeon based and
we don't feel confident in their running a 64-bit OS and/or XFS.

The only consideration I had for the Xeons was their wider choice of
mobo availability, and the new boards with 4x, 8x and 16x PCI-Express
options, which might prevent PCI bus saturation in some extreme video
streaming or large sequential read applications, which is not the case
in our scenario. You might also need 10Gb Ethernet to cope with such a
data stream.

Parsifal
 
Archived from groups: comp.sys.ibm.pc.hardware.storage

In comp.sys.ibm.pc.hardware.storage J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
> Arno Wagner wrote:

>> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
>>> Hello all,
[...]
>> One thing you can be relatively sure of is that the SCSI controller
>> will work well with the mainboard. Also Linux has a long history of
>> supporting SCSI, while SATA support is new and still being worked on.

> If he's using 3ware host adapters then "SATA support" is not an
> issue--that's handled by the processor on the host adapter and all that the
> Linux driver does is give commands to that processor.

> Do you have any evidence to present that suggests that 3ware RAID
> controllers have problems with any known mainboard?

No. I was mostly thinking of SMART support, which is not there
for SATA on Linux (unless you use the old IDE driver). Normal disk
access works fine in my experience.

>> For your access scenario, SCSI will also be superior, since SCSI
>> has supported command queuing for a long time.

> I'm sorry, but it doesn't follow that because SCSI has supported command
> queuing for a long time that the performance will be superior.

Actually for small reads command queuing helps massively. The
"has been available for a long time" just means that it will work.

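A toy model of the effect: with a queue the drive can service the
nearest pending request first instead of taking them in arrival order,
which cuts the average seek distance a lot. (The track count, queue
depth and nearest-first policy here are illustrative assumptions; real
queuing also exploits rotational position.)

    import random

    random.seed(1)
    TRACKS, N = 10000, 5000
    reqs = [random.randrange(TRACKS) for _ in range(N)]

    def avg_seek(depth):
        head, total, pending = 0, 0, reqs[:]
        while pending:
            # serve the closest of the first `depth` outstanding requests
            nxt = min(pending[:depth], key=lambda t: abs(t - head))
            total += abs(nxt - head)
            head = nxt
            pending.remove(nxt)
        return total / float(N)

    print(avg_seek(1))    # FIFO: ~3300 tracks per request on average
    print(avg_seek(16))   # depth-16 queue: several times shorter
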
>> I also would not trust the Raptors as I would trust SCSI drives.
>> The SCSI manufacturers know that SCSI customers expect high
>> reliability, while the Raptor is more a poor man's race car.

> Actually a Raptor is an enterprise SCSI drive with an SATA chip on it
> instead of a SCSI chip on it. The Raptors aren't "poor man's" _anything_,
> they're Western Digital's enterprise drive. WD has chosen to take a risk
> and make their enterprise line with SATA instead of SCSI. Are you
> suggesting that WD is incapable of producing a reliable drive?

I am suggesting that WD's strategy is suspicious. It may be up
to SCSI standards, but I have doubts. SATA is far too new to compete
with SCSI on reliability and compatibility. And SCSI has a lot of
features working for decades now that are still being implemented
or are being planned for SATA.

> If it was a Seagate Cheetah with an SATA chip would you say that it was
> going to be unreliable?

At least not as reliable as SCSI. The whole SATA technology is not as
mature as SCSI is. It is also not as well designed.

>> One more argument: You can put Config 2 on a 550W (redundant)
>> PSU, while Config 1 will need something significantly larger,
>> also because SATA does not support staggered start-up, while
>> SCSI does. Is that already factored into the cost?

> Uh, SATA requires one host interface for each drive. Whatever processor is
> controlling those host interfaces can most assuredly stagger the startup if
> that is an issue.

The problem is that most (all?) SATA disks start themselves, while
in SCSI that is usually a jumper-option. Typical options are auto-start,
auto-start with a selectable delay, and no auto-start. On SATA
you would have to do staggered power or the like to get the same
effect.

> Not saying that SCSI is not the superior solution but the reasons
> given seem to be ignoring the fact that a "smart" SATA RAID
> controller is being compared with a "dumb" SCSI setup.

Not really. It is more a relatively new, supposedly smart technology
against a proven, older, reliable, known to be smart technology.
SCSI targets are really quite smart, while SATA targets are not too
bright. The 3ware controllers may help some, but I doubt they
can do that much.

In addition the kernel knows how to talk to SCSI targets, while SATA is
still in flux. Data transfer on SATA works, but everything else is
still being worked on, like SMART support.

The RAID logic is pretty smart in both cases, since it is done by the
kernel, but with this many disks you _will_ want to
poll defect lists/counts, drive temperature and the like
periodically to get early warnings.
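
Something along these lines works for the polling (a sketch only: the
device names are hypothetical, and behind a RAID card smartctl may need
controller-specific device syntax):

    import subprocess

    def drive_temp(dev):
        # smartctl -A prints the SMART attribute table; attribute 194
        # (Temperature_Celsius) carries the reading in the raw-value column
        out = subprocess.run(["smartctl", "-A", dev],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "Temperature_Celsius" in line:
                return int(line.split()[9])
        return None

    for dev in ["/dev/sda", "/dev/sdb"]:   # hypothetical drive list
        print(dev, drive_temp(dev))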

Arno
 
Archived from groups: comp.sys.ibm.pc.hardware.storage

Arno Wagner wrote:

> In comp.sys.ibm.pc.hardware.storage J. Clarke
> <jclarke.usenet@snet.net.invalid> wrote:
>> Arno Wagner wrote:
>
>>> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
>>>> Hello all,
> [...]
>>> One thing you can be relatively sure of is that the SCSI controller
>>> will work well with the mainboard. Also Linux has a long history of
>>> supporting SCSI, while SATA support is new and still being worked on.
>
>> If he's using 3ware host adapters then "SATA support" is not an
>> issue--that's handled by the processor on the host adapter and all that
>> the Linux driver does is give commands to that processor.
>
>> Do you have any evidence to present that suggests that 3ware RAID
>> controllers have problems with any known mainboard?
>
> No. I was mostly thinking of SMART support, which is not there
> for SATA on Linux (unless you use the old IDE driver). Normal disk
> access works fine in my experience.

Actually, that would be a function of the 3ware drivers. With a 3ware host
adapter you do not use the SATA drivers, you use drivers specific to 3ware,
and the 3ware drivers _do_ support SMART under Linux.

>>> For your access scenario, SCSI will also be superior, since SCSI
>>> has supported command queuing for a long time.
>
>> I'm sorry, but it doesn't follow that because SCSI has supported command
>> queuing for a long time that the performance will be superior.
>
> Actually for small reads command queuing helps massively. The
> "has been available for a long time" just means that it will work.

So where is the evidence that SCSI command queuing works better for small
reads than does SATA command queuing? In the absence of other evidence one
might assume that SATA command queuing benefits from "lessons learned" with
SCSI.

>>> I also would not trust the Raptors as I would trust SCSI drives.
>>> The SCSI manufacturers know that SCSI customers expect high
>>> reliability, while the Raptor is more a poor man's race car.
>
>> Actually a Raptor is an enterprise SCSI drive with an SATA chip on it
>> instead of a SCSI chip on it. The Raptors aren't "poor man's"
>> _anything_,
>> they're Western Digital's enterprise drive. WD has chosen to take a risk
>> and make their enterprise line with SATA instead of SCSI. Are you
>> suggesting that WD is incapable of producing a reliable drive?
>
> I am suggesting that WD's strategy is suspicious.

Why? They see SATA as the coming thing. Are you suggesting that Western
Digital is incapable of producing a SCSI drive?

> It may be up
> to SCSI standards, but I have doubts. SATA is far too new to compete
> with SCSI on reliability

Reliability in a disk is primarily a function of the mechanical components,
not the interface. It is quite possible to put a bridge-chip on a Cheetah
that carries the existing SCSI interface into an SATA interface. Would
that drive then be less reliable than the Cheetah that was not plugged into
a bridge chip? Or are you suggesting that the state of the art in the
manufacture of integrated circuits is such that for some reason a chip
containing the circuits that support SATA is more likely to fail in service
than one that contains the circuits that support SCSI?

> and compatibility. And SCSI has a lot of
> features working for decades now that are still being implemented
> or are being planned for SATA.

Such as?

>> If it was a Seagate Cheetah with an SATA chip would you say that it was
>> going to be unreliable?
>
> At least not as reliable as SCSI. The whole SATA technology is not as
> mature as SCSI is. It is also not as well designed.

In what specific ways?

>>> One more argument: You can put Config 2 on a 550W (redundant)
>>> PSU, while Config 1 will need something significantly larger,
>>> also because SATA does not support staggered start-up, while
>>> SCSI does. Is that already factored into the cost?
>
>> Uh, SATA requires one host interface for each drive. Whatever processor
>> is controlling those host interfaces can most assuredly stagger the
>> startup if that is an issue.
>
> The problem is that most (all?) SATA disks start themselves,

Raptors have a jumper that selects startup in full power mode or startup in
standby, intended specifically to address this issue.

> while
> in SCSI that is usually a jumper-option. Typical options are auto-start,
> auto-start with a selectable delay, and no auto-start. On SATA
> you would have to do staggered power or the like to get the same
> effect.

Just tell the drive to come out of standby whenever you are ready.

>> Not saying that SCSI is not the superior solution but the reasons
>> given seem to be ignoring the fact that a "smart" SATA RAID
>> controller is being compared with a "dumb" SCSI setup.
>
> Not really. It is more a relatively new, supposedly smart technology
> against a proven, older, reliable, known to be smart technology.
> SCSI targets are really quite smart, while SATA targets are not too
> bright. The 3ware controllers may help some, but I doubt they
> can do that much.

You have made enough statements about SATA that are simply not true that I
wonder at the validity of your assessment.

> In addition the kernel knows how to talk to SCSI targets, while SATA is
> still in flux. Data transfer on SATA works, but everything else is
> still being worked on, like SMART support.

So let's see, you'd favor the use of a brand new LSI Logic SCSI RAID
controller over a brand new LSI Logic SATA RAID controller because "the
kernel knows how to talk to SCSI targets" despite the fact that both
devices use brand new drivers?

You're assuming that all contact with drives is via the SCSI or SATA kernel
drivers and not through a dedicated controller with drivers specific to
that controller.

> The RAID logic is pretty smart in both cases, since it is done by the
> kernel, but with this many disks you _will_ want to
> poll defect lists/counts, drive temperature and the like
> periodically to get early warnings.

With the 3ware host adapter, the RAID logic is ON THE BOARD, _not_ in the
kernel.

The same is true for SATA RAID controllers from LSI Logic, Intel, Tekram,
and several other vendors.

> Arno

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
 
Archived from groups: comp.sys.ibm.pc.hardware.storage

Previously lmanna@gmail.com wrote:

> J. Clarke wrote:
>> Arno Wagner wrote:
[...]
>> Uh, SATA requires one host interface for each drive. Whatever processor is
>> controlling those host interfaces can most assuredly stagger the startup if
>> that is an issue.
>>
>> Not saying that SCSI is not the superior solution but the reasons given seem
>> to be ignoring the fact that a "smart" SATA RAID controller is being
>> compared with a "dumb" SCSI setup.


> Good point. Would the SCSI performance improve if I used a dual U-320
> super duper SCSI RAID card ? Since the RAID was going to be in SW
> anyway, I didn't see the reason for getting such a card. I had no other
> choice with the SATA solution though.

Don't think so. Your set-up will spend most of its time waiting for seeks
and rotational latency anyway, IMO. Maybe if you put the RAID1
mirrors on separate channels that would bring some write speed
improvements.
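
One way to lay that out, sketched as a little generator for the mdadm
commands (the device names and md numbering are hypothetical; the
layout is a RAID0 stripe over RAID1 pairs, one disk of each pair per
U-320 channel):

    chan_a = ["/dev/sd%s" % c for c in "abcdef"]   # disks on channel A
    chan_b = ["/dev/sd%s" % c for c in "ghijkl"]   # disks on channel B

    mirrors = []
    for i, (a, b) in enumerate(zip(chan_a, chan_b)):
        md = "/dev/md%d" % (i + 1)
        mirrors.append(md)
        print("mdadm --create %s --level=1 --raid-devices=2 %s %s" % (md, a, b))

    print("mdadm --create /dev/md0 --level=0 --raid-devices=6 "
          + " ".join(mirrors))

That way each mirror survives a channel failure, and every write's two
copies go down different buses.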

Arno
 
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
> Arno Wagner wrote:
>> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:

>>
>> One thing you can be relatively sure of is that the SCSI controller
>> will work well with the mainboard. Also Linux has a long history of
>> supporting SCSI, while SATA support is new and still being worked on.
>>
>> For your access scenario, SCSI will also be superior, since SCSI
>> has supported command queuing for a long time.
>>
>> I also would not trust the Raptors as I would trust SCSI drives.
>> The SCSI manufacturers know that SCSI customers expect high
>> reliability, while the Raptor is more a poor man's race car.


> My main concern is their novelty, rather than their performance. Call
> it a hunch but it just doesn't feel right to risk it while there's a
> proven solid SCSI solution for the same price.

>>
>> One more argument: You can put Config 2 on a 550W (redundant)
>> PSU, while Config 1 will need something significantly larger,

> Thanks for your comments. I forgot about the Power. Definitely worth
> considering since we're getting 3 of these servers and UPS sizing
> should also play in the cost equation.

Power is critical to reliability. If you have a PSU with, say
50% normal and 70% peak load, that is massively more reliable than
one with 70%/100%. Also many PSUs die on start-up, since e.g.
disks draw their peak currents on spindle start.

>> also because SATA does not support staggered start-up, while
>> SCSI does. Is that already factored into the cost?

> This I don't follow, what's staggered start-up ?

You can jumper most (all?) SCSI drives to delay their spindle-start.
Spindle start results in a massive amount of power drawn for some
seconds. Maybe as much as 2-3 times the peaks you see during operation.

SCSI drives can be jumpered to spin-up on power-on or on receiving
a start-unit command. Some also support delays. You should be
able to set the SCSI controller to issue the start-unit command
to the drives with, say, 5 seconds delay between each unit or so.
This massively reduces power drawn on start-up.

SATA drives all (?) do spin-up on power-on. It is a problem
when you have many disks. The PSU needs the reserves to deal
with this worst case.
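
Back-of-envelope, with assumed per-drive currents (illustrative
figures, not from any datasheet):

    SPINUP_A, RUN_A, DRIVES = 2.5, 0.6, 24   # amps on the 12V rail

    all_at_once = DRIVES * SPINUP_A * 12                  # ~720 W peak at power-on
    staggered = (SPINUP_A + (DRIVES - 1) * RUN_A) * 12    # ~196 W worst case
    print(all_at_once, staggered)

which is why staggered start lets you size the PSU much closer to the
running load.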

Arno
 
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

Arno Wagner wrote:

> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
>> Arno Wagner wrote:
>>> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
>
>>>
>>> One thing you can be relatively sure of is that the SCSI controller
>>> will work well with the mainboard. Also Linux has a long history of
>>> supporting SCSI, while SATA support is new and still being worked on.
>>>
>>> For your access scenario, SCSI will also be superior, since SCSI
>>> has supported command queuing for a long time.
>>>
>>> I also would not trust the Raptors as I would trust SCSI drives.
>>> The SCSI manufacturers know that SCSI customers expect high
>>> reliability, while the Raptor is more a poor man's race car.
>
>
>> My main concern is their novelty, rather than their performance. Call
>> it a hunch but it just doesn't feel right to risk it while there's a
>> proven solid SCSI solution for the same price.
>
>>>
>>> One more argument: You can put Config 2 on a 550W (redundant)
>>> PSU, while Config 1 will need something significantly larger,
>
>> Thanks for your comments. I forgot about the Power. Definitely worth
>> considering since we're getting 3 of these servers and UPS sizing
>> should also play in the cost equation.
>
> Power is critical to reliability. If you have a PSU with, say
> 50% normal and 70% peak load, that is massively more reliable than
> one with 70%/100%. Also many PSUs die on start-up, since e.g.
> disks draw their peak currents on spindle start.
>
>>> also because SATA does not support staggered start-up, while
>>> SCSI does. Is that already factored into the cost?
>
>> This I don't follow, what's staggered start-up ?
>
> You can jumper most (all?) SCSI drives to delay their spindle-start.
> Spindle start results in a massive amount of power drawn for some
> seconds. Maybe as much as 2-3 times the peaks you see during operation.
>
> SCSI drives can be jumpered to spin-up on power-on or on receiving
> a start-unit command. Some also support delays. You should be
> able to set the SCSI controller to issue the start-unit command
> to the drives with, say, 5 seconds delay between each unit or so.
> This massively reduces power drawn on start-up.
>
> SATA drives all (?) do spin-up on power-on. It is a problem
> when you have many disks. The PSU needs the reserves to deal
> with this worst case.

Would you do the world a favor and actually take ten minutes to research
your statements before you make them? All SATA drives sold as "enterprise"
drives have the ability to perform staggered spinup.

> Arno

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
 
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

In comp.sys.ibm.pc.hardware.storage "Rita Ä Berkowitz" <ritaberk2O04 @aol.com> wrote:
> lmanna@gmail.com wrote:
>> Hello all,
>>
>> Which of the two following architectures would you choose for a
>> high-perf NFS server in a cluster env? Most of our data ( 80% ) is
>> small ( < 64 KB ) files. Reads and writes are similar and mostly
>> random in nature:

> I wouldn't use either one of them since your major flaw would be using an
> Opteron when you should only be using Xeon or Itanium2 processors.

Sorry, but that is BS. Itanium is mostly dead technology and not
really developed anymore. It is also massively over-priced. Xeons are
sort of not-quite 64 bit CPUs, that have the main characteristic of
being Intel and expensive.

I also know of no indications (except marketing BS by Intel) that
Opterons are unreliable.

Arno
 
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

Arno Wagner wrote:

> Sorry, but that is BS. Itanium is mostly dead technology and not
> really developed anymore. It is also massively over-priced. Xeons are
> sort of not-quite 64 bit CPUs, that have the main characteristic of
> being Intel and expensive.

You need to catch up with the times. You are correct about the original
Itaniums being dogs, but I'm talking about the new Itanium2 processors,
which are also 64-bit. As for Intel being expensive, you get what you pay
for. The new Itanium2 systems are SWEEEEEEET!

> I also know of no indications (except marketing BS by Intel) that
> Opterons are unreliable.

It's being proven in the field daily. You simply don't see Opteron-based
solutions being deployed by major commercial and governmental entities.
True, there are a few *novelty* systems that use many Opteron processors,
but they are more a curiosity than the mainstream norm. That said, if I
wanted a dirt-cheap gaming system I would opt for an Opteron-based SATA box.





Rita
 
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

In comp.sys.ibm.pc.hardware.storage "Rita Ä Berkowitz" <ritaberk2O04 @aol.com> wrote:
> Arno Wagner wrote:

>> Sorry, but that is BS. Itanium is mostly dead technology and not
>> really developed anymore. It is also massively over-priced. Xeons are
>> sort of not-quite 64 bit CPUs, that have the main characteristic of
>> being Intel and expensive.

> You need to catch up with the times. You are correct about the original
> Itaniums being dogs, but I'm talking about the new Itanium2 processors,
> which are also 64-bit. As for Intel being expensive, you get what you pay
> for. The new Itanium2 systems are SWEEEEEEET!

You recommend a _new_ product for its reliability????
I don't think I need to comment on that.

>> I also know of no indications (except marketing BS by Intel) that
>> Opterons are unreliable.

> It's being proven in the field daily. You simply don't see Opteron-based
> solutions being deployed by major commercial and governmental entities.

Which is a direct result of Intel's FUD and behind-the-scenes politics.
In order to prove that something is unreliable it has to be used and
fail. Its not being used does not indicate unreliability. It just
indicates "nobody gets fired for buying Intel".

So nothing is actually proven about reliability (or lack of)
of Opterons in the field.

> True, there are a few *novelty* systems that use many Opteron
> processors, but they are more a curiosity than the mainstream
> norm. That said, if I wanted a dirt-cheap gaming system I would opt
> for an Opteron-based SATA box.

That is certainly true. As always the question is to get the
right balance for a specific application. If you have the money
to buy the most expensive solution _and_ the clout to make the
vendor not just rip you off, you certainly will get an adequate
solution. But you will pay too much. Not all of us can afford
to buy stuff the way the military does.

Arno
 
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

Rita Ä Berkowitz wrote:

[nothing very significant]

One really needs hip-boots to wade through the manure of these last few
posts.

1. Opteron systems have reliability comparable to Xeon systems, and if
they lag Itanics by any margin at all it's not by much (Itanics do have
a couple of additional internal RAS features that Opterons and Xeons
lack, but the differences are not major ones).

2. While Intel didn't do as excellent a job of adding 64-bit support to
Xeons as AMD did with AMD64, once again the difference is not a dramatic
one.

3. The first Itanic wasn't just a dog, it was an absolute joke.
McKinley and Madison are much more respectable but still consume
inordinate amounts of power and are in general not performance-leading
products: while the newest Madisons managed to regain a very small lead
in SPECfp over POWER5 that's the only major benchmark they lead in (at
least where the competition has bothered to show up: HP has done a fine
job of carefully selecting specific benchmark niches which lacked such
competition, though it has been a bit embarrassed in cases where it
subsequently appeared), and Itanic often winds up not in second place
but in third or even fourth behind POWER (not just POWER5 but often
behind POWER4+ as well in commercial benchmarks), Opteron, Xeon, and/or
SPARC64 - and for a year or so the top-of-the-line 1.5 GHz Madisons
couldn't even beat the aging and orphaned previous-process-generation
Alpha in SAP SD 2-tier, though they're now a bit ahead of it (this was
the only commercial benchmark HP was willing to allow EV7 to compete in:
it made Itanic look bad, but they needed it to beat the POWER4 score
there).

And that's for benchmarks, where the code has been profiled and
optimized to within an inch of its life. Itanic is more dependent on
such optimization to achieve a given level of performance than its more
flexible out-of-order competition is, and hence falls farther behind
their performance levels in real-world situations where much code is not
so optimized.

4. Nonetheless, Itanic is not an abandoned product. While its eventual
success or failure is still to be determined, Intel is at least
currently still pouring money, engineers, and time into it (though
apparently not at quite the rate it was earlier: in the past year it's
cut a new Itanic chipset from its plans which would have allowed faster
bus speeds and axed a new Itanic core that the transplanted Alpha team
was building for 2007, and what those engineers are now working on may
or may not be Itanic-related).

- bill
 
Archived from groups: comp.sys.ibm.pc.hardware.storage

Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
> Arno Wagner wrote:

>> In comp.sys.ibm.pc.hardware.storage J. Clarke
>> <jclarke.usenet@snet.net.invalid> wrote:
>>> Arno Wagner wrote:
>>
>>>> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
>>>>> Hello all,
>> [...]
>>>> One thing you can be relatively sure of is that the SCSI controller
>>>> will work well with the mainboard. Also Linux has a long history of
>>>> supporting SCSI, while SATA support is new and still being worked on.
>>
>>> If he's using 3ware host adapters then "SATA support" is not an
>>> issue--that's handled by the processor on the host adapter and all that
>>> the Linux driver does is give commands to that processor.
>>
>>> Do you have any evidence to present that suggests that 3ware RAID
>>> controllers have problems with any known mainboard?
>>
>> No. I was mostly thinking of SMART support, which is not there
>> for SATA on Linux (unless you use the old IDE driver). Normal disk
>> access works fine in my experience.

> Actually, that would be a function of the 3ware drivers. With a 3ware host
> adapter you do not use the SATA drivers, you use drivers specific to 3ware,
> and the 3ware drivers _do_ support SMART under Linux.

And, does that work reliably and with the usual Linux tools,
i.e. smartctl? Would kind of surprise me, since libata does
not have smart support at all at the moment, since the ATA
passthru opcodes have only very recently been defined by the
SCSI T10 committee.

>>>> For your access scenario, SCSI will also be superior, since SCSI
>>>> has supported command queuing for a long time.
>>
>>> I'm sorry, but it doesn't follow that because SCSI has supported command
>>> queuing for a long time that the performance will be superior.
>>
>> Actually for small reads command queuing helps massively. The
>> "has been available for a long time" just means that it will work.

> So where is the evidence that SCSI command queuing works better for small
> reads than does SATA command queuing?

At the moment there is no SATA command queuing under Linux, as you
can quickly discover by looking at the Serial ATA (SATA) Linux
software status report page here:

http://linux.yyz.us/sata/software-status.html

I was not saying that SATA queuing is worse. I was saying (or intended to)
that SCSI has command queuing under Linux while SATA does not currently.

[...]
>> I am suggesting that WD's strategy is suspicious.

> Why? They see SATA as the coming thing. Are you suggesting that Western
> Digital is incapable of producing a SCSI drive?

I am suggesting that WD is trying to create a market between ATA
and SCSI by claiming to be as good as SCSI at SATA prices. If
it sounds too good to be true, it probably is.

>> It may be up
>> to SCSI standards, but I have doubts. SATA is far too new to compete
>> with SCSI on reliability

> Reliability in a disk is primarily a function of the mechanical components,
> not the interface.

It is a driver and software question with newer interfaces as well.
I had numerous problems with SATA under Linux.

[...]
> Raptors have a jumper that selects startup in full power mode or startup in
> standby, intended specifically to address this issue.

Good. And do the 3ware controllers support staggered starts?

>> while
>> in SCSI that is usually a jumper-option. Typical options are auto-start,
>> auto-start with a selectable delay, and no auto-start. On SATA
>> you would have to do staggered power or the like to get the same
>> effect.

> Just tell the drive to come out of standby whenever you are ready.

That should be something the controller and the drive do. If
the OS does it, it can fail in numerous interesting ways.

>>> Not saying that SCSI is not the superior solution but the reasons
>>> given seem to be ignoring the fact that a "smart" SATA RAID
>>> controller is being compared with a "dumb" SCSI setup.
>>
>> Not really. It is more a relatively new, supposedly smart technology
>> against a proven, older, reliable, known to be smart technology.
>> SCSI targets are really quite smart, while SATA targets are not too
>> bright. The 3ware controllers may help some, but I doubt they
>> can do that much.

> You have made enough statements about SATA that are simply not true that I
> wonder at the validity of your assessment.

Of course you are free to do that. But I have 4TB of RAIDed storage
under Linux, about half of which is SATA. And I did run into the problems
I describe here.

>> In addition the kernel knows how to talk to SCSI targets, while SATA is
>> still in flux. Data transfer on SATA works, but everything else is
>> still being worked on, like SMART support.

> So let's see, you'd favor the use of a brand new LSI Logic SCSI RAID
> controller over a brand new LSI Logic SATA RAID controller because "the
> kernel knows how to talk to SCSI targets" despite the fact that both
> devices use brand new drivers?

You are talking about the LL drivers. There is an SCSI abstraction
layer in the kernel as well as an SATA abstraction layer. The former
is stable, proven and full-featured. The latter is pretty basic at
the moment.

To quote the maintainer:

Basic Serial ATA support

The "ATA host state machine", the core of the entire driver, is
considered production-stable.

The error handling is very simple, but at this stage that is an
advantage. Error handling code anywhere is inevitably both complex and
sorely under-tested. libata error handling is intentionally
simple. Positives: Easy to review and verify correctness. Never data
corruption. Negatives: if an error occurs, libata will simply send the
error back the block layer. There are limited retries by the block
layer, depending on the type of error, but there is never a bus reset.

> You're assuming that all contact with drives is via the SCSI or SATA kernel
> drivers and not through a dedicated controller with drivers specific to
> that controller.

See above. Also if specific drivers are needed for specific
hardware, they tend to be less reliable because the user-base is
smaller.

>> The RAID logic is pretty smart in both cases, since it is done by the
>> kernel, but with this many disks you _will_ want to
>> poll defect lists/counts, drive temperature and the like
>> periodically to get early warnings.

> With the 3ware host adapter, the RAID logic is ON THE BOARD, _not_ in the
> kernel.

Not in the set-up of the OP. You did read that, didn't you?

Seems to me we have a misunderstanding here. If the OP
wanted to do Hardware-RAID the assessment would look
different.

Arno
 

flux

Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage

In article <114b3ubcrc6am5e@news.supernews.com>,
"Rita Ä Berkowitz" <ritaberk2O04 @aol.com> wrote:

> Arno Wagner wrote:
>
> > Sorry, but that is BS. Itanium is mostly dead technology and not
> > really developed anymore. It is also massively over-priced. Xeons are
> > sort of not-quite 64 bit CPUs, that have the main characteristic of
> > being Intel and expensive.
>
> You need to catch up with the times. You are correct about the original
> Itaniums being dogs, but I'm talking about the new Itanium2 processors,
> which are also 64-bit. As for Intel being expensive, you get what you pay
> for. The new Itanium2 systems are SWEEEEEEET!
>
> > I also know of no indications (except marketing BS by Intel) that
> > Opterons are unreliable.
>
> It's being proven in the field daily. You simply don't see Opteron based
> solutions being deployed by major commercial and governmental entities.
> True, there are a few *novelty* systems that use many Opteron processors,
> but they are more a curiosity than the mainstream norm. That said, if I
> wanted a dirt-cheap gaming system I would opt for an Opteron based SATA box.

April Fool's a week early?
 
Guest
Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

In comp.sys.ibm.pc.hardware.storage J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
> Arno Wagner wrote:

>> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
>>> Arno Wagner wrote:
>>>> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
>>
>>>>
>>>> One thing you can be relatively sure of is that the SCSI controller
>>>> will work well with the mainboard. Also Linux has a long history of
>>>> supporting SCSI, while SATA support is new and still being worked on.
>>>>
>>>> For your access scenario, SCSI will also be superior, since SCSI
>>>> has supported command queuing for a long time.
>>>>
>>>> I also would not trust the Raptors as much as I would trust SCSI drives.
>>>> The SCSI manufacturers know that SCSI customers expect high
>>>> reliability, while the Raptor is more a poor man's race car.
>>
>>
>>> My main concern is their novelty, rather than their performance. Call
>>> it a hunch but it just doesn't feel right to risk it while there's a
>>> proven solid SCSI solution for the same price.
>>
>>>>
>>>> One more argument: You can put Config 2 on a 550W (redundant)
>>>> PSU, while Config 1 will need something significantly larger,
>>
>>> Thanks for your comments. I forgot about the Power. Definitely worth
>>> considering since we're getting 3 of these servers and UPS sizing
>>> should also play in the cost equation.
>>
>> Power is critical to reliability. If you have a PSU with, say
>> 50% normal and 70% peak load, that is massively more reliable than
>> one with 70%/100%. Also many PSUs die on start-up, since e.g.
>> disks draw their peak currents on spindle start.
>>
>>>> also because SATA does not support staggered start-up, while
>>>> SCSI does. Is that already factored into the cost?
>>
>>> This I don't follow, what's staggered start-up ?
>>
>> You can jumper most (all?) SCSI drives to delay their spindle start.
>> Spindle start results in a massive amount of power drawn for some
>> seconds. Maybe as much as 2-3 times the peaks you see during operation.
>>
>> SCSI drives can be jumpered to spin up on power-on or on receiving
>> a start-unit command. Some also support delays. You should be
>> able to set the SCSI controller to issue the start-unit command
>> to the drives with, say, 5 seconds delay between each unit or so.
>> This massively reduces power drawn on start-up.
>>
>> SATA drives all (?) do spin-up on power-on. It is a problem
>> when you have many disks. The PSU needs the reserves to deal
>> with this worst case.

> Would you do the world a favor and actually take ten minutes to research
> your statements before you make them?

I marked it with a "(?)" as tentative but not sure. Still, this is
a newsgroup and you get what you pay for. I also don't think "the
world" reads this group.

> All SATA drives sold as "enterprise"
> drives have the ability to perform staggered spinup.

It is not that easy. Depending on the mechanism, you need controller-BIOS
support or the right type of preconfiguration. Just "supports staggered
start-up" does not cut it, especially on a new product type.

Also, just to show the quality of your "research", it took me about one
second to find an "enterprise" disk that does not support staggered
spin-up: the Maxtor MaxLine II Plus. Staggered spin-up is only
in the MaxLine III. How do I know? Because I own one of these and read the
documentation! I guess there will be more of them.

In addition I did not find any specification of how the staggered spin-up
works on a MaxLine III. Does it need controller support? Is it a SATA II
only feature that does not work with an older controller? Can I jumper
it? Will the controller support be there? With SCSI I know, because it
has been a feature for decades.
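
To put rough numbers on why staggering matters for PSU sizing, a quick
back-of-the-envelope calculation in Python (the per-drive wattage and
the spin-up factor are illustrative assumptions, not datasheet values):

# Illustrative assumptions, not datasheet values:
N_DRIVES = 12
STEADY_W = 10.0        # assumed per-drive draw during normal operation
SPINUP_FACTOR = 2.5    # assumed spin-up peak relative to normal draw

# All drives spin up together at power-on (typical SATA behaviour):
simultaneous_peak = N_DRIVES * STEADY_W * SPINUP_FACTOR

# Staggered start: one drive spinning up while the already-started
# drives are only at their normal draw (a conservative upper bound).
staggered_peak = STEADY_W * SPINUP_FACTOR + (N_DRIVES - 1) * STEADY_W

print("simultaneous peak: %.0f W" % simultaneous_peak)  # 300 W
print("staggered peak:    %.0f W" % staggered_peak)     # 135 W

The PSU has to be sized for the worst case, so that difference goes
straight into the power budget, and into the UPS sizing as well.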

Arno
 
Guest
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

In comp.sys.ibm.pc.hardware.storage "Rita Ä Berkowitz" <ritaberk2O04 @aol.com> wrote:
> Arno Wagner wrote:

>>> You need to catch up with the times. You are correct about the
>>> original Itaniums being dogs, but I'm talking about the new Itanium2
>>> processors, which are also 64-bit. As for Intel being expensive,
>>> you get what you pay for. The new Itanium2 systems are SWEEEEEEET!
>>
>> You recommend a _new_ product for its reliability????
>> I don't think I need to comment on that.

> Oh please, come on now! This is like saying BMW introduces a new car this
> year and it is going to be a failure because it uses cutting-edge
> technology without a single shred of old technology behind it. When you
> lift the hood you still see the same old internal combustion engine that
> they used for the last 50 years. The difference is they improved
> manufacturing processes and materials to make the product better. They
> didn't redesign the wheel for the sake of doing so.

> Take a new Itanium2 box for a test drive and you'll open your eyes.

Oh, I agree that it is powerful hardware. But you know, I would rather
have that 10-machine cluster with 10 times the storage that can actually
do the job than this single, gold-plated big iron.

>>>> I also know of no indications (except marketing BS by Intel) that
>>>> Opterons are unreliable.
>>
>>> It's being proven in the field daily. You simply don't see Opteron
>>> based solutions being deployed by major commercial and governmental
>>> entities.
>>
>> Which is a direct result of Intel's FUD and behind-the-scenes politics.
>> In order to prove that something is unreliable it has to be used and
>> fail. It being not used does not indicate unreliability. It just
>> indicates "nobody gets fired for buying Intel".

> Then again, if the box were being used in environments that were life
> dependent such as on the battlefield, reliability is paramount over cost.
> Intel has a proven track record for reliability in the field. I would feel
> safe using an Intel solution over an AMD any day of the week.

So? From what I hear, getting people killed is preferred to
spending lots of money on most battlefields. And if you think
that CPU reliability is the most important question, then
you cannot have much experience with software.

>> So nothing is actually proven about reliability (or lack of)
>> of Opterons in the field.

> Market share has a great way of defining reliability.

Well, that is complete nonsense. Market share does not define any
technical characteristic. Market share could indicate some technical
problem, but in this instance it does not. It rather signifies
"we have always bought Intel".

> It would seem that the major players don't feel comfortable betting
> their livelihood on AMD.

So? And what does that indicate exactly, besides that they just
continue to do what they always did, like any large, conservative
organisation? It does not say anything about the technological
quality of Opterons.

>>> True, there are a few *novelty* systems that use many Opteron
>>> processors, but they are merely a curiosity than the mainstream
>>> norm. That said, if I wanted a dirt-cheap gaming system I would opt
>>> for an Opteron based SATA box.
>>
>> That is certainly true. As always the question is to get the
>> right balance for a specific application. If you have the money
>> to buy the most expensive solution _and_ the clout to make the
>> vendor not just rip you off, you certainly will get an adequate
>> solution. But you will pay too much. Not all of us can afford
>> to buy stuff the way the military does.

> Define "pay too much"? Most people, myself included, would rather pay too
> much upfront instead of being hit afterwards with high maintenance and
> repair costs, not to mention the disastrous outcome of total failure.

If that were so, there would be hard numbers about this out there.
Care to give a reference to a technological study that shows
that AMD is less reliable than Intel to a degree that matters?

> Like I said, you get what you pay for. If the military would go
> totally AMD then I would agree with you. Till that day, AMD is not
> a processor to be taken seriously.

As somebody with now perhaps ~10 CPU-years of actual usage on AMD CPUs
(mostly Athlons) under Linux, I cannot agree. I have had troubles, but
not a single problem because of the CPUs.

Arno
 
Guest
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

Arno Wagner wrote:

>> Take a new Itanium2 box for a test drive and you'll open your eyes.
>
> Oh, I agree that it is powerful hardware. But you know, I rather
> have that 10 machine cluster with 10 times the storage that can
> actually
> do the job than this single, gold-plated big iron.

Of course you would, but the majority of commercial and military entities
disagree with you.

>> Then again, if the box were being used in environments that were life
>> dependent such as on the battlefield, reliability is paramount over
>> cost. Intel has a proven track record for reliability in the field.
>> I would feel safe using an Intel solution over an AMD any day of the
>> week.
>
> So? From what I hear, getting people killed is preferred to
> spending lots of money on most battlefields. And if you think
> that CPU reliability is the most important question, then
> you cannot have much experience with software.

Sorry, software is of no concern to me since that is the other person's
problem. But, then again, there are people who traditionally blame hardware
related problems on the software. The anti-Microsoft crowd comes to mind.

>>> So nothing is actually proven about reliability (or lack of)
>>> of Opterons in the field.
>
>> Market share has a great way of defining reliability.
>
> Well, that is complete nonsense. Market share does not define any
> technical characteristic. Market share could indicate some technical
> problem, but in this instance it does not. It rather signifies
> "we have always bought Intel".

Or more desirably, "we always sell Intel", because that is what customers
who have a clue want.

>> It would seem that the major players don't feel comfortable betting
>> their livelihood on AMD.
>
> So? And what does that indicate exactly, besides that they just
> continue to do what they always did, like any large, conservative
> organisation? It does not say anything about the technological
> quality of Opterons.

But it speaks volumes about the people purchasing the hardware. Not many want
to have egg on their face when the passing fad called the Opteron takes a
dump.

>> Define "pay too much"? Most people, myself included, would rather pay too
>> much upfront instead of being hit afterwards with high maintenance and
>> repair costs, not to mention the disastrous outcome of total failure.
>
> If that were so, there would be hard numbers about this out there.
> Care to give a reference to a technological study that shows
> that AMD is less reliable than Intel to a degree that matters?

I only go by what the majority wants and it surely isn't AMD. And most AMD
zealots wouldn't want to look at the hard numbers even if they bit them in
the ass.

>> Like I said, you get what you pay for. If the military would go
>> totally AMD then I would agree with you. Till that day, AMD is not
>> a processor to be taken seriously.
>
> As somebody with now perhaps ~10 CPU-years of actual usage on AMD CPUs
> (mostly Athlons) under Linux, I cannot agree. I have had troubles, but
> not a single problem because of the CPUs.

I guess it boils down to your expectations of what you want from any
particular CPU. Like I said, if it's gaming and a simple home-based MP3
server for the kiddies then I'll say that AMD is the only choice from a
sheer economics standpoint.



Rita
 
Guest
Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

In comp.sys.ibm.pc.hardware.storage flux <support@fluxsoft.com> wrote:
> In article <114b3ubcrc6am5e@news.supernews.com>,
> "Rita Ä Berkowitz" <ritaberk2O04 @aol.com> wrote:

>> Arno Wagner wrote:
>>
>> > Sorry, but that is BS. Itanium is mostly dead technology and not
>> > really developed anymore. It is also massively over-priced. Xeons are
>> > sort of not-quite 64 bit CPUs, that have the main characteristic of
>> > being Intel and expensive.
>>
>> You need to catch up with the times. You are correct about the original
>> Itaniums being dogs, but I'm talking about the new Itanium2 processors,
>> which are also 64-bit. As for Intel being expensive, you get what you pay
>> for. The new Itanium2 systems are SWEEEEEEET!
>>
>> > I also know of no indications (except marketing BS by Intel) that
>> > Opterons are unreliable.
>>
>> It's being proven in the field daily. You simply don't see Opteron based
>> solutions being deployed by major commercial and governmental entities.
>> True, there are a few *novelty* systems that use many Opteron processors,
>> but they are more a curiosity than the mainstream norm. That said, if I
>> wanted a dirt-cheap gaming system I would opt for an Opteron based SATA box.

> April Fool's a week early?

Probably suppressed machine rage. I know I have some. But then what
do I know, I use AMD CPUs and cheap drives. Probably deserve all
the problems I have ;-)

Arno
 
