Archived from groups: comp.sys.ibm.pc.hardware.chips
On Thu, 24 Jun 2004 02:04:11 GMT, daytripper
<day_trippr@REMOVEyahoo.com> wrote:
>On Wed, 23 Jun 2004 15:33:43 -0400, Tony Hill <hilla_nospam_20@yahoo.ca>
>wrote:
>
>>On Wed, 23 Jun 2004 18:15:35 GMT, Rob Jellinghaus
>><robj@unrealities.com> wrote:
>>>
>>>Vague rumor has it that the first Athlon 64 PCI Express chipsets will
>>>ship in Q3 sometime. Anyone know any more details than that?
>>
>>VIA, nVidia and SiS all have plans for some Athlon64 chipsets with PCI
>>Express. If my memory serves me correctly, they were all supposed to
>>release their products in May or June, though obviously those dates
>>just aren't going to happen. Q3 seems like a reasonable prediction
>>for the first chips, though I wouldn't expect much from either the
>>Intel-processor or AMD-processor systems until Q4 or even Q1 of 2005.
>>
>>PCI Express really seems to be a solution searching for a problem at
>>the moment, so I doubt it will be quick to catch on. At best it
>>just seems like a way to unify AGP, PCI, CSA and maybe even PCI-X into
>>a single bus, eventually making things cheaper (though probably not
>>any better/faster). Of course, the newness factor will keep it expensive
>>for 6-8 months, which is why I doubt we'll see too much PCI
>>Express action this year.
>
>There are sound reasons why PCI Express is a solution for *existing* problems,
Perhaps I should have qualified that I was referring to desktops and
workstations here, i.e. the sorts of systems that will run Athlon64
processors and the sorts of systems that are being targeted (right
now) by PCI-E.
Given that restriction, just what problems does PCI Express solve?
Graphics with AGP 8x is not particularly bandwidth-limited; it's quite
rare that 8x makes any improvement at all over 4x, let alone that
anything needs to go beyond it (in a few years this will change, of
course). Gigabit ethernet is one potential area of concern, but it's
been pushed onto CSA for Intel solutions, and even hanging off the PCI
bus it really isn't all that bad. Hard drive controllers have been
pulled off the PCI bus. So what does that leave? Sound, USB and the
legacy stuff, for the most part.
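Just to put some rough peak numbers behind that (back-of-envelope bus
ceilings only, from memory; real-world throughput is lower):

# Back-of-envelope peak figures (MB/s) for the desktop I/O mentioned
# above; these are bus ceilings, not sustained throughput.
peak_mb_per_s = {
    "PCI 32-bit/33MHz (shared)":  33.3e6 * 4 / 1e6,   # ~133 MB/s
    "Gigabit ethernet (one way)": 1e9 / 8 / 1e6,      # ~125 MB/s
    "AGP 4x":                     266e6 * 4 / 1e6,    # ~1066 MB/s
    "AGP 8x":                     533e6 * 4 / 1e6,    # ~2133 MB/s
}
for bus, mb in sorted(peak_mb_per_s.items(), key=lambda kv: kv[1]):
    print(f"{bus:28s} ~{mb:6.0f} MB/s")

Nothing in that list other than graphics comes anywhere near saturating
what's already on the board, and graphics isn't using what it has.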
>made all the better once you factor in cost/performance. And given that PCI-X
>Mode 2 is an utter non-starter, the parallel PCI bus paradigm was quickly
>running out of gas anyway. Time for a paradigm shift.
>
>PCI Express in its *current* incarnation whups PCI-X Mode 1's ass, never mind
>PCI-E 2.0 or beyond. But the cost of connectivity is where the rubber hits the
>road.
>
>How many PCI-X devices can you hang on a bus? Not many.
>So how do you make more PCI or PCI-X buses?
>Bridges and pins. Lots and lots of pins, and conceivably, many bridges.
>
>otoh, PCI Express allows you to dial up some prodigious bandwidth using very
>few pins, which can dramatically cut down the number of silicon chunks on the
>board.
Long-term cost definitely does favor PCI Express, and that is why I
think it will eventually be a good thing. However, the short-term cost
advantage just isn't there because of the "first-on-the-block" factor.
I don't see any reason to rush out after PCI Express this year.
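fwiw, the pin math does work out roughly the way you describe (peak
figures for one direction; the signal-pin counts are my rough guesses
and ignore clocks and sidebands, so treat the exact values loosely):

# Rough bandwidth-per-signal-pin comparison; pin counts are approximate
# guesses and ignore clock/sideband pins.
links = {
    # name: (peak MB/s, approx. signal pins)
    "PCI-X Mode 1, 64b/133MHz": (1066, 90),   # shared parallel bus
    "PCI Express x4, 2.5GT/s":  (1000, 16),   # ~1 GB/s each direction
    "PCI Express x16":          (4000, 64),
}
for name, (mb, pins) in links.items():
    print(f"{name:26s} {mb:5d} MB/s over ~{pins:3d} pins"
          f" => ~{mb / pins:5.1f} MB/s per pin")

That's several times the bandwidth per pin, which is exactly why the
long-term cost argument favors it.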
>fwiw, I happened to get a tour of a new HP dual Xeon rack mount box recently.
>Had 6 64b PCI-X slots and a single PCI slot, plus the usual assortment of
>embedded PCI devices (graphics, network, server management, legacy IO, etc).
>There were 3 PCI-X bridge chips and a PCI bridge chip, on top of the host
>bus/memory bridge. Many bridges, many many pins.
And under a normal configuration, how many of those slots are actually
used? If it's a big disk box it might have a couple of RAID cards in
there and probably two gigabit ethernet ports. What else needs the
bandwidth?
>otoh, a 3-chip set of a low-end MCH, a PXH, and a 31154 would provide at least
>two PCI Express slots, three 64b/100MHz PCI-X slots, a spot to hang a
>64b/100MHz dual gigabit chip, a couple of 64b/33MHz PCI slots and a hose for
>the integrated SATA, ATA133, VGA, USB2 and server management devices. If you
>needed more PCI Express slots, you could use the beaucoup deluxe version of the
>MCH instead. And they'd both be a hell of a lot easier to route.
>
>I'd rather do PCI Express designs.
>They're easier, they're cheaper. What's not to like? ;-)
What's not to like is that there are virtually no PCI Express cards
out there, and the first ones that show up will carry a price premium
(as will the first boards and systems).
-------------
Tony Hill
hilla <underscore> 20 <at> yahoo <dot> ca