PCI-Express over Cat6

Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

In article <1084204646.576285@teapot.planet.gong>, roo@try-
removing-this.darkboong.demon.co.uk says...
> Andrew Reilly wrote:
> > On Sun, 09 May 2004 03:23:59 +0100, Rupert Pigott wrote:
> >
> >
> >>Andrew Reilly wrote:
> >>
> >>>On Fri, 07 May 2004 21:18:52 +0000, Sander Vesik wrote:
> >>>
> >>>
> >>>
> >>>>In comp.arch Del Cecchi <cecchinospam@us.ibm.com> wrote:
> >>>>
> >>>>>Is latency a big deal writing to a disk or graphics card?
> >>>>>
> >>>>
> >>>>It can easily be for a graphics card.
> >>>
> >>>
> >>>Why? Aren't they write-only devices? Surely any latency
> >>
> >>Off the top of my head, at least two requirements exist, namely
>>screenshots and flyback synchronisation...
> >
> >
> > Both of which appear, on the surface, to be frame-rate type events: i.e.,
> > in the ballpark of the 10ms event time that I mentioned in the part that
> > you snipped. Not a latency issue on the order of memory access or
>
> Hmmm, how about querying the state of an OpenGL rendering pipeline
> that happens to be sitting on the graphics card ? I don't think that
> it's ever been true to say GFX cards are write only, and I'm not sure
> I'd ever want that. :)

Why wouldn't things be rendered in memory and then DMA'd to the
graphics card? Why would the processor *ever* care what's been
sent to the graphics subsystem? I'm from (close enough to)
Missouri, and you're going to have to show us, Rupert.

--
Keith
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

KR Williams wrote:

> Why wouldn't things be rendered in memory and then DMA'd to the
> graphics card?

Because then the rendering process would be eating system memory
bandwidth.

> Why would the processor *ever* care what's been
> sent to the graphics subsystem?

Because it may have to make decisions based upon that information. I
don't know enough about modern graphics hardware to know if it actually does
this, but it has been at times logical to use the graphics hardware to help
you make decisions about other issues. For example, a game may want to
display some information about an object if and only if that object is
visible to you. That may be the graphics card's decision, since it has to
decide that anyway.
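
If the hardware does expose that kind of decision, the OpenGL-level
interface would look something like the occlusion query extension. A
rough sketch, assuming ARB_occlusion_query is present and a GL context
is already current (draw_object_bounding_box() and show_object_label()
are made-up stand-ins for the application's own code):

#include <GL/gl.h>
#include <GL/glext.h>   /* ARB_occlusion_query tokens; entry points assumed resolved */

extern void draw_object_bounding_box(void);   /* cheap proxy for the real object */
extern void show_object_label(void);

void label_if_visible(void)
{
    GLuint query, samples = 0;

    glGenQueriesARB(1, &query);

    glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
    draw_object_bounding_box();
    glEndQueryARB(GL_SAMPLES_PASSED_ARB);

    /* ...submit other work here so the pipeline isn't drained just for this... */

    glGetQueryObjectuivARB(query, GL_QUERY_RESULT_ARB, &samples);
    if (samples > 0)            /* the card says some pixels made it through */
        show_object_label();

    glDeleteQueriesARB(1, &query);
}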

DS
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

KR Williams wrote:
> Rupert Pigott wrote:
>
> > Hmmm, how about querying the state of an OpenGL rendering pipeline
> > that happens to be sitting on the graphics card ? I don't think
> > that it's ever been true to say GFX cards are write only, and I'm
> > not sure I'd ever want that. :)
>
> Why wouldn't things be rendered in memory and then DMA'd to the
> graphics card? Why would the processor *ever* care what's been sent
> to the graphics subsystem?

Why would you have your main processor(s) render a scene when you have a
dedicated graphics processor to do it? In the case of the graphics cores
I've used, reading from the framebuffer is needed to i) make sure the
FIFO has enough spaces for the register writes you're about to issue,
and ii) to synchronize the graphics core and host's usage of video
memory (e.g. so you can reuse a memory buffer once the graphics
operation that was using it has completed). These wouldn't be too
difficult to overcome if interconnect latency became a problem, but as
graphics cards become increasingly flexible there's more useful
information they can provide to the host. Hardware accelerated collision
detection for example.
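
To make those two read-backs concrete, they look roughly like this at
the driver level. The register names and offsets here are invented
purely for illustration; real parts differ:

#include <stdint.h>

#define FIFO_FREE_REG   0x0010   /* hypothetical: free slots left in the command FIFO */
#define ENGINE_BUSY_REG 0x0014   /* hypothetical: non-zero while the core is drawing  */

static volatile uint32_t *mmio;  /* card registers, mapped elsewhere in the driver */

static void wait_for_fifo(unsigned slots_needed)
{
    /* The host has to read the card here; this is where interconnect
       latency would start to hurt. */
    while (mmio[FIFO_FREE_REG / 4] < slots_needed)
        ;
}

static void wait_for_engine_idle(void)
{
    /* Don't let the host reuse a vertex or texture buffer until the core
       has finished the operation that was reading it. */
    while (mmio[ENGINE_BUSY_REG / 4] != 0)
        ;
}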

--
Wishing you good fortune,
--Robin Kay-- (komadori)
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

In comp.sys.ibm.pc.hardware.chips Robin KAY <komadori@myrealbox.com> wrote:
> Why would you have your main processor(s) render a scene
> when you have a dedicated graphics processor to do it?

I think you're talking 3-D while Keith is talking 2-D.

In 3-D there's simply too much drudge work (shading,
perspective) and not enough interaction back to the control
program to need or want the CPU. 2-D is much simpler and often
requires considerable CPU interactivity (CAD) with the display.

-- Robert
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

KR Williams wrote:
> In article <1084204646.576285@teapot.planet.gong>, roo@try-
> removing-this.darkboong.demon.co.uk says...
>
>>Andrew Reilly wrote:
>>
>>>On Sun, 09 May 2004 03:23:59 +0100, Rupert Pigott wrote:
>>>
>>>
>>>
>>>>Andrew Reilly wrote:
>>>>
>>>>
>>>>>On Fri, 07 May 2004 21:18:52 +0000, Sander Vesik wrote:
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>In comp.arch Del Cecchi <cecchinospam@us.ibm.com> wrote:
>>>>>>
>>>>>>
>>>>>>>Is latency a big deal writing to a disk or graphics card?
>>>>>>>
>>>>>>
>>>>>>It can easily be for a graphics card.
>>>>>
>>>>>
>>>>>Why? Aren't they write-only devices? Surely any latency
>>>>
>>>>Off the top of my head, at least two requirements exist, namely
>>>>screenshots and flyback synchronisation...
>>>
>>>
>>>Both of which appear, on the surface, to be frame-rate type events: i.e.,
>>>in the ballpark of the 10ms event time that I mentioned in the part that
>>>you snipped. Not a latency issue on the order of memory access or
>>
>>Hmmm, how about querying the state of an OpenGL rendering pipeline
>>that happens to be sitting on the graphics card ? I don't think that
>>it's ever been true to say GFX cards are write only, and I'm not sure
>>I'd ever want that. :)
>
>
> Why wouldn't things be rendered in memory and then DMA'd to the
> graphics card? Why would the processor *ever* care what's been
> sent to the graphics subsystem? I'm from (close enough to)
> Missouri, and you're going to have to show us, Rupert.

Try starting here :

http://www.opengl.org

Take a look at the spec. There are numerous papers on OpenGL
acceleration hardware too. FWIW I have been quite impressed by
the OpenGL spec, seems to give a lot of freedom to both the
application and the hardware.

For a more generic non-OpenGL biased look at 3D hardware you
might want to check out the following :

"Computer Graphics Principles and Practice",
2nd Edition by Foley/van Dam/Feiner/Hughes,
published by Addison Wesley.

Specifically chapter 18 "Advanced Raster Graphics Architecture"
for a discussion on various (rather nifty) 3D graphics hardware
and chapter 16 "Illumination and Shading" for a heavy hint as to
why it's necessary.

I can also recommend Jim Blinn's articles in IEEE CG&A, last
time I read them was 1995. The articles I read by Blinn were
focussed on software rendering using approximations that were
"good enough" but still allowed him to get his rendering done
before hell froze over. IIRC Blinn had access to machinery that
would *still* eat a modern PC for breakfast in terms of FP and
memory throughput in those apps.

However I think times are a-changing. Life might well get a lot
more interesting when CPU designers start looking for new things
to do because they can't get any decent speed ups on single
thread execution speed. ;)

Cheers,
Rupert
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

In article <tf4oc.7727$ZX2.6238@newssvr24.news.prodigy.com>,
redelm@ev1.net.invalid says...
> In comp.sys.ibm.pc.hardware.chips Robin KAY <komadori@myrealbox.com> wrote:
> > Why would you have your main processor(s) render a scene
> > when you have a dedicated graphics processor to do it?
>
> I think you're talking 3-D while Keith is talking 2-D.

Nope. 3-D is no different. AGP wuz supposed to make the
graphics channel two-way so the graphics card could access main
memory. Do you know anyone that actually does this? Please!
With 32MB (or 128MB) on the graphics card, who cares?
>
> In 3-D there's simply too much drudge work (shading,
> perspective) and not enough interaction back to the control
> program to need or want the CPU. 2-D is much simpler and often
> requires considerable CPU interactivity (CAD) with the display.

Sure, so why does the 3-D card want to go back to main memory,
again? The graphics pipe is amazingly one-directional. ...and
thus not sensitive to latency, any more than in human terms.

--
Keith
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

In article <c7pron$bvf$1@nntp.webmaster.com>,
davids@webmaster.com says...
> KR Williams wrote:
>
> > Why wouldn't things be rendered in memory and then DMA'd to the
> > graphics card?
>
> Because then the rendering process would be eating system memory
> bandwidth.

Nope. You're thinking of AGP (no one uses it, or ever has).
>
> > Why would the processor *ever* care what's been
> > sent to the graphics subsystem?
>
> Because it may have to make decisions based upon that information. I
> don't know enough about modern graphics hardware to know if it actually does
> this, but it has been at times logical to use the graphics hardware to help
> you make decisions about other issues. For example, a game may want to
> display some information about an object if and only if that object is
> visible to you. That may be the graphics card's decision, since it has to
> decide that anyway.

Nope. Any feedback is certainly within human response. Low
latency (in CPU terms) is irrelevant to graphics subsystems.

--
Keith
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

In article <1084278577.759843@teapot.planet.gong>, roo@try-
removing-this.darkboong.demon.co.uk says...
> KR Williams wrote:
> > In article <1084204646.576285@teapot.planet.gong>, roo@try-
> > removing-this.darkboong.demon.co.uk says...
> >
> >>Andrew Reilly wrote:
> >>
> >>>On Sun, 09 May 2004 03:23:59 +0100, Rupert Pigott wrote:
> >>>
> >>>
> >>>
> >>>>Andrew Reilly wrote:
> >>>>
> >>>>
> >>>>>On Fri, 07 May 2004 21:18:52 +0000, Sander Vesik wrote:
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>>In comp.arch Del Cecchi <cecchinospam@us.ibm.com> wrote:
> >>>>>>
> >>>>>>
> >>>>>>>Is latency a big deal writing to a disk or graphics card?
> >>>>>>>
> >>>>>>
> >>>>>>It can easily be for a graphics card.
> >>>>>
> >>>>>
> >>>>>Why? Aren't they write-only devices? Surely any latency
> >>>>
> >>>>Off the top of my head, at least two requirements exist, namely
> >>>>screenshots and flyback synchronisation...
> >>>
> >>>
> >>>Both of which appear, on the surface, to be frame-rate type events: i.e.,
> >>>in the ballpark of the 10ms event time that I mentioned in the part that
> >>>you snipped. Not a latency issue on the order of memory access or
> >>
> >>Hmmm, how about querying the state of an OpenGL rendering pipeline
> >>that happens to be sitting on the graphics card ? I don't think that
> >>it's ever been true to say GFX cards are write only, and I'm not sure
> >>I'd ever want that. :)
> >
> >
> > Why wouldn't things be rendered in memory and then DMA'd to the
> > graphics card? Why would the processor *ever* care what's been
> > sent to the graphics subsystem? I'm from (close enough to)
> > Missouri, and you're going to have to show us, Rupert.
>
> Try starting here :
>
> http://www.opengl.org
>
> Take a look at the spec. There are numerous papers on OpenGL
> acceleration hardware too. FWIW I have been quite impressed by
> the OpenGL spec, seems to give a lot of freedom to both the
> application and the hardware.
>
> For a more generic non-OpenGL biased look at 3D hardware you
> might want to check out the following :
>
> "Computer Graphics Principles and Practice",
> 2nd Edition by Foley/van Dam/Feiner/Hughes,
> published by Addison Wesley.
>
> Specifically chapter 18 "Advanced Raster Graphics Architecture"
> for a discussion on various (rather nifty) 3D graphics hardware
> and chapter 16 "Illumination and Shading" for a heavy hint as to
> why it's necessary.

Why don't you tell us why it's necessary, rather than spewing
some irrelevant web sites. The fact is that the graphics channel
is amazingly unidirectional. The processor sends the commands to
the graphics card and it does its thing in its own memory. AGP
was a wunnerful idea, ten years before it was available.
>
> I can also recommend Jim Blinn's articles in IEEE CG&A, last
> time I read them was 1995. The articles I read by Blinn were
> focussed on software rendering using approximations that were
> "good enough" but still allowed him to get his rendering done
> before hell froze over. IIRC Blinn had access to machinary that
> would *still* eat a modern PC for breakfast in terms of FP and
> memory throughput in those apps.

Oh, my. 1995, there's a recent article. I don't remember: did
graphics cards have 128MB of their own then? Did systems even
have 128MB? Come on! Get with the times.

> However I think times are a-changing. Life might well get a lot
> more interesting when CPU designers start looking for new things
> to do because they can't get any decent speed ups on single
> thread execution speed. ;)

"They" are. ;-) Though you're still wrong about the graphics
pipe. It really isn't latency sensitive, any more than humans
are.

--
Keith
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

"KR Williams" <krw@att.biz> wrote in message
news:MPG.1b0cafe0e25be65798987e@news1.news.adelphia.net...

> In article <c7pron$bvf$1@nntp.webmaster.com>,
> davids@webmaster.com says...

>> KR Williams wrote:
>>
>> > Why wouldn't things be rendered in memory and then DMA'd to the
>> > graphics card?
>>
>> Because then the rendering process would be eating system memory
>> bandwidth.

> Nope. You're thinking of AGP (no one uses it, or ever has).

Okay, then you tell me why things aren't rendered in memory and then
DMA'd to the graphics card.

DS
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

In comp.sys.ibm.pc.hardware.chips KR Williams <krw@att.biz> wrote:
> Nope. 3-D is no different. AGP wuz supposed to make the
> graphics channel two-way so the graphics card could access main
> memory. DO you know anyone that actually does this? PLease!
> With 32MB (or 128MB) on the graphics card, who cares?

I'm not at all sure what point you're trying to make here.
Forgive me if I flounder around a bit. The graphics card
_does_ access main memory. AFAIK, for both 2D & 3D, after
rendering in system RAM the CPU programs the GPU to do BM
(bus-master) DMA to load the framebuffer vram.

No-one in their right mind tries to get the CPU to read
the framebuffer. It is dead slow because vram is very busy
being read to satisfy the refresh rate. It is hard enough for
the GPU to access synchronously and this is what the multiple
planes and the MBs of vram are used for.

> Sure, so why does the 3-D card want to go back to main memory,
> again? The graphics pipe is amazingly one-directional. ...and
> thus not sensitive to latency, any more than in human terms.

My understanding is that in 3-D the advanced functions in
the GPU (perspective & shading) can handle quite a number of
intermediate frames before requiring a reload from system ram.
But it does require a reload. How's the graphics card gonna
know what's behind Door Number Three?

-- Robert

 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

"Robert Redelmeier" <redelm@ev1.net.invalid> wrote in message
news:ihLoc.26799$Rm2.21523@newssvr22.news.prodigy.com...

> In comp.sys.ibm.pc.hardware.chips KR Williams <krw@att.biz> wrote:

>> Nope. 3-D is no different. AGP wuz supposed to make the
>> graphics channel two-way so the graphics card could access main
>> memory. DO you know anyone that actually does this? PLease!
>> With 32MB (or 128MB) on the graphics card, who cares?

> I'm not at all sure what point you're trying to make here.
> Forgive me if I flounder around a bit. The graphics card
> _does_ access main memory. AFAIK, for both 2D & 3D after
> rendering in system RAM the CPU programs the GPU to do BM
> DMA to load the framebuffer vram.

Most current graphics cards render in RAM on the graphics card.
Therefore there is no need to DMA the data into the framebuffer; it's as
simple as changing a pointer for where the framebuffer is located in the
graphics card's RAM. This is true for all but the very cheapest graphics
systems today.

> No-one in their right mind tries to get the CPU to read
> the framebuffer. It is dead slow because vram is very busy
> being read to satisfy the refresh rate. It is hard enough for
> the GPU to access synchonously and this is what the multiple
> planes and the MBs of vram are used for.

Right. Typically the CPU doesn't read the texture memory either, and the
textures only cross the system memory or AGP bus once, to get loaded into
the graphics card's RAM. From there they are applied and rendered wholly on
the graphics card's internal bus.
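
In OpenGL terms that one trip across the bus looks roughly like this
(just a sketch; assumes a working GL context, and draw_textured_quad() is a
stand-in for the application's real geometry):

#include <GL/gl.h>

extern void draw_textured_quad(void);

GLuint load_texture_once(const void *pixels)   /* texels already decoded in system RAM */
{
    GLuint tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* The single crossing of the system-memory/AGP bus; after this the
       driver keeps the texels in the card's own RAM, space permitting. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}

void draw_frame(GLuint tex)
{
    /* Later frames just name the resident texture; the texels themselves
       don't cross the bus again. */
    glBindTexture(GL_TEXTURE_2D, tex);
    draw_textured_quad();
}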

>> Sure, so why does the 3-D card want to go back to main memory,
>> again? The graphics pipe is amazingly one-directional. ...and
>> thus not sensitive to latency, any more than in human terms.

> My understanding is that in 3-D the advanced functions in
> the GPU (perspective & shading) can handle quite a number of
> intermediate frames before requiring a reload from system ram.
> But it does require a reload. How's the graphics card gonna
> know what's behind Door Number Three?

That I don't know the answer to. Can the graphics card say, "this item
is visible, I need more details about it"? I don't think so. I think the
decision of what might be visible is made by the main processor and it must
tell the graphics card about every object or that object will not be
rendered.

DS
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

KR Williams wrote:

[SNIP]

> WHy don't you tell us why it's necessary, rather than spewing
> some irrelevant web sites. THe fact is that the graphics channel

OpenGL.org is hardly irrelevant with respect to 3D apps and
hardware. :/

> is amazingly unidirectional. THe processor sends the commands to
> the graphics card and it does it's thing in its own memory. AGP

No, the fact is : It isn't. I've given you some broad reasons
and I've given you some hints on where to start finding some
specifics.

[SNIP]

> "They" are. ;-) Though you're still wrong about the graphics
> pipe. It really isn't latency sensitive, any more than humans
> are.

As long as you consider sites like opengl.org to be irrelevant
you will continue to think that way regardless of what the
reality is.

Cheers,
Rupert
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

In article <c807u2$5ri$1@nntp.webmaster.com>,
davids@webmaster.com says...
>
> "Robert Redelmeier" <redelm@ev1.net.invalid> wrote in message
> news:ihLoc.26799$Rm2.21523@newssvr22.news.prodigy.com...
>
> > In comp.sys.ibm.pc.hardware.chips KR Williams <krw@att.biz> wrote:
>
> >> Nope. 3-D is no different. AGP wuz supposed to make the
> >> graphics channel two-way so the graphics card could access main
> >> memory. DO you know anyone that actually does this? PLease!
> >> With 32MB (or 128MB) on the graphics card, who cares?
>
> > I'm not at all sure what point you're trying to make here.
> > Forgive me if I flounder around a bit. The graphics card
> > _does_ access main memory. AFAIK, for both 2D & 3D after
> > rendering in system RAM the CPU programs the GPU to do BM
> > DMA to load the framebuffer vram.
>
> Most current graphics cards render in ram on the graphics card.
> Therefore there is no need to DMA the data into the framebuffer, it's as
> simple as changing a pointer for where the framebuffer is located in the
> graphics card's RAM. This is true for all but the very cheapest graphics
> systems today.

Exactly. AGP was an idea that was obsolete by the time it was
implemented. Memory is *cheap*.

--
Keith
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

In article <c7veef$n51$1@nntp.webmaster.com>,
davids@webmaster.com says...
>
> "KR Williams" <krw@att.biz> wrote in message
> news:MPG.1b0cafe0e25be65798987e@news1.news.adelphia.net...
>
> > In article <c7pron$bvf$1@nntp.webmaster.com>,
> > davids@webmaster.com says...
>
> >> KR Williams wrote:
> >>
> >> > Why wouldn't things be rendered in memory and then DMA'd to the
> >> > graphics card?
> >>
> >> Because then the rendering process would be eating system memory
> >> bandwidth.
>
> > Nope. You're thinking of AGP (no one uses it, or ever has).
>
> Okay, then you tell me why things aren't rendered in memory and then
> DMA'd to the graphics card.

Are you slow? They're "rendered" IN THE GRAPHICS CARD'S MEMORY.
Sheesh!

--
Keith
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

In comp.sys.ibm.pc.hardware.chips KR Williams <krw@att.biz> wrote:
> Exactly. AGP was an idea that was obsolete by the time it was
> implemented. Memory is *cheap*.

OK, so stick the graphics card on PCI and free up that AGP
for a gigabit adapter. They normally saturate PCI around
35 MByte/s. Limited burst length prevents achieving the
theoretical PCI 33/32 throughput of 133 MB/s. Gigabit needs
125 MB/s each way.
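
The arithmetic, for anyone who wants to check the numbers above:

#include <stdio.h>

int main(void)
{
    double pci_peak = 33.33 * 4.0;    /* 33 MHz x 4 bytes: ~133 MB/s, bursts only  */
    double gbe_rate = 1000.0 / 8.0;   /* 1000 Mbit/s: 125 MB/s, in each direction  */

    printf("PCI 33/32 theoretical peak : %.0f MB/s\n", pci_peak);
    printf("Gigabit Ethernet line rate : %.0f MB/s per direction\n", gbe_rate);
    /* Real shared-bus PCI with short bursts lands nearer the 35 MB/s figure,
       well short of what one GbE port can ask for. */
    return 0;
}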

-- Robert

 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

In article <vgWoc.8474$4V7.5557@newssvr24.news.prodigy.com>,
redelm@ev1.net.invalid says...
> In comp.sys.ibm.pc.hardware.chips KR Williams <krw@att.biz> wrote:
> > Exactly. AGP was an idea that was obsolete by the time it was
> > implemented. Memory is *cheap*.
>
> OK, so stick the graphics card on PCI and free up that AGP
> for a gigabit adapter. They normally saturate PCI around
> 35 MByte/s. Limited burst length prevents achieving the
> theoretical PCI 33/32 throughput of 133 MB/s. Gigabit needs
> 125 MB/s each way.

Two reasons. Marketing: AGP is a tick-box for graphics. PCI is
anti-tick-box.

Why even bother? Put the GBE on the HT link (other side of the
bridge)! PCI is just sooo, 90s! ;-)


--
Keith
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

"KR Williams" <krw@att.biz> wrote in message
news:MPG.1b0df998e8b661c7989886@news1.news.adelphia.net...
> In article <c7veef$n51$1@nntp.webmaster.com>,
> davids@webmaster.com says...
>>
>> "KR Williams" <krw@att.biz> wrote in message
>> news:MPG.1b0cafe0e25be65798987e@news1.news.adelphia.net...
>>
>> > In article <c7pron$bvf$1@nntp.webmaster.com>,
>> > davids@webmaster.com says...
>>
>> >> KR Williams wrote:
>> >>
>> >> > Why wouldn't things be rendered in memory and then DMA'd to the
>> >> > graphics card?
>> >>
>> >> Because then the rendering process would be eating system memory
>> >> bandwidth.
>>
>> > Nope. You're thinking of AGP (no one uses it, or ever has).
>>
>> Okay, then you tell me why things aren't rendered in memory and then
>> DMA'd to the graphics card.
>
> Are you slow? They're "rendered" IN THE GRAPHICS CARD'S MEMORY.
> Sheesh!

Yes, but *WHY*? Do you have a reading comprehension problem?

Let's start over. I was answering the question "Why wouldn't things be
rendered in memory and then DMA'd to the graphics card?". My answer was
"Because then the rendering process would be eating system mrmory
bandwidth". You said "Nope. You're thinking of AGP." So I said, "Okay, then
you tell me why things aren't rendered in memory and then DMA'd to the
graphics card".

So, if the answer "because then the rendering process would be eating
system memory bandwidth" is wrong, then please tell me *WHY* are the
rendered in the graphics card's memory? Why even have memory on the graphics
card at all?

Could it be because then the rendering process would be eating system
memory bandwidth? Just like I've been saying all along?!

DS
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

On Thu, 13 May 2004 02:17:12 -0700, "David Schwartz"
<davids@webmaster.com> wrote:

>> Nope. You're thinking of AGP (no one uses it, or ever has).
>
> Okay, then you tell me why things aren't rendered in memory and then
>DMA'd to the graphics card.

Erm, I'm no expert on graphics cards... seeing that I have no need for
the latest & greatest. But reading the usual webzines/sites on new
stuff generally gives me the idea that the processor nowadays handles
setting up each scene as objects in a 3D space and then shoots these
to the GPU. The GPU then figures out how to put textures and other
effects on the objects and renders the scene into a local buffer, then
displays it.

Used to be the CPU had to do a lot of this stuff, but then along came
3D GPUs, which started with the basics, then went on to do
Transform & Lighting effects, then pixel shading and so on (the latest
in-thing seems to be Pixel Shader 3.0).

Which I think makes much more sense than rendering the whole scene on
the CPU, then storing it in main memory before shooting a chunk of
some 24Mbits of data per frame (roughly 720Mbps across the AGP/PCI
bus) to maintain a half-decent 30FPS at 1024x768x32. Or doesn't it?
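
Back-of-envelope, in case my numbers are off:

#include <stdio.h>

int main(void)
{
    double bits_per_frame = 1024.0 * 768.0 * 32.0;           /* one full frame  */
    double mbit_per_frame = bits_per_frame / (1024 * 1024);  /* ~24 Mbit        */
    double fps            = 30.0;

    printf("per frame : %.0f Mbit (%.0f MB)\n",
           mbit_per_frame, bits_per_frame / 8 / (1024 * 1024));
    printf("at %.0f fps: %.0f Mbit/s across the AGP/PCI bus\n",
           fps, mbit_per_frame * fps);
    return 0;
}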

Of course, being the village idiot in CSIPHC, I could be talking about
the wrong stuff in the wrong places altogether :PpPpPp

--
L.Angel: I'm looking for web design work.
If you need basic to med complexity webpages at affordable rates, email me :)
Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
If you really want, FrontPage & DreamWeaver too.
But keep in mind you pay extra bandwidth for their bloated code
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

KR Williams wrote:

> Instead of telling people how smart you are, why don't you tell
> me what, in the graphics pipe, needs low-latency to the
> processor. Or you could just say, "I'm right you're wrong, go
> look for the needle in the hay-stack". Oh, you did.

I gave you some examples, you ignored them. I gave you some
references to look at, you ignored them. I don't really see
the point of writing a 2000 word essay on OpenGL hardware,
API and a specific algorithm when you're saying that OpenGL
is irrelevant.

You can lead a horse to water, but you can't make it drink.

*shrug*
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

In article <1084754163.22749@teapot.planet.gong>, roo@try-
removing-this.darkboong.demon.co.uk says...
> KR Williams wrote:
>
> > Instead of telling people how smart you are, why don't you tell
> > me what, in the graphics pipe, needs low-latency to the
> > processor. Or you could just say, "I'm right you're wrong, go
> > look for the needle in the hay-stack". Oh, you did.
>
> I gave you some examples, you ignored them. I gave you some
> references to look at, you ignored them. I don't really see
> the point of writing a 2000 word essay on OpenGL hardware,
> API and a specific algorithm when you're saying that OpenGL
> is irrelevant.

No, you didn't. You keep referring to the APIs, yet don't point
to anything specific. You don't teach anything with respect to
how these things affect performance. You can be as smug as you
wish, but...
>
> You can lead a horse to water, but you can't make it drink.

....you lie, Rupert.

> *shrug*

Indeed.

--
Keith
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

KR Williams wrote:
> In article <1084558988.486952@teapot.planet.gong>, roo@try-
> removing-this.darkboong.demon.co.uk says...

[SNIP]

>>I don't really get why you're grinding an axe against AGP to be
>>honest, it's just a faster and fatter pipe than stock PCI. No
>>big deal, and it does appear to make a difference, ask folks
>>who have used identical spec cards in PCI and AGP flavours.
>
>
> Oh, my! I've gone and insulted Rupert's sensibilities again.
>
> Your logic is impeccable. AGP is faster, and wider(?) than PCI,
> so it's god's (or Intel, same thing I guess) gift to humanity.

Not really. For me upping the framerate by ~20% made the difference
between a game being playable and it being unplayable. Not a big
deal in the world of rocket science, but that kind of thing matters
to a lot of folks who play games.

> Good grief, you compare a stripped point-to-point connection (PCI
> cut to the bone, actually) to a cheap PCI 32/33 *BUS*
> implementation and then proclaim how wonderful it is. Sure AGP
> is faster than the cheapest PCI implementation. Was that your
> whole point?

In that case, yes. Where were the alternatives to AGP that would
have provided the extra bandwidth, yet kept the characteristics
required to maintain backward compatibility AND do all that at
a minimal price point for both the vendor and customer ? I didn't
see PCI Express or PCI-X leaping into the chipsets at the time.

As unclever or ugly as AGP may be, it has been an effective and
inexpensive solution for its vendors and customers.

Cheers,
Rupert
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

In article <1084755234.917279@teapot.planet.gong>, roo@try-
removing-this.darkboong.demon.co.uk says...
> KR Williams wrote:
> > In article <1084558988.486952@teapot.planet.gong>, roo@try-
> > removing-this.darkboong.demon.co.uk says...
>
> [SNIP]
>
> >>I don't really get why you're grinding an axe against AGP to be
> >>honest, it's just a faster and fatter pipe than stock PCI. No
> >>big deal, and it does appear to make a difference, ask folks
> >>who have used identical spec cards in PCI and AGP flavours.
> >
> >
> > Oh, my! I've gone and insulted Rupert's sensibilities again.
> >
> > Your logic is impeccable. AGP is faster, and wider(?) than PCI,
> > so it's god's (or Intel, same thing I guess) gift to humanity.
>
> Not really. For me upping the framerate by ~20% made the difference
> between a game being playable and it being unplayable. Not a big
> deal in the world of rocket science, but that kind of thing matters
> to a lot of folks who play games.

How much is that due to the faster pipe? ...and how much to what
AGP brings to the table? AGP brings nothing other than a faster
pipe.
>
> > Good grief, you compare a stripped point-to-point connection (PCI
> > cut to the bone, actually) to a cheap PCI 32/33 *BUS*
> > implementation and then proclaim how wonderful it is. Sure AGP
> > is faster than the cheapest PCI implementation. Was that your
> > whole point?
>
> In that case, yes. Where were the alternatives to AGP that would
> have provided the extra bandwidth, yet kept the characteristics
> required to maintain backward compatibility AND do all that at
> a minimal price point for both the vendor and customer ? I didn't
> see PCI Express or PCI-X leaping into the chipsets at the time.

Backwards compatibility? AGP was compatible with exactly what?
AGP was *designed* to simply allow the textures to be put in
system memory. A *very* bad idea. Indeed, perhaps AGP put off
better solutions many years.

> As unclever or ugly as AGP may be, it has been an effective and
> inexpensive solution for its vendors and customers.

Bad ideas are often pushed on the consumer hard enough that there
is no choice. I can think of many such bad ideas (some even
worse than UMA and AGP). WinPrinters and WinModems come to mind.
Intel was right in there on these too.

I may have a tough spot in my soul for Intel, dreaming for what
might have been (and technically possible), but you're a lackey
for what is. I'm quite sure you don't treat M$ so kindly for
*WHAT IS*.

--
Keith
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

KR Williams wrote:
> In article <tf4oc.7727$ZX2.6238@newssvr24.news.prodigy.com>,
> redelm@ev1.net.invalid says...
>
>>In comp.sys.ibm.pc.hardware.chips Robin KAY <komadori@myrealbox.com> wrote:
>>
>>>Why would you have your main processor(s) render a scene
>>>when you have a dedicated graphics processor to do it?
>>
>>I think you're talking 3-D while Keith is talking 2-D.
>
>
> Nope. 3-D is no different. AGP wuz supposed to make the
> graphics channel two-way so the graphics card could access main
> memory. DO you know anyone that actually does this? PLease!
> With 32MB (or 128MB) on the graphics card, who cares?

Read some of the "optimising your game for a modern 3D card"
presentations on the NVidia or ATI developer web sites. You want to
decouple the CPU from the graphics card as much as possible, to
eliminate "dead time" when the CPU waits for the card to finish
something, or the card waits for more data. The card has lots of RAM on
it, but the textures, vertex data, etc have to get into that RAM
somehow... and some applications have more texture or vertex data than
can efficiently fit into the card RAM. A 32MB card running at 1024x768,
with 24-bit colour, 8-bit alpha, 24-bit Z, 8-bit stencil, double
buffered, needs about 10MB of video RAM. Some games have more than 22MB
of total textures these days, and some vertex data is dynamically
generated for each frame. You need an efficient way to push the data up
to the card without forcing either the CPU or the card to wait.
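
(The "about 10MB" figure comes out like this, if anyone wants to check it:)

#include <stdio.h>

int main(void)
{
    double pixels = 1024.0 * 768.0;
    double colour = 2 * pixels * 4;   /* front + back buffer, 24-bit colour + 8-bit alpha */
    double depth  = pixels * 4;       /* 24-bit Z + 8-bit stencil                          */

    printf("framebuffer footprint: %.1f MB\n", (colour + depth) / (1024 * 1024));
    return 0;   /* ~9 MB, i.e. "about 10MB" of the 32MB card */
}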

Having the card do bus mastering allows the CPU to set up a big DMA ring
buffer for commands, which the card slurps from in a decoupled way, and
the card can then also slurp texture and vertex data from other memory
areas which are set up in advance by the CPU. There are special
primitives which allow the CPU to coordinate this bus mastering activity
so that they don't step on each other's data, while maintaining as much
concurrency as possible.
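
A toy model of such a command ring, just to show the shape of it. All
the names are invented; a real driver also needs doorbell writes, memory
barriers and interrupt handling:

#include <stdint.h>

#define RING_DWORDS 4096                     /* power of two, so wrapping is a mask */

struct cmd_ring {
    uint32_t           buf[RING_DWORDS];     /* in system RAM; the card bus-masters from it   */
    volatile uint32_t *hw_head;              /* card's read pointer, reported back to the CPU */
    uint32_t           tail;                 /* CPU's write pointer                            */
};

static uint32_t ring_space(const struct cmd_ring *r)
{
    return (*r->hw_head - r->tail - 1) & (RING_DWORDS - 1);
}

static void ring_emit(struct cmd_ring *r, const uint32_t *cmd, uint32_t n)
{
    while (ring_space(r) < n)
        ;                                    /* CPU stalls only when it gets too far ahead */

    for (uint32_t i = 0; i < n; i++)
        r->buf[(r->tail + i) & (RING_DWORDS - 1)] = cmd[i];
    r->tail = (r->tail + n) & (RING_DWORDS - 1);

    /* ...then a write barrier and a doorbell write so the card notices
       the new tail. Meanwhile it keeps fetching older commands. */
}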

So that's the motivation for the card doing bus mastering. AGP brings
two extra things to the picture: higher speed than commodity PCI, and a
simple IOMMU, which gives the graphics card a nice contiguous DMA
virtual address space that maps onto (potentially) scattered 4K blocks
of memory.
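
The IOMMU part (the AGP "GART") is conceptually just a small page table
the chipset walks. Something like this, with a made-up entry format:

#include <stdint.h>
#include <stddef.h>

#define GART_ENTRY_VALID 1u

/* gart[] is the table the chipset walks; entry i covers aperture offset
   i * 4K. phys_pages[] holds the page-aligned physical addresses the OS
   handed out, in whatever scattered order they happened to be free. */
static void gart_map_buffer(uint32_t *gart, size_t first_entry,
                            const uint32_t *phys_pages, size_t nr_pages)
{
    for (size_t i = 0; i < nr_pages; i++)
        gart[first_entry + i] = phys_pages[i] | GART_ENTRY_VALID;

    /* From the card's point of view, "aperture base + first_entry * 4K" is
       now one contiguous region it can DMA from, even though the backing
       pages are scattered all over system RAM. */
}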

>>In 3-D there's simply too much drudge work (shading,
>>perspective) and not enough interaction back to the control
>>program to need or want the CPU. 2-D is much simpler and often
>>requires considerable CPU interactivity (CAD) with the display.
>
>
> Sure, so why does the 3-D card want to go back to main memory,
> again? The graphics pipe is amazingly one-directional. ...and
> thus not sensitive to latency, any more than in human terms.

Exactly. By using bus mastering, you let the CPU and card work in
parallel, at the expense of increased latency for certain operations.
Reading back the frame buffer contents in a straightforward way (i.e.
with core OpenGL calls) is a really great way to kill your frame rate in
3D games, because you cause all the rendering hardware to grind to a
halt while the frame buffer data is copied back. The graphics card
vendors really, really want you to use their decoupled "give us a lump
of memory and we'll DMA the frame buffer data back when it's finished
baking, meanwhile keep feeding me data!" OpenGL extensions to do this.
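
The "straightforward way" being, more or less, this (core GL, fully
synchronous; assumes a current context):

#include <GL/gl.h>
#include <stdlib.h>

/* glReadPixels can't return until everything queued before it has finished
   rendering, so the whole pipeline drains. That stall is the frame-rate
   killer described above. */
void grab_frame(int width, int height)
{
    unsigned char *shot = malloc((size_t)width * height * 4);
    if (!shot)
        return;

    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, shot);

    /* ...write the screenshot, or feed the pixels to whatever wanted them... */
    free(shot);
}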

-Jason
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

KR Williams wrote:

[SNIP]

> Backwards compatibility? AGP was compatible with exactly what?

Compatibility with pre-AGP software.

> AGP was *designed* to simply allow the textures to be put in
> system memory. A *very* bad idea. Indeed, perhaps AGP put off
> better solutions many years.

If you consider putting shitloads of RAM onto the graphics card
a solution I don't think it slowed that down at all. What it did
enable was low-cost solutions *at the time it came out*, the kind
of solutions that would suit kiddies who would break their piggy
bank to play a game.

[SNIP]

> I may have a tough spot in my soul for Intel, dreaming for what
> might have been (and technically possible), but you're a lackey

OK, I'll bite. What might have been when AGP was first mooted ?

> for what is. I'm quite sure you don't treat M$ so kindly for

In the context of this discussion your assertion that I'm a "lackey
for what is" is wrong anyway. It flatly ignores my preference, which
is render into main memory and DMA the framebuffer to the RAMDAC.
Nice and simple, lots of control for the programmer. However I do
recognise this is not a good solution right now because of the way
the hardware is structured and the design trade-offs.

> *WHAT IS*.

I never have liked MS stuff to be honest. Never liked x86s either,
but on the other hand Intel contributed heavily to PCI and on
balance I think that has been a valuable contribution to the
industry as a whole.

Cheers,
Rupert
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

In article <1084811783.515695@teapot.planet.gong>, roo@try-
removing-this.darkboong.demon.co.uk says...
> KR Williams wrote:
>
> [SNIP]
>
> > Backwards compatibility? AGP was compatible with exactly what?
>
> Compatibility with pre-AGP software.

Come on. That's trivial for any port. Map the addresses in the
same range and go for it.
>
> > AGP was *designed* to simply allow the textures to be put in
> > system memory. A *very* bad idea. Indeed, perhaps AGP put off
> > better solutions many years.
>
> If you consider putting shitloads of RAM onto the graphics card
> a solution I don't think it slowed that down at all. What it did
> enable was low-cost solutions *at the time it came out*, the kind
> of solutions that would suit kiddies who would break their piggy
> bank to play a game.

That's *precisely* what I advocated at the time. Memory sizes
grew (and costs fell) to where this was not only possible, but
mandatory at the same time AGP became available. Indeed the only
things that used AGP (system memory resident) textures were Intel
demos. Impressive, but hardly useful. Graphics cards have
outstripped AGP usage ever since.
>
> [SNIP]
>
> > I may have a tough spot in my soul for Intel, dreaming for what
> > might have been (and technically possible), but you're a lackey
>
> OK, I'll bite. What might have been when AGP was first mooted ?

The first day it was shipped in a product. Graphics cards were
even then shipping with more (texture) memory than the games of
the day were using. It was a *bad* idea, much like UMA. Memory
is and was cheap. 32MB cards were normal then, and 128MB cheap
now. There would be even more memory on graphics cards if there
were a reason. Like I said earlier, even my 2D card has 32MB.

> > for what is. I'm quite sure you don't treat M$ so kindly for
>
> In the context of this discussion your assertion of being a "lackey
> for what is" is wrong anyway. It flatly ignores my preference which
> is render into main memory and DMA the framebuffer to the RAMDAC.

Oh, my! No wonder we disagree so much. I have *NO* interest in
bottling up main memory with such trivia. I'm sure you liked UMA
too. Let me ask you: do you have an integrated UMA graphics
controller on your system?

> Nice and simple, lots of control for the programmer. However I do
> recognise this is not a good solution right now because of the way
> the hardware is structured and the design trade-offs.

It is a *horrible* idea. It puts too much stress on the exact
wrong area of the system. UMA not only affects memory bandwidth,
but latency. I'd rather not give up either and throw it all at
the processor. Perhaps it's because I know what's possible in
hardware and you're simply dreaming of a perfect world (again).

> > *WHAT IS*.
>
> I never have liked MS stuff to be honest. Never liked x86s either,
> but on the other hand Intel contributed heavily to PCI and on
> balance I think that has been a valuable contribution to the
> industry as a whole.

I'm not a PCI fan either, but it is what it is. I've gotten over my
anger at stupid marketing and have learned to accept the
inevitable (and have even designed to it, though it's
unnecessarily ugly).

I've never had an issue with x86. I even liked segmentation,
unless I had to do huge data structures. :-( ...which was rare
as a hardware type. :-)

Amazing the difference in perspective between hardware wonks and
software weenies. ;-)


--
Keith