Kyro 2: the killer of Nvidia???


noko

Distinguished
Jan 22, 2001
2,414
1
19,785
I assumed what? Do you think most people have >1GHz systems? What would the average PC CPU speed be? Frankly I don't know, and really it is not important what I think. What is important (at least to me) is accurate information. If someone with a Celeron 500-800 machine (hopefully with an AGP port) buys a KyroII to play Max Payne, it may not be as good a choice as he/she thought. Any HW T&L dependent game would suffer unless the person has a sufficient CPU to overcome the lack of processing power on the graphics chip.

What someone buys, I hope, comes from freedom of choice. If someone wants to buy a KyroII with a 1.7GHz Pentium 4, power to him. If someone wants to buy a KyroII with a K6-2 400, that's fine also. I just hope both of the above buyers understand the strengths and weaknesses of the KyroII card so they can make a wise choice.

The FSAA of the KyroII is outstanding, and for flight simulators that feature alone makes it a very good choice. HW T&L in a flight simulator is really worthless, but outstanding FSAA on the graphics card makes for a totally different experience for a flight simulator user.

Also note that the 3d Prophet 4500 is limited to AGP2x. The AGP2x limitation could keep a more powerful CPU from increasing performance on a KyroII due to a bottleneck between the CPU and the graphics chip; just remember I said could (I am not sure either way). Thanks for the feedback.

<P ID="edit"><FONT SIZE=-1><EM>Edited by noko on 04/16/01 04:27 PM.</EM></FONT></P>
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
The outstanding FSAA makes the KyroII a prime candidate for a flight simulator dream machine. A very fast CPU and the KyroII would be awesome for Flight Simulator 2000. That was my reasoning: FSAA ability.
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
What do most people own in terms of CPUs and speeds? 1-1.33GHz T-Birds, or more likely Celerons at 500-800? You tell me, I don't know. Anyway, it would be very interesting to see benchmarks of a KyroII paired with a very powerful CPU, and how well it scales. Good point. My concern is that the AGP bus, especially limited to 2x transfer speeds, could become a bottleneck, preventing any further performance increase.
<P ID="edit"><FONT SIZE=-1><EM>Edited by noko on 04/16/01 05:00 PM.</EM></FONT></P>
 

Negaverse23

Distinguished
Dec 31, 2007
449
0
18,780
<<noko why you said this?

"Now configuring my CPU speed to a more typical system that a Kyro owner would have I did the following:"

one could go to a better cpu and a kyro saving some money, without losing performance... maybe earning some performance...
;)
good cpu's are getting cheap !!!>>

Ok... Take the average PC user, such as me. The fastest CPU that I can use in my current system is a 533MHz PPGA Celeron. Because my mainboard is not FC-PGA compliant, I would have to get a new mainboard if I wanted a CPU rated 550MHz+ (AMD or Intel, depending on the new board). I'm stuck at sub-533MHz for now (no overclocking).

I would have to add the cost of a new mainboard, CPU, and system memory in place of my old PC-100 SDRAM, and then the graphics card. Or I could just keep what I have and get a new graphics card.

=
<font color=green>Ran out of bullsh!t to feed to the flies.</font color=green>
 

Ncogneto

Distinguished
Dec 31, 2007
2,355
53
19,870
Actually, I did not take issue at all with your assumption that the average CPU of a home user is around 500-800MHz. I am sorry if you interpreted my post that way. All I was suggesting was that you appeared to be thinking that only budget-oriented users (such as the ones that have these 500-800 systems) would take a serious look at this card (the Kyro-II).

What I was trying to point out is that there may be a strong case for exactly the opposite line of reasoning. Maybe the Kyro-II should be aimed more at higher-end systems, on which doing T&L calculations via software on the system's processor would not take such a toll and load down the CPU. Heck, we might as well use these high-powered CPUs for something, correct? Of course, I may be totally off here, for I have yet to see any testing done on the Kyro-II with such a system. What I do know is this: from the tests I have seen so far, the Kyro-II scales very well. We know from previous tests done at THG that the nvidia offerings do not scale much past 850MHz, as the memory bandwidth of the video card becomes an issue (in the tests that are memory bandwidth hungry, of course, ala Quake and the like). We also know that the GeForce Ultras hold a slight edge over the Kyro-II when coupled with these mid-range CPUs. Now, if my logic were in fact to prevail here and the Kyro scales much higher than the GeForces, then when coupled with a high-end system it may make up the difference in the tests where it was trailing and overtake the GeForce Ultras. Now wouldn't this just make Nvidia green? After all, one would tend to think that is the very market their card is aimed at?

So, does anyone have any results with such a setup? It would appear that powervr has a Kyro-II; can he shed any light on this situation?

Also, as long as we are talking about OEMs and $$$, I still have yet to see anyone comment on the viability of an onboard chipset with the Kyro-II. We all know that current onboard video solutions are severely limited due to the very limited bandwidth available to them for accessing system RAM. It would seem the low bandwidth requirements of the Kyro-II and a cheap integrated solution may indeed be a match made in heaven. At least this would give the company the $$$ it needs to further its research and development into this very promising area. Heck, there may be a chipset manufacturer out there thinking this very thing, getting ready to gobble PowerVR tech up.

A little bit of knowledge is a dangerous thing!
 

HolyGrenade

Distinguished
Feb 8, 2001
3,359
0
20,780
The nVidia Crush Chipset will feature an LMA architecture connecting the integrated GPU (nv17) to the memory. Boards based on this chipset for the AMD platform will cost around £120, and should be available in 2001 Q3 (I think).

I think that should be a pretty good solution for budget systems. It will also feature an AGP Port allowing other graphics cards to be added, but the onboard chip should be an interesting solution.


<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and despair!"</font color=red>
 

Ncogneto

Distinguished
Dec 31, 2007
2,355
53
19,870
Could you expand on that a bit, Mr. Grenade? I do not profess to be quite as knowledgeable on all the intricacies of this as others; however, I have found this to be a very enlightening thread packed full of knowledge (something that is getting rarer and rarer on this forum). What exactly is LMA architecture? Is this a solution that would give an integrated chip its own pathway to access system memory, separate from the processor's pathway, so the two are not competing for bandwidth (this has been the crux of every integrated video solution made to date)? And do you not think that the Kyro may be an excellent candidate for an integrated solution as well?

A little bit of knowledge is a dangerous thing!
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
Excellent points. I would love to know the scalability of the KyroII with the fastest available processors. It would be funny if the KyroII turned out to be the performance king with 1.33GHz and faster CPUs. Nvidia and a few other folks would be going back to the drawing board. Serious Sam on the KyroII could be what QuakeIII was for GF2s. How many more FPS could be had on the KyroII in Serious Sam if the CPU speed were upped? Serious Sam has an excellent engine which isn't designed around HW T&L, and I am sure more games will be using the Serious Sam engine. Well, 3dfx has left, but it sure is nice that someone else is filling the vacuum. :smile:
 

HolyGrenade

Distinguished
Feb 8, 2001
3,359
0
20,780
It is ironic that I work in the knowledge management industry, yet am being questioned about my knowledge. And also how your signature professes about knowledge.

Unfortunately it is difficult to pass knowledge on, so I'll provide a couple of links. If you need clarification, do not hesitate to ask.


<A HREF="http://forumz.tomshardware.com/modules.php?name=Forums&file=faq&notfound=1&code=1" target="_new">A Brief discussion into the crush chipset in the CPU section</A>

<A HREF="http://www4.tomshardware.com/graphic/01q1/010227/geforce3-31.html" target="_new">Tom Explains LMA</A>


<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and despair!"</font color=red>
 

Ncogneto

Distinguished
Dec 31, 2007
2,355
53
19,870
I am not quite sure I understood what you meant by that. It's just a signature, a quote I may add, though just where it's from escapes me at the moment :) It in no way was meant to profess that I am all-knowing, for nothing could be further from the truth. And I did not call into question the validity of your knowledge, but I am not sure if that is in fact what you meant by it either. At any rate, thanks for the link.

A little bit of knowledge is a dangerous thing!
 

HolyGrenade

Distinguished
Feb 8, 2001
3,359
0
20,780
Don't worry, the irony is that I work in the knowledge management industry, yet don't believe in knowledge management. At least, not in its current state. I find myself more interested in computer game architecture, so I am training myself (when I get time) in 3D game programming, as you may gather from some of my posts.

Not to worry, the comments were in no way a jibe at you, and I like the sig you've got. It's cool.


<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and despair!"</font color=red>
 
G

Guest

Guest
<<<<<<What is Geometry Assist?>>>>>>

Geometry assist is what 3dfx did in their later Voodoo5 drivers just before they went out of business. It's basically a driver trick that fools a HW T&L game into thinking the 3D card has a HW T&L unit; the driver then uses the system CPU to emulate the HW T&L unit. Voodoo5 cards actually got a good boost in speed over the SW T&L engine with this trick.

<<<<<<It will be a good 5 yrs before any CPU is released capable of matching the raw processing power of the GeForce 2 GTS GPU.>>>>>>>

It won't be anywhere near 5 years until a CPU is capable of matching the real processing power of the Geforce 2 GTS.

<<<<<<<1024x768 is still excellent with T&L. it beats the Kyro2.>>>>>>>>>

That's a totally generalized statement. In what game? Which card with a HW T&L unit are you talking about? Just saying HW T&L beats the Kyro II at this resolution means nothing at all.

<<<<<<<<I think The Kyro is limited to 8 layers in a single pass. All the Kyro review sites keep saying upto 8 layers. The Quake 3 Engine supports upto 10 layer cascading in its multi texturing. The minimum is two.>>>>>>>>>

No, you're wrong. Review sites may say that, but what do they work from? The specs written on the box, probably. I know it's not limited to 8 textures in this area; I've already said the card is limited to 8 textures in a single pass only because that's the limit in DX8, and IMGTEC did not want to advertise a feature that couldn't be used in both APIs. The Kyro II can apply 8 textures in a single pass because it keeps the tile it's working on inside the chip's cache until it has finished adding all the textures. If it were supported in a game, it could add even more textures to the tile inside the chip's cache and still only pass it out to RAM once.

<<<<<<<<<I may be wrong but I don't think the polygons have to be reloaded in the 'traditional' Multitexturing card>>>>>>>

No, they don't have to reload the polys when only 2 texture layers are used on a card with 2 TUs, because it can do 2 textures in 1 pass. But once the card goes from single-pass multi-texturing to multi-pass multi-texturing, the polys have to be reloaded for each additional pass. So after the first 2 texture layers have been sent to RAM, the polys have to be reloaded for the second 2 texture layers, then again for another 2, and again for the final 2 texture layers (that's with 8 texture layers).
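The pass arithmetic being described can be sketched in a few lines. This is just a back-of-the-envelope illustration (the function name and loop values are mine, not from any benchmark), assuming a card applies one pass per group of `texture_units` layers and must resubmit its geometry for every pass:

```python
import math

def passes_needed(layers, texture_units=2):
    """Passes an immediate-mode multi-texturing card needs to apply
    `layers` texture layers with `texture_units` TUs per pipeline.
    Geometry has to be resubmitted for every pass after the first."""
    return math.ceil(layers / texture_units)

# A 2-TU card (GeForce2-style pipeline, as discussed above):
for layers in (2, 4, 8):
    print(f"{layers} layers -> {passes_needed(layers)} pass(es)")
# 8 layers on a 2-TU card take 4 passes, matching the description above
```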

<<<<<<From Direct3D6 up to 8 texture operation units can be cascaded together to apply multiple textures to a common primitive in a single pass (multitexturing)>>>>>>>

AFAIK there's nothing in DX that allows a traditional card with, say, 2 TUs to send out 8 textures in only 1 pass to the framebuffer. Or am I misunderstanding you?

<<<<<<<Appart from the Base skin Texture, Doom 3 is only certain to have only two others: Bump Mapping and Dot Products. I'm sure there will be more.>>>>>>>>

Well, Carmack said the game would have an overdraw of 8. A lot of people first thought he meant actual opaque overdraw, as in writing over already-written pixels in the framebuffer, but then the general consensus was he couldn't possibly have that level of overdraw in his game because it would totally kill the performance of any traditional card, so everyone then thought he was talking about 8 texture layers. If he wasn't talking about either of those, then what was he talking about when he said 8 overdraw? Any ideas?

<<<<<<As it currently stands, Doom 3 will not contain software Geometry code. But once the game is released, there may be some development for a dreamcast port with severe limitations. The non-T&L community can hope this will also bring a Software geometry pc release.>>>>>>

Where do you get that from? Where did he say this? I'd appreciate a link for this comment. Again, though, geometry assist will sort that out.

<<<<<<<<You commend the Kyro for its innovative Tilers but revert to the preference of redundancies with the wasteful Multitexturing over Pixel Shaders?

Besides, the Doom 3 engine will be scalable, allowing the features that require programmable shaders to be disabled. The penalty is much of the eyecandy will also disspear.>>>>>>>

That's what I'm saying: pixel shaders won't be an integral part of the game. He'll use them, but not to such a huge extent, so non-pixel-shader cards can just turn the option off and still keep enough visual quality to make the game look awesome.

<<<<<<<That is just rubbish. Not every one aquires technology to pull a "Microsoft" i.e. just to bury the competition and forget about them. You Aquire technology to to use it. Whats better than having that technology in your products rather than in your competition.

They aquired 40 former SGI staff with the Gigapixel technology from 3DFX. They will be busy in implementing their designs into nVidia Technology. This means Tiling just like the Kyro II cards. I don't think nVidia will be furthering the development into napalm or rampage for release products. But they will be researched into, to resolve any potential issues, and to use any good concepts and ideas from those designs.>>>>>>>>

No, it isn't rubbish actually. From what I've heard they won't touch tile-based rendering; that's what I'm hearing from people that are very, very close to the industry. I don't expect anyone here to actually believe me, since nobody here knows me or the people I'm talking about, and I wouldn't want to drop names either. But obviously Nvidia could end up going tile-based; IMO they actually have to, and maybe they'll realise that soon too.

<<<<<<<Ahh, There will always be faster ram, providing you spend enough towards it. Come on, Every one except intel must be grateful about them speeding up the ddr development.>>>>>

RAM just isn't moving fast enough; it's taking longer and longer to produce faster RAM now. Nvidia can only seem to add features to their new cards; the days when a new Nvidia card comes out and wipes the floor with the last one in performance seem to be gone.

<<<<<<And Yep! Doom 3 will have 3D textures.>>>>>>>

Apparently the Geforce 3 doesn't support 3D textures though, so that's already one feature in Doom 3 that the Geforce 3 doesn't have.
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
This is in regards to textures and fillrate again. We already calculated that the Kyro2 with 8 textures will have a fillrate of 44 Mpixels/sec. Going to 10 or 12 textures would kill the fillrate of the Kyro2 even further: 10 textures => 35 Mpixels/sec, 12 textures => 29 Mpixels/sec. The Kyro2 would be so fillrate-limited at 12 textures that the frame rate would be degraded to the point of uselessness. Now, the GF2 can do 4 pixels with 2 textures in one clock. For the Kyro2 to process 4 pixels with 2 textures it would take at least 4 clocks. Please explain the difference in the pixel pipeline between the Kyro2 and GF2. 8 textures can be processed on the Kyro2 on a pixel, except it will take 8 clocks to do it, while as I understand it the GF2 would take 4 clocks to do 8 textures. It still seems like the GF2 would be able to process more pixels and always have a much higher fillrate than a Kyro2. Please explain.
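The arithmetic behind those Mpixels/sec figures can be checked quickly. This is only a sketch of the calculation being discussed: the 175 MHz clock and 2 pipelines are the commonly quoted Kyro II figures (assumed here, not stated in this post), with each texture layer costing one clock per pixel per pipeline:

```python
def layered_fillrate(clock_mhz, pipelines, layers):
    """Peak pixel fillrate (Mpixels/s) when a pipeline applies one
    texture layer per clock, so an N-layer pixel costs N clocks."""
    return clock_mhz * pipelines / layers

# Kyro II: assumed 175 MHz core, 2 pixel pipelines
for layers in (8, 10, 12):
    print(f"{layers} layers: {layered_fillrate(175, 2, layers):.0f} Mpixels/s")
# prints 44, 35, and 29 Mpixels/s, matching the figures quoted above
```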
 

HolyGrenade

Distinguished
Feb 8, 2001
3,359
0
20,780
Some quick info before I fetch the links:

nVidia acquired the 40 staff that left SGI to form Gigapixel, along with 3DFX. If you take over a company just to kill it, you make redundancies or dedicate the staff to your own projects.

The Gigapixel staff left SGI because they thought their work and beliefs would not get the deserved attention at SGI. Do you think they will remain at nVidia if they are kept dormant?

The DX8 multitexturing is done in one go (as far as the DX API is concerned), but the graphics card driver probably handles the necessary details. That's how it seems to me from looking at the API.

And the GF3 does support 3D textures, and according to John Carmack, it does so to a better extent than the Radeon.

I'll give you some links later.



<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and despair!"</font color=red>
 
G

Guest

Guest
If we count on overdraw (a complex game with more polygons will have more overdraw) then we shall see a boost in the Kyro's effective fill rate...
If a game has 8-layer textures then there will be many "objects" with 8 layers that will be obscured... more fill rate left for the Kyro...
And if a typical game has 3-4 times overdraw, a typical 8-layer game could have even more of it.
;)
Fill rate problems?
See what happens in Sam (a game that typically has 3-4 texture layers): the Kyro has so much fill rate and bandwidth left that at higher colour depths the Kyro enjoys the king position...
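The overdraw argument can be made concrete with a toy model. The raw-fillrate figures and overdraw factor below are illustrative assumptions of mine, not measurements: a deferred renderer with hidden surface removal (HSR) spends fillrate only on visible pixels, while an immediate-mode card divides its raw fillrate across every overdrawn pixel:

```python
def useful_fillrate(raw_mpixels, overdraw, hsr=False):
    """Fillrate left for visible pixels. With HSR, occluded pixels are
    rejected before texturing, so overdraw costs (almost) nothing;
    without HSR, every hidden pixel is textured and then overwritten."""
    return raw_mpixels if hsr else raw_mpixels / overdraw

overdraw = 3.0  # assumed typical scene depth complexity
kyro = useful_fillrate(350, overdraw, hsr=True)   # Kyro II raw ~350 Mpix/s (assumed)
gts  = useful_fillrate(800, overdraw, hsr=False)  # GeForce2 GTS raw ~800 Mpix/s (assumed)
print(f"Kyro II useful: {kyro:.0f} Mpix/s, GTS useful: {gts:.0f} Mpix/s")
```

The point of the model is that as overdraw rises, the tiler's useful fillrate stays flat while the immediate-mode card's shrinks in proportion.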

By the way... 3D textures will not be supported on the Geforce 3... they have taken 3D textures out of the specification of the Geforce 3!
I wonder why...
 

Negaverse23

Distinguished
Dec 31, 2007
449
0
18,780
<i><<if we count on the overdraw (a complex game with more poligones will have more overdraw) then we shall see a boost on the kyro fill rate...>></i>

True, the fill rate of a GPU with HSR just won't drop because of overdraw as much as that of a GPU without HSR.

<i><<if a game has 8 layers textures then there will be many "objects" with 8 layers that will be obscured.... more fill rate on the kyro ...
if a typical game has 3-4 times the overdraw a typical 8 layer game could have more of this overdraw
;)
fill rate problems ?>></i>

By the time we see games using 8 layers, I will have a Radeon 3.

<i><<see what happens on sam (a game that have typically 3-4 layer textures) kyro have so much fill rate and bandwidth left that in higher depths kyro enjoys the king position...>></i>

King position? Couldn't say the same on a sub-500MHz system.

<b>HolyGrenade</b><i><<And the GF 3 does support 3d textures and according to John Cormack, it does so to a better extent than the Radeon.>></i>
<b>powervr2</b><i><<by the way ... 3d texture will not be supported on geforce 3... they have taken the 3d texture from the specification of geforce 3!>></i>

I'll have to look up some info on this later.

=
<font color=green>1Ghz on 56K access? I like my 400Mhz with cable access. ;)</font color=green>
 
G

Guest

Guest
The reason it looks like that to you is that you're treating raw peak fillrates as if they translate to real performance. The thing to remember is that memory bandwidth is what limits graphics cards today (except for PowerVR cards). If you took a Kyro II and a GTS and gave them both unlimited bandwidth, then unless games come out with really high overdraw, like 4-5, the GTS would always win, because its raw peak fillrate is so much higher (its peak pixel fillrate is four times higher than the Kyro II's when using more than 1 texture layer). But in reality the GTS is hugely limited by memory bandwidth, to the point that its real fillrate is more like half that (or less).

Now consider what happens with so many texture layers. The Kyro II's 8-layer multi-texturing uses up lots of fillrate (8 clock cycles), but once again its clever technology conserves memory bandwidth by passing the 8-texture-layered pixel to the external framebuffer only once. The GTS, meanwhile, saves raw theoretical fillrate by needing only 4 clock cycles, but wastes memory bandwidth by needing 4 passes to the external framebuffer. The GTS is saving theoretical power, but it can never use that power unless it has the bandwidth, and it's throwing bandwidth away hand over fist with its numerous passes. So it's important to remember that pixel-pipe and TU counts are all theoretical, assuming unlimited memory bandwidth. In the end the only things that really matter are having a good amount of raw fillrate and having as much memory bandwidth as possible (or using the memory bandwidth you have as efficiently as possible).

On the point of 10 or 12 texture layers: yes, the Kyro II really wouldn't handle that number of texture layers at a good speed (although you also have to remember that it only processes visible pixels), but it would handle them a lot faster than a GTS, because it would still have the memory bandwidth to use its full raw theoretical fillrate.
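A rough sketch of the bandwidth side of that argument, with assumed round numbers (the resolution, frame rate, and 32-bit pixels are mine, not from the thread): multi-pass rendering rewrites the framebuffer once per pass, while a tile renderer blends all layers on chip and writes each pixel out once.

```python
import math

def framebuffer_writes_mb(width, height, layers, layers_per_pass,
                          bytes_per_pixel=4, fps=60):
    """Approximate framebuffer write traffic in MB/s, counting one
    full-frame write per pass (ignores Z traffic and texture reads)."""
    passes = math.ceil(layers / layers_per_pass)
    return width * height * bytes_per_pixel * passes * fps / 1e6

# 8 texture layers at 1024x768, 60 fps (all assumed figures)
multi_pass = framebuffer_writes_mb(1024, 768, 8, 2)  # 2-TU immediate-mode card
tiler      = framebuffer_writes_mb(1024, 768, 8, 8)  # tiler: one write per pixel
print(f"multi-pass: {multi_pass:.0f} MB/s vs tiler: {tiler:.0f} MB/s")
```

Under these assumptions the 2-TU card writes four times as much framebuffer data as the tiler for the same 8-layer frame, which is the bandwidth waste being described above.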

Quote from Negaverse23
"King position? Couldn't say the same on a sub 500Mhz system."

Why do you say that? Serious Sam isn't optimised for HW T&L, so on a slower system the Geforce or Radeon cards would lose the same performance as the Kyro II.

"I'll have to look up some info on this later."

Nvidia have taken 3D textures off the Geforce 3 spec sheet; that's why I originally brought up the fact that the Geforce 3 doesn't support 3D textures.

<P ID="edit"><FONT SIZE=-1><EM>Edited by Teasy on 04/17/01 03:14 PM.</EM></FONT></P>
 

HolyGrenade

Distinguished
Feb 8, 2001
3,359
0
20,780
This is a link Warden sent me. Here John Carmack writes what he thinks about the GF3: <A HREF="http://www.webdog.org/plans/1/" target="_new">http://www.webdog.org/plans/1/</A>. He talks about the features of the GeForce3, including 3D textures!

<A HREF="http://www.webdog.org/cgi-bin/finger.pl?id=1&time=20000601040557" target="_new">Here</A> you can find lots of information, including some brief info on what Carmack thinks about some graphics cards. It's a bit old though.

<A HREF="http://doomworld.com/files/doom3faq.shtml" target="_new">Here</A> you can find more (kind of) technical info on the game engine. Some of the comments from Carmack suggest that the game will "need" hardware transform. For example, the requirement of a 700MHz CPU but a high-end graphics card.



<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and despair!"</font color=red>
 
G

Guest

Guest
When Carmack talks about the Geforce 3's features he's most likely talking about the specs Nvidia first gave. Nvidia did at first include 3D textures in the Geforce 3's specs, but they have now taken that spec out, so to me that says the card doesn't really have 3D textures; maybe they found the feature is broken or something.

I really don't think a comment about a 700MHz CPU and a high-end vid card is enough to assume he won't include software T&L. It could just as easily mean that for Doom 3 he'll be putting more emphasis on fillrate and effects than on the number of polys used. I'm not saying he'll do that, but his comment could just as easily mean that, and until I hear him say that Doom 3 won't include SW T&L, I won't believe it. BTW, where does he say anything about a 700MHz CPU and a high-end graphics card? I browsed through that Q&A but I couldn't see that comment.
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
Wow! Thanks for the links. See those pictures of DOOM?? Just downright amazing. Too bad they are talking about this game for next year. Still, it is an indicator of what is ahead, just like Max Payne, which should be released this summer. I hope. Thanks.
 

HolyGrenade

Distinguished
Feb 8, 2001
3,359
0
20,780
He was talking about the sample card he got from nVidia. I hope 3D textures remain in the card, or get put back in one of the variations of the card that are sure to come out sometime this year. 3D textures would make games way more realistic: for example, bits of an object chipping off, etc.

The spec details are in the <b>What sort of system will be required to play Doom 3?</b> question. ;-)

My point is, if the best non-T&L card, the Kyro II, needs around a 700MHz CPU to run current games at high fps, how will it run Doom 3 at decent speeds? The way he is talking about it, it will be a monster of a game. (Excuse the pun.)


<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and despair!"</font color=red>
 

HolyGrenade

Distinguished
Feb 8, 2001
3,359
0
20,780
Those are in-game pictures as well. If I haven't already done so, I must thank Warden for posting those links to me.

There is a RealPlayer and Windows Media video of the Mac preview of the GeForce 3 featuring Doom 3. I'll post the link if I can find it again. I saw the demonstration on a program called 'Ex Machina' on .tv technology. It looks awesome! It looks like a pre-rendered scene of enough quality to put in movies (ok, maybe not that good), but still far beyond the quality of anything out at present!




<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and despair!"</font color=red>
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
From the pictures it looked like it came from a movie, or at least close. I agree with HolyGrenade. I also conclude that in games that use advanced physics calculations on objects, such as Max Payne and, as indicated, the upcoming Doom III engine, a non-T&L card would be at a great disadvantage. Unless you have a very powerful CPU, as in 1.1GHz or better, you can't expect good performance in games like Max Payne and Doom 3 and other games similar to them.
 
G

Guest

Guest
<font color=red> There is a new review </font color=red>

<A HREF="http://www.dansdata.com/prophet4500.htm " target="_new">http://www.dansdata.com/prophet4500.htm </A>

I don't have to tell you that it's a positive review.
;)
 
G

Guest

Guest
Just to keep things even...

Let's hear what some more game developers think. From an interview with Tim Sweeney, a well-respected developer from Epic Games:

ps - What's your take on the kyro2 and tile based rendering?

Tim - It's a competent TNT2 class chip, and the sorting and alpha-testing artefacts of past generations seem to have been sorted out successfully. But, like every generation of PowerVR hardware before it, it's a day late and a dollar short. It lacks support for basic DirectX7 (yes, 7!) features like cube maps. The kyro developers are cool guys, so it pains me to say that this is just not a viable piece of hardware in the market it's trying to compete in.

<A HREF="http://www.voodooextreme.com/articles/askingmrsweeney.html" target="_new">http://www.voodooextreme.com/articles/askingmrsweeney.html</A>

Also, the cheapest MX card right now on Pricewatch is $57 for 32MB and $105 for 64MB, quite a bit cheaper ($32/$54) than the Kyro, and the Kyro is still not available. It is very possible that the GeForce3 will beat it out and push prices down even further. I hear the Elsa card will drop by at least $100 very soon.


i had a drink the other day... opinions were like kittens i was givin' away
 
