Kyro 2 the killer of nvidia ???

In 1999, when the GeForce with T&L arrived, everybody was excited, everybody was praising T&L, everybody was sure that "future games" would take advantage of it, that "future games" would need it, that "future games" couldn't be played without it.

It's 2001. This thread is still full of arguments that "in the future", when games take full advantage of T&L, the Kyro II will be crippled and lacking and going down...

T&L -> Vertex Shaders in DX8. I think nobody will use T&L in "classical form", true eye candy needs VS, I think...

The future are Kyro3 and Radeon2, not GeForce MX 🙂
 
Nobody said the future is the GeForce MX. I'm already enjoying several T&L titles on my GeForce 256 DDR. The programmable shaders allow game developers to utilise the T&L engine etc. to suit them. Any game with programmable vertex shaders will also support the standard shader routines. Such games have a much higher chance of supporting those than CPU-based geometry and lighting.

<i>"in the future", when games take full advantage of T&L, KyroII is crippled and lacking and going down... </i>

The Kyro is already showing its weaknesses in Mercedes Benz Racing and Aquanox. They play fine on my age-old GeForce 1. It's true, the uptake of T&L has been slow. But the XBOX will ensure the same doesn't happen with programmable shaders. There are already games that support its features.

Besides, nVidia already has a programmable card, and no one knows anything about the Kyro 3 or the Radeon 2. It's funny how you're comparing these two unreleased cards with the weakest of the nVidia GeForce series.


<i><b><font color=red>"2 is not equal to 3, not even for large values of 2"</font color=red></b></i>
 
Yap ...
you are comfirming my point ...
ok english is not my primary language not even my second language...
we were talking about a video card not about who talks better the english language...
are you telling that because I don't grasp very well the english the kyro card sucks ???
lol
😉


to noko yes there will be a linux driver..
I am not sure if it will be available on the kyro 2 release...


<P ID="edit"><FONT SIZE=-1><EM>Edited by powervr2 on 04/09/01 02:24 PM.</EM></FONT></P>
 
Ok, holygrenade, I got carried away a bit... sorry for that, but I don't believe that you play Aquanox 🙂 As far as I know, only a benchmark based on this game is available... or is it a demo?
 
soz... Benz Racing plays fine. I'm still waiting to try Aquanox.


<i><b><font color=red>"2 is not equal to 3, not even for large values of 2"</font color=red></b></i>
 
You clearly stated the mentioned developer comments, and your interpretation seems to be right on the money as far as I can see. I was just hoping for a reason why software T&L hinders hardware T&L when a game includes both methods. Maybe to use the CPU you have to design the game around system memory allocation, operating system calls, and tricks of the trade to use the CPU effectively, while if you use the GPU everything is done on board the video card, and the game can be streamlined without all the extra functions needed to get the non-specialized CPU to perform the T&L tasks. In some ways it makes sense to me, in others not. Maybe a programmer can enlighten us.
 
AMD PROCESSORS AND VIA CHIPSETS RULE

-- They have found a way to harness the power of a thunderstorm and expell it with great force!--
 
That's great, could you give me a reference for the Linux driver or the approximate time it is expected to be released? Since the Kyro has been around for a while, are there current Linux drivers? I hope so, because right now it looks like my only good choice for a Linux video card is nVidia (which is fine with me). Supposedly the Radeon will have 3D support shortly, but how good it will be is anyone's guess. Thanks.
 
I've done a lot of programming but never actually made any games that need Hardware 3D, but i've been looking into it.

A lot of the problems you described are true. But in any case the system memory has to be used. In CPU-based operation it is used directly, and when hardware T&L is used the memory is still used to fetch and store the data off the hard disk. On top of that, the system memory is also used for several other tasks, as is the CPU.

But the main problem I can see is that the GPU is far too different from the CPU for writing code that is reusable across both platforms. This means almost all the code has to be duplicated: almost twice the code, and thus twice the development time.

Another problem is that the models and the rest of the 3D assets have to be duplicated to scale properly on both platforms. If a model is optimised for the T&L engine, it's gonna be slow for CPU-based calculations; and if it is optimised for the CPU, well... there is no point having the GPU. So a compromise is reached. How that happens is still beyond me, but I'm still looking into the subject, so perhaps we'll find out in the future. Unless some experienced developer sorts it out for us.
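To make the two paths concrete, here is a rough sketch (in Python, purely for illustration; a real engine of that era would use hand-tuned C/assembly, and the hardware path would hand a raw vertex buffer to the Direct3D or OpenGL driver instead). All function names here are made up:

```python
def transform(vertex, matrix):
    """Multiply a 3D point (treated as homogeneous [x, y, z, 1])
    by a 4x4 row-major transform matrix."""
    x, y, z = vertex
    v = (x, y, z, 1.0)
    return tuple(sum(matrix[r][c] * v[c] for c in range(4)) for r in range(4))

def diffuse(normal, light_dir):
    """Simple per-vertex diffuse lighting: N.L clamped to [0, 1]."""
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, min(1.0, ndotl))

def software_tnl(vertices, normals, matrix, light_dir):
    """The CPU fallback path: transform and light every vertex in a loop.
    With hardware T&L this entire loop disappears into the GPU."""
    return [(transform(v, matrix), diffuse(n, light_dir))
            for v, n in zip(vertices, normals)]
```

The hardware path replaces that whole per-vertex loop with a single buffer submission, which is one reason the two code paths (and the assets tuned for them) end up so different.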


<i><b><font color=red>"2 is not equal to 3, not even for large values of 2"</font color=red></b></i>
 
Well, powervr, maybe the Kyro is doing so well because of tile rendering?! Get a Kyro 2 and try running anything designed for hardware T&L. No, wait: try a Kyro 2 on Giants. When I run it with software T&L, it makes my Radeon feel like a TNT2.
 
English translation of the Kyro II review from RivaStation is now available here:
<A HREF="http://www.rivastation.com/3dprophet4500-64mb_e.htm" target="_new">http://www.rivastation.com/3dprophet4500-64mb_e.htm</A>

=
<font color=green>Running Celery 400 since 1999.</font color=green>
 
Thanks for your very good reply. Your explanation does clear up the problem of optimizing for either CPU or GPU T&L for me. I just hope the different T&L engines (Radeon, GF1, GF2, GF3, ...) don't need to be individually programmed for optimum performance.

<P ID="edit"><FONT SIZE=-1><EM>Edited by noko on 04/09/01 07:21 PM.</EM></FONT></P>
 
I agree with holy... but the problem is that there are other issues as well...
like bandwidth... etc... that are even more important !!!
by the way..
T&L only games are far far away in the future...

Did you know that tnt-2 M64 and vanta are the best sellers (even now) of nvidia products?
if they make a T&L only product will they make $$$ ?

I read in an interview (I don't remember the link) that they (Imagination) are working on a Linux driver...
but in it they only said that they were working on one...
they didn't give more information than that...

<P ID="edit"><FONT SIZE=-1><EM>Edited by powervr2 on 04/09/01 07:56 PM.</EM></FONT></P>
 
holygrenade -- Yes, thank you for that detailed reply. My ideas on why software and hardware T&L didn't mix were along these lines, but I didn't really <i>know</i>, so I kept my yap shut. :wink: The way you explained it makes sense, so thanks for the clarification.

Regards,
Warden
 
noko – I don't think the plain T&L engines are very different between the GF1, Radeon, etc. DirectX 7 defined what the so-called "hardwired" T&L units should do, and I think, despite minor hardware differences, they all appear the same to the software. As for the different shaders in DX8 I am not so sure. There may be software differences needed depending on the card (that is, once ATI and others release a DX8 card). But since the shaders <i>are</i> defined in DX8, rather than being proprietary, maybe they can all be inter-compatible.

Cheers,
Warden

<P ID="edit"><FONT SIZE=-1><EM>Edited by Warden on 04/09/01 09:34 PM.</EM></FONT></P>
 
Bandwidth, the speed or rate at which data can be sent and received, could be used as a definition. The DDR boards have more bandwidth than an SDR version for the same bus width. The Radeon and the GF series with DDR RAM have more bandwidth than a Kyro II card with SDR. Now, how effectively a design uses that bandwidth is the question. The Kyro II uses its bandwidth very effectively, while the other solutions have some waste, such as texturing unseen polygons and Z-buffer reads and writes; the T&L engine itself also requires some bandwidth when in use.

You make an interesting point: most cards being sold now (in numbers) are non-T&L cards. I don't know the exact figures, but all ATI products except the Radeon chipset are non-T&L cards, as are all NVIDIA chips except the GF series, all Matrox, all S3 (except the S3 2000 chip), all 3dfx cards, and those of other chip makers. Still, the number of T&L-enabled cards is increasing, and it would be smart for a developer to cater to a very fast-growing segment of video cards. Much money could be earned with a killer game that smashes anything like it; maybe T&L would be the ticket, I don't know.

Plus, what a lot of people I believe are missing is that the programmability of the GF3 saves a tremendous amount of bandwidth. What the GF3 can do in real time is way beyond what an Athlon 1.33GHz or a Pentium IV could do by itself. What the GPU in the GF3 can do with vertex programming is very impressive. In short, manipulating data using programmable vertex and pixel shaders in the GPU saves a tremendous amount of bandwidth for the same effect. I hope the Kyro III is not only a T&L card but also a DX8(9)-compliant (programmable) one.

Thanks for the info on what you know about Linux support; I will see if I can dig something up on it.
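For what it's worth, peak theoretical bandwidth is easy to compute: bus width in bytes times effective memory clock, with DDR doubling the effective clock. A quick Python sketch (the Kyro II and GeForce2 GTS clock/bus figures below are approximate period specs quoted from memory, so treat them as ballpark numbers):

```python
def memory_bandwidth_gbs(bus_bits, clock_mhz, ddr=False):
    """Peak memory bandwidth in GB/s for a given bus width and clock.
    DDR memory transfers data on both clock edges, doubling the rate."""
    bytes_per_transfer = bus_bits / 8
    effective_mhz = clock_mhz * 2 if ddr else clock_mhz
    return bytes_per_transfer * effective_mhz / 1000  # MB/s -> GB/s

# Approximate period specs (assumed, may be slightly off):
kyro2 = memory_bandwidth_gbs(128, 175)            # 128-bit SDR @ 175 MHz -> 2.8 GB/s
gts   = memory_bandwidth_gbs(128, 166, ddr=True)  # 128-bit DDR @ 166 MHz -> ~5.3 GB/s
```

On paper the DDR board has nearly twice the bandwidth; the point above is that the Kyro's deferred rendering wastes far less of its smaller number.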
 
SassVonPass,
T&L -> Vertex Shaders in DX8. I think nobody will use T&L in "classical form", true eye candy needs VS, I think...
I don't think that is quite true. If you think about it, most of the pixels in a scene can be rendered with plain T&L. Only the "flex" points on models, waves of water, etc. will need the Vertex Shader. In case you missed it in my earlier post, here is an interesting quote from Tom's:

"GeForce3 does also contain the so-called 'hardwired T&L' to make it compatible to DirectX7 and also to save execution time in case a game does not require the services of the 'Vertex Shader'."

Note that he gives 2 reasons for the "old" T&L unit to be included with the GeForce 3:
1. Backwards compatibility, which is obvious.
2. to "save execution time" when the Vertex Shader isn't needed.

Since the Vertex Shader is hot new technology, we tend to think of it as better/faster at everything. But it is programmable, and programmable processors are never as fast as their hardwired equivalents, just much more flexible. It is of course MUCH faster than doing those same special effects on the CPU, which was the only way to do them before the Vertex Shader came along. But I think the old T&L unit will still get the majority of the use. Even the most "eye-candy"-filled scene I can picture would still be half plain-T&L material.

Regards,
Warden
 
powervr2,
ok english is not my primary language not even my second language...
I did not know that English was not your primary language, and for that I apologize and retract some of my comments.

I am not retracting <i>all</i> of my comments about your English, because it is obvious that you are quite comfortable with the language. Your reading and writing seem rushed and sloppy more than anything else. You miss the points that others are making, and your own points are fuzzy at best, simply because you don't spend the time to make them clear.
we were talking about a video card not about who talks better the english language...
When, over and over and over, you miss the clearly stated points that others are making, your English skills obviously come into question.
are you telling that because I don't grasp very well the english the kyro card sucks ???
I am telling you that if you DID grasp English better, or, more likely, just took more time to understand others' posts, you would already KNOW that the Kyro card is not all you claim it is. Notice that I didn't say it sucks, because it doesn't.

I wish you would stop treating people who disagree with you as the "enemy." I, and I think most others, are just here to discuss graphics cards so we can all learn something. Your forum habits, which I have already spent enough time pointing out, make this difficult.

Regards,
Warden
 
As long as any technology is from the same era or has been accepted (GF1, GF2, etc.), it is tied into the unified APIs like DirectX very well. Games programmers don't have to worry about it very much, and all of those things lie in the graphics card, so the differences between them are handled mainly by the Direct3D library or the manufacturer's OpenGL library. They aren't individually optimised.

What I'm not too sure about is the vertex shaders. I'm not sure if a manufacturer will release its own instructions for any vertex shaders it uses in its next card, or if it all has to go through standard DirectX.


<i><b><font color=red>"2 is not equal to 3, not even for large values of 2"</font color=red></b></i>
 
Nobody is stating that the KYRO II is the perfect card. But it is not a bad card either! Now let's talk about T&L... the so-called holy grail of 3D GPUs. To date there is not a game on the market that requires T&L, and I cannot see a T&L-only game being released for a very long time. The retail market for games these days is bad enough, and I do not believe developers will cut their own throats by releasing a T&L-required game anytime soon. This can be said of Unreal II as well. But Unreal II is in reality a non-issue, as it will not be released until next year. By that time the KYRO (which most likely will not be called KYRO), or rather the PowerVR 4 board, will be released and will include a hardware T&L unit.

In fact, PowerVR already has a hardware T&L board on the market in the Naomi 2 (the arcade board Sega is currently using), so the technology is available to ImgTec and STM. STM has also mentioned they are going to meet with Tim Sweeney to show him how T&L actually works on the PowerVR technology (he has said that T&L would not work, and that was one of the reasons he felt the tech is not good). So the technology is there. Why it is not on the KYRO II board... who knows, but it is most likely a cost factor...

Now, I also believe that T&L is a great feature and has a wonderful future, but presently it is not needed or required. Some of you may point at the MDK2 benchmarks, but the KYRO II still plays this game extremely fast. Up the bandwidth requirements and the KYRO II will in fact become the fastest.

Let's take a look at Serious Sam, which also sports T&L support. From the results I have seen, it looks as if the KYRO II is the card to have for Serious Sam, beating out most of the GeForce 2 boards in benchmarks. Hmmm... does T&L help the GeForce 2 boards in this game?

Someone mentioned MBTR earlier... this game supports T&L, so I think this individual was probably referring to the problems some reviewers have found with the KYRO II and this game. Well, I am sorry to say that the problem was not the KYRO's fault but rather the game developer's. A recent patch from the developer has corrected all the problems the KYRO has had with this game (no internal registry settings needed now). In fact, this same patch is needed to get the GeForce 3 to run the game properly. You can look at this page to see how well the KYRO II actually does in MBTR at 32-bit color (ignore 16-bit color, as 16-bit rendering on the KYRO is not the same as 16-bit rendering on any other board due to its use of internal true color):
<A HREF="http://www.rivastation.com/3dprophet4500-64mb_e.htm" target="_new">http://www.rivastation.com/3dprophet4500-64mb_e.htm</A>

Someone else mentioned Aquanox as a reason not to buy the KYRO II. Well, from what I have seen so far, I would not buy anything but a GeForce 3 to run this game. As for the individual who claims to be running Aquanox on a GeForce 1... I do not think so. It is interesting to see in these benchmarks, though, that the KYRO II is able to keep pace with the MX and Radeon boards at 32-bit color. If this game were released today... who would buy it?? I know I wouldn't, even though it has outstanding visuals. Anyone remember Nocturne and its unrealistic system requirements? I think people are becoming more interested in that game today simply because more people are able to play it.

Now, an interesting point that people have not really touched upon in this thread is game complexity. Increase game complexity (in other words, overdraw, texture layers, etc.) and you in effect increase the bandwidth required to play the game at a fast fps. With increased complexity, which board(s) do you think will be hit the hardest? Well, you can look at Serious Sam and see that the KYRO 1/2 are not hit very hard, while the GeForce 2 boards take the hardest hit (AnandTech review). Another good way to see this is to look at how well a board performs at high resolution @ 32-bit color depth with today's games (i.e. 1280x1024). If you look at the benchmarks, you will see the KYRO II is least affected by increased bandwidth requirements, even in games that utilize T&L (remember, hardware-only T&L games are a thing of the distant future).

So if you want to talk about how future-proof a board is, do not forget to include increasing game complexity and bandwidth requirements in the equation. T&L will do squat for you if your board is already memory-bandwidth limited. That is the main reason 3dfx was looking into HSR software support in their drivers... and the reason the GeForce 3 includes HSR in software (hmmm... some of you are quick to jump on the KYRO II for doing T&L in software, but seem to praise the idea of HSR in software, which is nowhere near as fast as hardware HSR). This is probably why a small chip with very few transistors and virtually the same specs as a TNT2 board is able to compete with the GeForce 2 boards on the market (minus the Ultra). Increase game complexity and I really believe the KYRO II will only get stronger compared with the GeForce 2 boards. I wonder if this is a reason why Hercules has dropped the MX boards from its low- to mid-level lineup. Could this also be why rumours of a Creative Labs KYRO II board are floating around the net (other OEMs cannot announce their boards yet, as Hercules has been given one month's free promotion of their board)?
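The overdraw argument can be put in rough numbers. An immediate-mode renderer shades every layer of overdraw, while a tile-based deferred renderer like the Kyro shades only the visible layer. A small Python sketch (the resolution, frame rate, and overdraw figures are purely illustrative assumptions):

```python
def shaded_mpixels_per_sec(width, height, fps, depth_complexity):
    """Megapixels the chip must texture per second: screen pixels times
    frame rate times the number of overdraw layers actually shaded."""
    return width * height * fps * depth_complexity / 1e6

# Illustrative scene: 1024x768 @ 60 fps with an average overdraw of 3.
immediate = shaded_mpixels_per_sec(1024, 768, 60, 3)  # shades every layer
deferred  = shaded_mpixels_per_sec(1024, 768, 60, 1)  # shades only what is visible
```

Tripling the overdraw triples the fill and texture-bandwidth cost for the immediate-mode board, which is exactly the gap one would expect to widen as games get more complex.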

So, even though the KYRO II is not perfect, it is still good enough to warrant a serious look. Simply because it does not support hardware T&L does not mean the board is useless. It supports EMBM, has "working" S3TC support (unlike the GeForce boards, if I have read correctly - the Quake 3 sky), an extremely efficient design, excellent 16-bit image quality (for those games that only support 16-bit), excellent 32-bit performance (hardly a drop between 16-bit and 32-bit), very good FSAA performance, etc....

So before ditching the board because of its lack of T&L, remember to look at all the factors affecting game performance, not just T&L.

Sorry for the long rant....
 
pvrrev,
Nobody is stating that the KYRO II is the perfect card.
Well, powervr2 seems to come across that way a lot... :wink: (A case in point is the title of this thread, "Kyro 2 the killer of nvidia ???", which is quite an exaggeration in my opinion.) But the rest of us, and powervr2 too on occasion, have acknowledged both pros and cons of the Kyro II. I don't think anybody said it was worthless.

I do however think you are underrating the T&L issue. I am not sure how long you mean by "a very long time", but it's my opinion that we will see games come out within a year that will require hardware T&L. If you want to buy a card now <i>and</i> next year then go with the Kyro. But I like my cards to last a couple years, and I don't see the Kyro II effectively doing that without T&L.

Now as some others have pointed out, if the Kyro II had been released 6 to 8 months ago, when its technology could have carried it for longer, it would have been a much more viable purchase. Today, however, I don't recommend buying <i>anything</i> slower than a T&L enabled card on the level of a GeForce 2 GTS/Radeon. If you can save for a few more months and get a DX8 compliant card, all the better. I could be wrong, though... it has happened before. 😎

The Kyro II <i>does</i> represent some very interesting and different technology. The use of internal true color, tile based rendering, FSAA and lower bandwidth requirements are all potentially great features. I would like to point out that the Kyro's method of achieving these things is not the <i>only</i> way to do it, but again, their way looks to be competitively useful. If their next version includes T&L, it could definitely become a third major competitor. I like competition and I hope to see them develop such a product. I also like the fact that NVIDIA has forced the competition to speed up, but that doesn't mean I want them as the sole GPU supplier. :wink:

Regards,
Warden<P ID="edit"><FONT SIZE=-1><EM>Edited by Warden on 04/10/01 11:34 PM.</EM></FONT></P>
 
<i>"To date there is not a game on the market that requires T&L."</i>
Perhaps you mean exclusively works with T&L. Otherwise you're very wrong.

<i>"Also, I cannot see a T&L only game being released for a very long time."</i>
Perhaps that is true, but the weight IS shifting from cpu optimizations to GPU optimizations.

<i>"The retail market for games these days is bad enough and I do not believe developers will be cutting their own throats by releasing a T&L required game anytime soon."</i>
The T&L use in games IS getting heavier.

<i>"By that time, the KYRO (most likely will not be called KYRO), or rather PowerVR 4 board will be released and will include a T&L hardware unit."</i>
Hmm... Conflicting Messages.

<i>"Why it (T&L) is not on the KYRO II board....who knows, but it is mostly likely a cost factor..."</i>
This was an intermediate release. This is what manufacturers do when they need more money for a project they are carrying out: they update a few parts of an existing product and re-release it. Don't get me wrong, I'm not criticising anyone here, and sometimes it even helps the customers and creates new markets for the product. The Kyro 2 is basically a higher-clocked Kyro.

<i>"If this game (Aquanox) was to be released today....who would buy it?? I know I wouldn't even though it had outstanding visuals."</i>
People with T&L cards, for starters. And anyway, I did not say this game is a reason not to buy the Kyro 2. I just gave it as an example of the shape of things to come.

<i>"GeForce 3 includes HSR in software"</i>
Now where did you hear that? The GeForce 3 has several features to reduce the bandwidth required by games: for example, lossless Z-compression that allows 4:1 compression, <b>hardware</b> HSR, and "higher-order surfaces" that allow splines to be sent to the GPU, with all the tessellation done by the GPU. It also uses several other bandwidth-enhancement techniques.

In this post you come across as rather bitter. Also, your name suggests you are here to defend the PowerVR-based card. I won't say anything about that other than that it is quite suggestive and subjective. Now, you may say my POV isn't entirely objective either, and perhaps that is true to a certain extent; everyone is affected by their weltanschauung. But I have given my reasons why I believe the card will have difficulty being a success.

I do think that if they can drop the price by 30-40%, it will be a decent choice of card. Otherwise, unless you want to get another card in six months' time, a Radeon or GeForce 2 Pro will be a better choice. They are, after all, in the same price range.


<i><b><font color=red>"2 is not equal to 3, not even for large values of 2"</font color=red></b></i>
 
<i>Lets take a look at Serious Sam, which also sports T&L support. From the results I have seen it looks as if the KYRO II is the card to have with Serious Sam, beating out the most of the GeForce 2 boards in benchmarks. Hmmm...does T&L help the GeForce 2 boards with this game?</i>

Try running the Kyro II on a Celeron 533. You'll then wish that you had HW T&L support.

=
<font color=green>Running Celery 400 since 1999.</font color=green>