OK, here is my latest stance on the whole Kyro II situation; it is basically written to Teasy:
I changed my mind on something I said to you earlier. After reading Tom's article on the GeForce 3, I was under the impression that the Vertex Shader would only be used as needed for certain special effects, and that it would NOT replace hardwired T&L throughout a game. Well, I now believe this to be wrong. Listening to you got me to look into this further, and after reading several more reviews and some white papers on NVIDIA's website, I think that the GF3 Vertex Shader can, and is intended to, completely replace standard T&L in DX8 games.
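To show what "replacing" means in practice, here is a minimal sketch of a vs.1.1 program that reproduces the simplest thing the hardwired pipe does. The variable name is mine, and I'm assuming the usual DX8 conventions: position in v0, diffuse in v5, and the world-view-projection matrix stored transposed in constants c0-c3.
<pre>
// Sketch (mine): a vs.1.1 program doing the bare minimum the hardwired
// pipe does -- transform position by the world-view-projection matrix
// and pass the diffuse colour through untouched.
const char* g_szFixedFunctionEquivalent =
    "vs.1.1             \n"
    "m4x4 oPos, v0, c0  \n"   // position * WVP matrix (expands to 4 dp4s)
    "mov  oD0,  v5      \n";  // diffuse colour, unchanged
</pre>
You would assemble that with D3DXAssembleShader() and hand the result to CreateVertexShader() along with a vertex declaration. The point is just that everything the hardwired unit does can be restated as a short program like this.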
I still don't see this as making today's hardwired T&L engines less important though. Think about it. We now have three T&L technologies:
1. CPU powered T&L
2. "Hardwired" T&L
3. "Vertex Shaders"
The game designers I quoted much earlier in this thread complained of the difficulty of making game engines run well on two completely different systems (software vs. hardware). I think it is a safe bet that many won't want to tackle three systems in one game. They can't drop <i>both</i> of the older technologies or they would eliminate 99% of their market. The question then is which one <i>will</i> be dropped. I think that software mode would be dropped for the obvious reason that it is the oldest and by far the slowest of the three technologies.
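To make the "three systems" point concrete, here is roughly what the decision looks like at engine startup under DX8. This is a sketch of mine, not from any particular engine, and the names are invented:
<pre>
#include <d3d8.h>

enum TnLPath { TNL_SOFTWARE, TNL_FIXED_FUNCTION, TNL_VERTEX_SHADER };

// Hypothetical startup helper: pick one of the three T&L paths
// from the DX8 device caps.
TnLPath ChooseTnLPath(IDirect3D8* pD3D)
{
    D3DCAPS8 caps;
    pD3D->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    if (caps.VertexShaderVersion >= D3DVS_VERSION(1, 1))
        return TNL_VERTEX_SHADER;    // GeForce 3 class hardware
    if (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT)
        return TNL_FIXED_FUNCTION;   // GeForce/GeForce2/Radeon class
    return TNL_SOFTWARE;             // Kyro/Voodoo class: the CPU does it
}
</pre>
Every extra branch there means a separate art budget, a separate performance target, and a separate test matrix--which is exactly why I think one of the three has to go.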
You have made arguments against this before and I will attempt to answer some of those now:
1.
<font color=blue> hardwired T&L will be phased out for the simple reason that it is hardwired; at least CPUs are programmable, and as they get faster they can always be used for things like vertex shaders, something that can't be said for hardwired HW T&L. ...I'm sure in a year CPUs will be just as fast (if not faster) than the GeForce 2 HW T&L unit...</font color=blue>
Well, first off you seem to think that "hardwired" is bad and that programmable is the wave of the future. I believe this shows a misconception about hardware and software. "Hardwired" is always faster than "programmable," and is used in the technology world whenever possible for that reason. "Programmable" is used only when more flexibility is needed, and it comes at the price of reduced speed. Programmable features are now being added to graphics chips because the chips have finally matured enough to allow this while maintaining playable frame rates--but this does <i>not</i> mean that "programmable" is faster than "hardwired." If all we needed was programmability, we would just scrap graphics cards and use the CPUs we already have. Instead, the reason graphics accelerators were invented in the first place was that the highly programmable architecture of a CPU could not render images fast enough. Therefore, specialized "hardwired" graphics cards came along that could rip a CPU to shreds--and this was back in the original Voodoo days, when CPUs were quite advanced compared to graphics chips.
Jump ahead to today and graphics chips have caught up with and surpassed modern CPUs, and they are still specialized to do one thing: process graphical images. A large chunk of the transistors in these GPUs is dedicated just to T&L. Regular desktop CPUs are not going to match this kind of speed for a long time, and between now (well, a year from now) and then, graphics cards without some kind of hardware T&L are going to suck. Even once CPUs are capable of GF2 T&L speeds, it will be far enough in the future that GF2-class speed will be irrelevant, unless you only play five-year-old games.
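Some back-of-the-envelope numbers on why--all approximate, and all mine, not from any review:
<pre>
// Rough cost of T&L per vertex, counting floating-point operations
// (my estimates, not measured figures):
const double kFlopsPerVertex    = 28 + 25;  // 4x4 transform + ~1 directional light
const double kGF2VerticesPerSec = 25e6;     // NVIDIA's quoted GeForce 2 rate
const double kTnLFlops = kFlopsPerVertex * kGF2VerticesPerSec;  // ~1.3 GFLOPS
const double kP3PeakFlops = 4e9;            // 1 GHz P-III SSE *theoretical* peak
// kTnLFlops / kP3PeakFlops is roughly 1/3: a third of the CPU's peak,
// sustained, just for T&L -- before the game itself runs a single line.
</pre>
And peak SSE throughput is never reached in practice, so the real picture for the CPU is even worse.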
2.
<font color=blue> No game can really fail to work on a non-HW T&L card; if anyone was stupid enough to make a game that didn't allow SW T&L, then a simple driver trick like geometry assist could be added and the CPU could run the game's HW T&L engine...</font color=blue>
At first I simply blew off this "driver trick" point as being baseless, but your later post about 3dfx successfully experimenting with this made me think twice. I would like a link or some other source of information where I could read up on how this worked. I must admit that I am still very dubious of the idea as it just doesn't make sense to me. But if you can hook me up with some good information about it I will certainly change my stance.
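In the meantime, it is worth noting that DX8 itself already ships something in this spirit, though in the runtime rather than the driver: create the device with software vertex processing and the DX8 runtime will run fixed-function T&L (and even vs.1.1 shaders) on the CPU. A sketch--the function name is mine:
<pre>
#include <d3d8.h>

// Hypothetical helper (mine): create a HAL device that falls back to
// the DX8 runtime's software vertex pipeline when the card has no
// hardware T&L of its own.
HRESULT CreateDeviceWithFallback(IDirect3D8* pD3D, HWND hWnd,
                                 D3DPRESENT_PARAMETERS* pPP,
                                 IDirect3DDevice8** ppDevice)
{
    D3DCAPS8 caps;
    pD3D->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    DWORD vp = (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT)
             ? D3DCREATE_HARDWARE_VERTEXPROCESSING   // card does the T&L
             : D3DCREATE_SOFTWARE_VERTEXPROCESSING;  // CPU does the T&L

    return pD3D->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                              hWnd, vp, pPP, ppDevice);
}
</pre>
Of course that only saves a game whose developer bothered to create the device that way, and it says nothing about the speed--which is the whole argument.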
As for it being "stupid" to produce games with no software T&L, I completely disagree. When these games come out (which I am saying is within a year), hardware T&L will have been around long enough to be considered average, even "old." Every new technology eventually becomes required (unless it flops and disappears), and hardware T&L will have had 2.5 years. Remember, even though the average card sold now is a TNT2, that is not the average card purchased by 3D GAMERS. Even casual gamers have shown themselves willing to buy one generation ahead of the masses. If anything, I think it would be very stupid to drop support for hardware T&L.
3.
<font color=blue> The X-Box will be using vertex shaders too, and you think that a year from now an X-Box game ported over to the PC won't add SW T&L support but will add hardwired T&L support? </font color=blue>
Yes this is exactly what I think they will do, and it's what I think DX8 PC developers will do too. This goes along with my point in the last paragraph about how this wouldn't be "stupid," and I have a couple of reasons I'll state:
Hardwired T&L can still push polygon counts close to what the Vertex Shader will, which means designers won't have to redo all their models like they would for a software mode. Even if a lot of games start supporting a scalable polygon architecture like Sacrifice does, I don't think it will change things much. Having to design a game that could lose 90% of its polygons on some systems, yet still play the same, would be extremely annoying at best. As polygon counts get higher, we will find them more and more comparable to screen resolution in their importance. Can you imagine trying to make a game that would still be playable if it lost 90% of its screen resolution? Simply losing the Vertex Shader effects would be much less of a design problem, I would think.
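To put a number on that design headache: a scalable-geometry system ends up shipping something like the following, where the software path gets a tenth of the polygons. The figures and file names are invented for illustration, and TnLPath is from the sketch earlier in this post.
<pre>
// Invented example: the same model at different polygon budgets,
// picked by the T&L path detected at startup.
struct MeshLOD { const char* file; int triangles; };

const MeshLOD g_playerModel[3] = {
    { "player_hi.mdl",  20000 },  // Vertex Shader path
    { "player_hi.mdl",  20000 },  // hardwired T&L: same budget, fewer effects
    { "player_lo.mdl",   2000 },  // software T&L: ~10% of the polygons
};

const MeshLOD& PickPlayerModel(TnLPath path)
{
    switch (path) {
        case TNL_VERTEX_SHADER:  return g_playerModel[0];
        case TNL_FIXED_FUNCTION: return g_playerModel[1];
        default:                 return g_playerModel[2];
    }
}
</pre>
Note that the two hardware rows share one set of art--only the software row forces a second, gutted version of every model, which is exactly the work I expect developers to refuse.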
Also, the CPU in the Xbox, and in PC games that require hardware T&L, will be put to use doing other things. A couple of full-blown physics modeling programs are now being ported from the scientific community for use in games. Not all games will use these, of course, but they are indicative of the direction games are taking. Developers are never content to leave CPU cycles sitting about unused, and they will fill them with all sorts of physics and AI calculations, etc. So designing these games for a software mode would require a steep reduction not only in polygons but in these other areas as well. I just don't see software T&L support lasting for long.
I can't, of course, end this without reprinting that quote from the conclusion of Tom's <A HREF="http://www.tomshardware.com/graphic/01q2/010419/geforce3-18.html" target="_new"> latest article</A> on the GeForce 3, since he says almost exactly what I have been saying:
<font color=red> Your current 3D-card will most certainly be able to run the 3D-games of the next 6-12 months just fine, especially if it has a GeForce, GeForce2 or Radeon based architecture and thus T&L. Without T&L you might be able to play today's games, but I doubt that any of the new game engines is going to appreciate 3D-cards without T&L anymore. Keep that in mind if you are considering a Kyro2 card. </font color=red>
Once again, I do not hate the Kyro cards by any means, and I am glad their technology is on the scene shaking things up. I just wouldn't recommend buying one until T&L (or better) is included.
Regards,
Warden
===========
The sum of the IQs on this planet is a constant--only the population is increasing...