Kyro 2: the killer of NVIDIA???

Maybe this will help. Yes, the Athlon bus is double-pumped, allowing much greater bandwidth <b>potential</b>. I say <b>potential</b> because if the memory connected to the FSB is not also double-pumped, the full bandwidth of the Athlon bus will not be used. DDR, which is double-pumped, can exploit that increased bandwidth potential of the Athlon bus, just as AGP doubles its bandwidth potential going from 2x to 4x. But just because you have the ability to transfer twice the amount of data doesn't mean it will be used or needed. In most applications, even after doubling the data transfer rate between the CPU and video card, there is little or no improvement. Why? Because the application doesn't need that increased bandwidth, at least not yet. Same with DDR: twice the bandwidth, yet most applications don't seem to improve that much using DDR RAM.

<P ID="edit"><FONT SIZE=-1><EM>Edited by noko on 04/19/01 06:14 PM.</EM></FONT></P>
 
Do you know of any applications that would demonstrate the extra bandwidth of AGP 4X?

I would have thought that something like 3DMark2001 would do this but I find little difference between AGP 2X and 4X. Come to think of it though, I never tested this at really low resolution. This would eliminate the video card as a bottleneck.

Maybe I have been performing the wrong kind of tests all along.

Thanks once again, Noko.
 
Look at Tom's discussion of <b><font color=purple>"The Impact of the AGP-Speed"</font></b>, which he did some time ago. It is still applicable today, particularly in professional applications. Check it out at:
<A HREF="http://www6.tomshardware.com/mainboard/00q1/000214/index.html" target="_new">http://www6.tomshardware.com/mainboard/00q1/000214/index.html</A>

Very good article; recommended reading for everybody.

<P ID="edit"><FONT SIZE=-1><EM>Edited by noko on 04/19/01 11:21 PM.</EM></FONT></P>
 
Hardware T&L again.
<b><font color=purple>Without T&L you might be able to play today's games, but I doubt that any of the new game engines is going to appreciate 3D-cards without T&L anymore. Keep that in mind if you are considering a Kyro2 card.</font></b>
<A HREF="http://www.tomshardware.com/graphic/01q2/010419/geforce3-18.html" target="_new">http://www.tomshardware.com/graphic/01q2/010419/geforce3-18.html</A>
I found this to be true when benchmarking with 3DMark2001. One note: the Serious Sam engine is modern, runs well on the Kyro2, and wasn't designed around a T&L card.


<P ID="edit"><FONT SIZE=-1><EM>Edited by noko on 04/20/01 00:05 AM.</EM></FONT></P>
 
phsstpok,
You are correct in thinking that the FSB is NOT the limiting factor in AGP 4x. You are also correct in saying that the FSB on your computer is faster since it is double-pumped. But like Noko pointed out, this doesn't necessarily help in real-world performance due to your RAM not matching that speed.

In all situations like this where you need to see what is "faster", you need to figure out the actual bandwidth--MHz alone don't tell the whole story. Use the formula I used in my last reply to calculate it:

width of data path in bytes <font color=red>x</font> clock speed in MHz <font color=red>x</font> transfers per cycle <font color=blue>=</font> bandwidth in MB/sec

Note that most bus widths are given in bits, so be sure to convert to bytes (bits/8 = bytes). Note also that if your speed is in MEGAhertz then your answer will be in MEGAbytes; if it's in GIGAhertz it will be in GIGAbytes, etc.

Use this to calculate your FSB bandwidth:

8 x 106 x 2 = <font color=red>1696 MB/Sec</font>. Much faster than AGP 4x.
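If you want to check the arithmetic yourself, here is the same formula as a tiny Python sketch. The 106 MHz FSB is the figure from above; the 66 MHz base clock and 32-bit width for AGP are the standard published figures, but treat the exact outputs as approximations:

# Bandwidth = bus width (bytes) x clock (MHz) x transfers per cycle.
# The result comes out in MB/sec because the clock is in MHz.
def bandwidth_mb_s(width_bytes, clock_mhz, transfers_per_cycle):
    return width_bytes * clock_mhz * transfers_per_cycle

# 64-bit (8-byte) Athlon FSB at 106 MHz, double-pumped:
print(bandwidth_mb_s(8, 106, 2))   # 1696 MB/sec

# AGP is a 32-bit (4-byte) bus on a 66 MHz base clock:
print(bandwidth_mb_s(4, 66, 2))    # 528 MB/sec  (AGP 2x)
print(bandwidth_mb_s(4, 66, 4))    # 1056 MB/sec (AGP 4x)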

Now as to why AGP 4x seems no faster than 2x, the link to Tom's article that Noko gave you is the most complete answer I know of. I will also make a few notes of my own that will hopefully make sense. :wink:

AGP speed is only going to really impact games IF the bus is completely saturated with data. Unfortunately even AGP 4x is so slow (compared to local graphics memory) that nobody wants to make a game that relies on it. About the only way I know of to really saturate the AGP bus is to force a game to use AGP texturing. Consider the Quake III situation about a year ago when the 64MB GeForce2 was just coming out but texture compression had not yet been enabled in NVIDIA drivers. A good CPU with a 32MB GF2 card could play QIII at 1024x768x32 at a good frame rate of 60+. So could a 64MB GF2. They scored virtually the same.

The difference happened when you tried the QIII Quaver demo. This demo has a TON of textures, and the 32MB card couldn't hold all of them which made it have to constantly swap textures over the AGP bus. Though the 64MB card still produced the same score of 60+ FPS, the 32MB card dropped to about 15 FPS. OUCH!

You see, the bandwidth of the local graphics memory on the plain GF2 is 5856 MB/Sec, about 455% faster than AGP 4x. Therefore game designers obviously prefer to design their games to fit in the local graphics memory, greatly reducing the reliance on AGP speed.
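As a quick sanity check on those numbers in Python (note: the 183 MHz memory clock is just my inference from the quoted 5856 MB/Sec figure, not a published spec):

# Plain GF2 local memory: 128-bit (16-byte) DDR bus. A ~183 MHz clock,
# double-pumped, reproduces the quoted figure (inferred assumption):
local = 16 * 183 * 2                      # 5856 MB/sec
agp4x = 4 * 66 * 4                        # 1056 MB/sec
print(round((local / agp4x - 1) * 100))   # 455 (percent faster)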

Now using a 32MB card (without texture compression) and the Quaver benchmark, you could probably see the difference between AGP 4x and 2x. But the difference would be like 10 FPS compared to 15 FPS, so who cares? :smile:

This is my basic take on the "why" of the situation. It is a little simplified, but this post is long enough already!

Regards,
Warden

===========
The sum of the IQs on this planet is a constant--only the population is increasing...
 
Ok here is my latest stance on the whole Kyro II situation, and it is basically written to Teasy:

I changed my mind on something I said to you earlier. After reading Tom's article on the GeForce 3, I was under the impression that the Vertex Shader would only be used as needed for certain special effects, and that it would NOT replace hardwired T&L throughout a game. Well I now believe this to be wrong. Listening to you got me to look around more at this, and after reading several more reviews and some white papers on NVIDIA's website, I think that the GF3 Vertex Shader can, and is intended to, completely replace standard T&L in DX8 games.

I still don't see this as making today's hardwired T&L engines less important though. Think about it. We now have three T&L technologies:

1. CPU powered T&L
2. "Hardwired" T&L
3. "Vertex Shaders"

The game designers I quoted much earlier in this thread complained of the difficulty of making game engines run well on two completely different systems (software vs. hardware). I think it is a safe bet that many won't want to tackle three systems in one game. They can't drop <i>both</i> of the older technologies or they would eliminate 99% of their market. The question then is which one <i>will</i> be dropped. I think that software mode will be dropped, for the obvious reason that it is the oldest and by far the slowest of the three technologies.

You have made arguments against this before and I will attempt to answer some of those now:


1.
<font color=blue> hardwired T&L will be phased out for the simple reason that it is hardwired, at least CPU's are programmable and as they get faster they can always be used for things like vertex shaders, thats something that can't be said for hardwired HW T&L. ...I'm sure in a year CPU's will be just as fast (if not faster) than the Geforce 2 HW T&L unit...</font>
Well, first off you seem to think that "hardwired" is bad and that programmable is the wave of the future. I believe this shows a misconception about hardware and software. "Hardwired" is always faster than "programmable," and is used in the technology world whenever possible for that reason. "Programmable" is used only when more flexibility is needed, and it comes at the price of reduced speed. Programmable features are now being added to graphics chips because the chips have finally matured enough to allow this while maintaining playable frame rates--but this does <i>not</i> mean that "programmable" is faster than "hardwired." If all we needed was programmability, we would just scrap graphics cards and use the CPUs we already have. Instead, the reason that graphics accelerators were invented in the first place was that the highly programmable architecture of a CPU could not render images fast enough. Therefore, specialized "hardwired" graphics cards came along that could rip a CPU to shreds--and this was back in the original Voodoo days when CPUs were quite advanced compared to graphics chips.

Jump ahead to today and graphics chips have met and surpassed the level of modern CPUs, and they are still specialized to do one thing: process graphical images. A large chunk of the transistors in these GPUs is specialized just for doing T&L. Regular desktop CPUs are not going to be up to this kind of speed for a long time, and between now (well, a year from now) and then, graphics cards without some kind of hardware T&L are going to suck. Even once CPUs are capable of GF2 T&L speeds, it will be far enough in the future that that kind of speed will be irrelevant, unless you only play five-year-old games.
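For anyone curious what that chunk of transistors actually computes, here is a minimal software sketch of the per-vertex transform-and-lighting math in Python/NumPy. The triangle, matrix, and light direction are toy values of my own; this illustrates the workload, not any vendor's actual pipeline:

import numpy as np

# What a "hardwired" T&L unit does for every vertex, every frame --
# and what a CPU must grind through itself in software T&L mode.
def transform_and_light(vertices, normals, mvp, light_dir):
    # Transform: apply a 4x4 model-view-projection matrix to each vertex.
    v_h = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous coords
    transformed = v_h @ mvp.T
    # Light: simple diffuse term, N dot L clamped at zero.
    diffuse = np.clip(normals @ light_dir, 0.0, None)
    return transformed, diffuse

# Toy data: one triangle facing the light (hypothetical values).
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
norms = np.array([[0., 0., 1.]] * 3)
mvp = np.eye(4)                 # identity "camera" just for the sketch
light = np.array([0., 0., 1.])
print(transform_and_light(verts, norms, mvp, light))

Multiply that by tens of thousands of vertices per frame at 60 frames per second and you can see why dedicated silicon wins.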


2.
<font color=blue> No games can really not work on a non HW T&L card, if anyone was stupid enough to make a game that didn't allow SW T&L then a simple driver trick like geometry assist could be added and the CPU could use the game's HW T&L engine...</font>
At first I simply blew off this "driver trick" point as being baseless, but your later post about 3dfx successfully experimenting with this made me think twice. I would like a link or some other source of information where I could read up on how this worked. I must admit that I am still very dubious of the idea as it just doesn't make sense to me. But if you can hook me up with some good information about it I will certainly change my stance.

As for it being "stupid" to produce games with no software T&L, I completely disagree. When these games come out (which I am saying is within a year) hardware T&L will have been around more than enough time to be considered average, even "old." Every new technology eventually becomes required (unless it flops and disappears) and hardware T&L will have had 2.5 years. Remember, even though the average card sold now is a TNT2, that is not the average card purchased by 3D GAMERS. Even casual gamers have shown themselves willing to buy one generation ahead of the masses. Rather, I think it would be very stupid to drop support for hardware T&L.


3.
<font color=blue> The X-Box will be using vertex shaders too, and you think that in 1 year from now X-Box games ported over to the PC won't add SW T&L support but will add hardwired T&L support? </font>
Yes this is exactly what I think they will do, and it's what I think DX8 PC developers will do too. This goes along with my point in the last paragraph about how this wouldn't be "stupid," and I have a couple of reasons I'll state:

Hardware T&L will still support the high number of polygons that the Vertex Shader will, which means designers won't have to redo all their models like they would for a software mode. Even if a lot of games start supporting a scalable polygon architecture like Sacrifice does, I don't think it will change things much. To have to design a game that could lose 90% of its polygons on some systems, yet still play the same, would be extremely annoying at best. As polygon counts get higher we will find them more and more comparable to screen resolution in their importance. Can you imagine trying to make a game that would still be playable if it lost 90% of its screen resolution? Simply losing the Vertex Shader effects would be much less of a design problem I would think.

Also, the CPU in the Xbox, and in T&L-required PC games, will be put to use doing other things. A couple of full-blown physics modeling programs are now being ported from the scientific community for use in games. Not all games will use these of course, but they are indicative of the direction games are taking. Developers are never content to leave CPU cycles sitting about unused, and they will fill them with all sorts of physics and AI calculations, etc. So designing these games for a software mode would not only require a steep reduction in polygons, but also in these other areas. I just don't see software T&L support lasting for long.


I can't, of course, end this without reprinting that quote from the conclusion of Tom's <A HREF="http://www.tomshardware.com/graphic/01q2/010419/geforce3-18.html" target="_new"> latest article</A> on the GeForce 3, since he says almost exactly what I have been saying:
<font color=red> Your current 3D-card will most certainly be able to run the 3D-games of the next 6-12 months just fine, especially if it has a GeForce, GeForce2 or Radeon based architecture and thus T&L. Without T&L you might be able to play today's games, but I doubt that any of the new game engines is going to appreciate 3D-cards without T&L anymore. Keep that in mind if you are considering a Kyro2 card. </font>
Once again, I do not hate the Kyro cards by any means, and I am glad their technology is on the scene shaking things up. I just wouldn't recommend buying one until T&L (or better) is included.

Regards,
Warden

===========
The sum of the IQs on this planet is a constant--only the population is increasing...
 
Hey, I rushed here to quote the last paragraph from Tom's GF3 benchmark, but it seems two people already beat me to it.

I agree with you on the Fixed (Hardwired) vs. Programmable issue. The CPU is perfect for programmability. There is, however, a compromise going on these days with vector SIMD; all the 'modern' CPUs have it. The Vertex Shader is more comparable to this.

I don't think it will completely replace the fixed GPU in the lifetime of the GF3 or either of the next 2 GPUs that follow (we can only imagine what features anything after that will contain). I believe the vertex shader will be used in combination with the fixed T&L. This will give programmers the ability to use the speed in the standard environment and the flexibility in the eye candy. When used properly, I believe the two will complement each other very well. The usage can be similar to the combined use of the general CPU instructions with the SIMD instructions.


<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and dispair!"</font color=red>
 
That was already answered by Teasy:

"Yeah there is currently a problem with the Kyro II and DX8 games, it isn't a problem with the Kyro II hardware or drivers but actually a problem with DX8, the first official release of DX8 wrongly has information in it that the Kyro II can't render into a texture, this is used quite a bit in games, its used allot in 3dmark 2001, for a DX7 game its fine because DX7 knows the Kyro II can render into a texture but when playing a game that uses DX8 the Kyro II has to use the CPU for things like shadows (I think thats something that rendering into a texture is used for), "

That will penalize the Kyro 2 score in 3DMark 2001...

But even then, with a good CPU, the GeForce 2 MX loses to the Kyro 2 (at higher depths)...

If DroneZ is an example of future games, then I will keep a Kyro 2 for at least 12 more months (or even more).
Almost 60 fps with a slow CPU (750 MHz) at 1024x768 is not bad at all...
I am sure the GeForce 2 MX isn't able to do that in that pixel shader game...
The cheapest GeForce MX I can buy here in Portugal (or even in Europe) is about 150 US$...
...
Kyro 2:
for today's games, it will rock!
for tomorrow's games, I will buy a new card...
(maybe not 😉 )

About that reference to the Kyro 2:
if the Kyro is so bad, why doesn't he test it?
Tom, do a Kyro review (in English); whether the results are good or not, you must show us the truth!!!
Don't hide the TRUTH!!!
Are you waiting for a newer game that stinks on the Kyro 2 before you bench it???
Maybe you must wait more than a year!!
There are many TNT-1s, TNT-2s, Matrox cards, etc. out there, more than 80% of the market...

And the GeForce MX, with those bandwidth issues, only counts below 800x600 at 32 bits, and that is not good for me... I bought a 19-inch monitor, so I want higher resolutions and depths...
😉
The Kyro could be the AMD of graphics cards, lowering prices to where they should be!! Low!!
But only if we support the Kyro 2...
If not, then we must save 1000 US$ for the next GeForce 4...
or get a GeForce 2 MX (it's a bad product, bad).
ATI is charging almost the same prices as NVIDIA, so ATI doesn't count...


<P ID="edit"><FONT SIZE=-1><EM>Edited by powervr2 on 04/20/01 10:54 AM.</EM></FONT></P>
 
Your current 3D-card will most certainly be able to run the 3D-games of the next 6-12 months just fine, especially if it has a GeForce, GeForce2 or Radeon based architecture and thus T&L. Without T&L you might be able to play today's games, but I doubt that any of the new game engines is going to appreciate 3D-cards without T&L anymore. Keep that in mind if you are considering a Kyro2 card.
True to a point, but not applicable to the market the KYRO II is fighting against. Let's talk about new engines, shall we:

Serious Engine = KYRO plays very well
Black & White engine = KYRO plays extremely well
Tribes 2 = excellent performance
Blade of Darkness engine = outstanding performance

Upcoming games:

Duke Forever: KYRO just loves the UT engine
Any game built using UT engine, Quake 3 engine, LithTech 2 engine, etc...
Soldier of Fortune 2 - Quake 3 engine

Now the question becomes... What is the better deal? A KYRO 1/2 board or an NVIDIA MX/GTS board? You can talk about T&L all you want with the MX board and how it will be better off with future games since their engines will support hardware T&L. But let's not forget to include the increased depth complexity of these titles as well, which stresses these memory-bandwidth-limited boards even further. T&L will do squat if the board is already stressed due to bandwidth limitations. So far the KYRO II has shown itself to be a very good performer in games that use T&L, and I cannot see this changing in the near future.
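To put rough numbers on "bandwidth limited," here is the width x clock x transfers formula from earlier in the thread, applied in Python to the commonly quoted board specs (the clock figures are my assumptions, so treat the results as ballpark):

def bandwidth_mb_s(width_bytes, clock_mhz, transfers_per_cycle):
    return width_bytes * clock_mhz * transfers_per_cycle

# All three boards use 128-bit (16-byte) memory buses (assumed specs):
print(bandwidth_mb_s(16, 166, 1))  # GeForce2 MX,  166 MHz SDR: 2656 MB/sec
print(bandwidth_mb_s(16, 166, 2))  # GeForce2 GTS, 166 MHz DDR: 5312 MB/sec
print(bandwidth_mb_s(16, 175, 1))  # KYRO II,      175 MHz SDR: 2800 MB/sec

On paper the MX and KYRO II are nearly even; the difference is that the KYRO's tiler doesn't waste its share on overdraw.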

So Tom's statement at the end of the GeForce 3 article is nothing more than an uncalled-for crack at the KYRO II board, based solely on personal feelings and not fact. Can Tom predict how much T&L will be a factor in games over the next year? Can Tom actually promote an MX board over a KYRO II board, knowing the MX board is bandwidth stressed in a way that will only become more apparent over the next year? Now that NVIDIA has released the MX 200/400 boards, can you honestly say that developers will be coding their games outside the capabilities of these boards? I highly doubt it, as NVIDIA wants these boards to become OEM favorites. So if you are going to believe statements such as the one Tom made at the end of the GeForce 3 article, without any basis in fact, you must be wearing a bib, as Tom is spoon-feeding you info that you are gobbling up without question.

BTW, this KYRO II comment at the end of the article was uncalled for and is just a blatant attack on the KYRO II board. The KYRO II was not used as a reference in the review at all, yet the reviewer had the gall to make such a statement. I have to wonder how much influence NVIDIA has on this site, and such statements without proof of any kind lead me to believe that reviews done here may be a bit biased towards NVIDIA.

Which, BTW, supports the feelings I have read about this site on many other message boards around the net. A lot of people have lost respect for Tom's reviews, as they feel this site is sitting in NVIDIA's bar room.
 
Yeah !!!

Tom's was sold to NVIDIA; the only interesting thing here is this topic...

The Kyro 2 deserves a review after he attacked it!!!!
If those future games aren't out yet... then we will not see a review so soon...
because HE WANTS TO GIVE US A BAD PICTURE OF IT. THE PROBLEM IS HE CAN'T DO THAT...
maybe not for 12 more months...
I will only return to tomshardware.com after his future Kyro 2 review...
Well, he attacked the Kyro 2 WITHOUT FACTS!!!
GIVE US THE FACTS!!!

Tom said:
"I personally recommend GeForce3 to all the ones of you who are really able to appreciate the new effects that GeForce3 can provide. "

In what game? I am living in the present now...
<P ID="edit"><FONT SIZE=-1><EM>Edited by powervr2 on 04/20/01 11:12 AM.</EM></FONT></P>
 
Your last post helped clarify a lot of things. I think I understand. Thanks.

It bothered me that the classic explanation for AGP 4X not seeming to be any faster than AGP 2X was that applications do not fully support AGP 4X. I have been hearing this for several months. I thought it strange that applications wouldn't use AGP 4X if it was indeed faster. Now, thanks to you, I understand that games don't rely on AGP because, even at 4X, AGP is just too slow.

It took a lot of drumming but I finally got it.

Thanks again.

"My brain hurts" - Monty Python
 
I can only hope Tom is putting off a Kyro-II review until the DX8 bug you mention is straightened out and he can give it a fair review. I too took exception to Tom's closing remarks and considered them a cheap shot off the cuff with nothing to back them up. I have defended Tom's reviews adamantly before, but even this begins to make me wonder... Maybe that was one of NVIDIA's stipulations: "Tom, we will give you a GeForce3 to review, but you have to say something negative about the Kyro-II."
At any rate, it is rather obvious from the length of this thread, as well as others, that his readers really want a review of the Kyro-II, so what is the holdup? Instead he gives us articles about streamers and the LAN-plus?

A little bit of knowledge is a dangerous thing!
 
"kyro could be the amd of the graphics cards lowering the prices to where they should be !! low !!
if and only if we suport kyro 2 .."

One thing must be said here: if Kyro is the AMD of graphics cards, then they are in the K6-2/3 phase. They really need to get their product to market while it is commercially viable for them, and they seem to be a bit off in this department. This marvelous technology is not going to survive unless the techs at Kyro can kick it in the butt a bit and get their next revision to market while it still has a chance. How many people are going to buy a video card for the sole purpose of supporting a company? They may be better off asking for donations if that were the case.

A little bit of knowledge is a dangerous thing!
 
Warden, I don't have a link for the geometry assist thing, but I could find one if you really want one. You mean you really haven't heard of 3dfx's geometry assist? It was quite a big thing when it was first included in the public Voodoo5 drivers.

I'm not going to reply to everything you said point by point, because I feel we've both made our points clear to each other (at least I hope I have made my point clear, and you have certainly made yours clear to me); I suppose we'll have to see how this unfolds in future games. One thing though: did you have a look at those DroneZ numbers? That game is one of the first true DX8 games with vertex and pixel shaders, and it won't even be out for quite some time, so it's not fully optimised for speed, and yet the Kyro II runs it easily: 1024x768x32 at 60 fps on a 700 MHz P3, and 35 fps at 1024x768x32 with 4xFSAA. So far DX8 games don't seem to be too big a problem for the Kyro II. Do you think an MX or even a GTS would get 35 fps at 1024x768x32 with 4xFSAA? I don't; the MX certainly wouldn't even come close.


This is just a general comment to everyone:

For the next year the Kyro II (just like the GTS) will be a good card that will play anything well at high res. The MX won't be able to do this; it may have a T&L unit, but future games are still going to use more fillrate and memory bandwidth, and HW T&L can't save you if you run out of either one of those.

I'll add one more point: what good is a card with a powerful T&L unit if it can't get rid of overdraw? Think about it. You can add detail to the small number of objects you have on screen in current games, but you can never use that T&L power to make games more like real life by adding more objects. Why? Because while the card might find the extra T&L processing easy, it won't be able to handle the overdraw that all those objects create. Just think what a Kyro II or PowerVR 4 could do with a powerful HW T&L unit: rather than just adding polygon detail to the sparse games we already have, you could have games with massive, rich worlds full of villages with hundreds of people walking around, fully interactive houses full of their own objects like fridges and cupboards and TVs and VCRs and people, and huge forests with thousands of trees.

What I'm trying to say is that HW T&L will always be limited until it's bonded with a tiler, and a tiler will always be limited until it's combined with HW T&L. Up the poly count too much and you're very likely upping overdraw; up the overdraw too much and you're upping the poly count. Tilers and HW T&L are a dream combination, and they need each other if games are ever going to get truly absorbing. How can anyone be absorbed in a game with houses you can't go into, or that are empty, or have one table in them? It's pathetic, and it's all done to save the traditional card from being stupid and rendering stuff that isn't seen. So IMO we need to get rid of these stupid traditional renderers.

My opinion on the Kyro II is that it isn't perfect, that much is obvious; it will start to need HW T&L at some point, but it won't run out of fillrate and memory bandwidth as quickly as the GTS and MX will, so there's a tradeoff with both cards. On the future of PowerVR, just wait till NP2 or NP3, the next cards from PowerVR. If some of you are thinking IMGTEC will bring out another Kyro-based card, maybe with a standard HW T&L unit bolted on, then you're wrong; that's all I'll say.
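To make the overdraw point concrete, here is a rough back-of-the-envelope Python sketch; the overdraw factor of 3 and the 350 Mpixels/sec fillrate are illustrative assumptions, not measured numbers:

# A traditional renderer shades every pixel of every polygon, visible or
# not, so overdraw divides its effective fillrate. A tiler determines
# visibility per tile first and shades only what is actually seen.
raw_fillrate = 350.0   # Mpixels/sec (illustrative, roughly Kyro II class)
overdraw = 3.0         # assumed average depth complexity of a scene

traditional_effective = raw_fillrate / overdraw   # ~117 Mpixels/sec on screen
tiler_effective = raw_fillrate                    # overdraw eliminated
print(traditional_effective, tiler_effective)

And note that the overdraw factor only grows as scenes get more cluttered, which is exactly the direction games are heading.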
 
"My opinion on the Kyro II is it isn't perfect that much is obvious, it will start to need HW T&L at some point, but it won't run out of fillrate and memory bandwidth as quickly as the GTS and MX will so there's a tradeoff with both cards. On the future of PowerVR, just wait till NP2 or NP3 the next cards from PowerVR, if some of you are thinking IMGTEC will bring out another Kyro based card maybe with a standard HW T&L unit bolted on, then your wrong thats all I'll say."

This sounds great; I would love to see it. But the million-dollar question is when? Will we see such a card after, say, the GeForce 6? When programmers have already adopted whatever programming is needed to write games for NVIDIA's style of rendering? They really need to get their product to market faster if they hope to survive.


A little bit of knowledge is a dangerous thing!
 
I agree with you that the technology is in an early phase, but I think it is more the managers' fault. They have never been too good at completing projects on time. From the first of their products (when they were Videologic), the PowerVR, they had scheduling problems: they had to release a product early because of the Voodoo chip due from 3DFX. They then released the completed chip, named the PowerVR PCX2, which still couldn't do much against the Voodoo.

The less said about the PowerVR Series 2 (Neon) the better. The Kyro, after a long time in development, was supposed to be the saviour. Now here we are with the Kyro 2, perhaps what the Kyro 1 was supposed to be, half a year ago.

Each and every time, the release was far later than intended. They just can't manage their development and divide their resources evenly. If they had managed everything right, they probably would have had a T&L card out right now, and they probably would have made great competition for nVidia and ATI. But none of that happened, and here we are with the Kyro 2.


<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and dispair!"</font color=red>
 
The sad part of it is I love the technology. I think it very well could be the best route to follow. Unfortunately, it would not be the first time that a superior idea went down the tubes because it was a day late and a dollar short. Not many people are going to buy a card merely because they want to see the company survive (although I will). It very well could be that they just don't have the resources available to produce these cards in a timely fashion. Maybe they should consider getting some outside help before it is too late and NVIDIA rules the world. Maybe someone like Matrox, who at least has brand-name recognition. We really do need another player in this market.

A little bit of knowledge is a dangerous thing!
 
Come on, Tom just mentioned something to consider regarding the Kyro II's lack of T&L. It wasn't a cheap shot or a crack as far as I see it. New game engines will use T&L more, as in Max Payne and Halo. It's not the end of the world as far as I see it, just a simple awareness statement. I don't think all the game engines out there will convert to hardware T&L overnight, or even in one year, but the process has started, and more games will need hardware T&L as time goes on. Saying Tom was paid off by Nvidia when you really don't know is a cheap crack. Plus, Tom probably did do a complete investigation of the Kyro II, so the remark given may very well be an accurate, educated conclusion. If anyone really doubts Tom's integrity, then my recommendation is to move on to another board.
 
Whether or not it was intended as a cheap shot is one thing; it being perceived as a cheap shot is another. I myself found the comment suspect, at least coupled with the fact that he had absolutely no data to back it up. Perhaps soon he will. I think it would have been a fair statement to add to a Kyro-II review, if such findings were conclusive in his tests. However, to stick a mention of that at the end of a GeForce 3 review, one in which there were absolutely no test results or even a mention of the Kyro-II anywhere to be found except in his closing statements, was out of place and rather odd.

A little bit of knowledge is a dangerous thing!
 
Are they really having financial difficulties? If the big shots (nVidia and ATI) smell the walking wounded, you know that they'll move in for the kill. If there should be a takeover, I hope it would be ATI. nVidia already have tiler technology, as well as more cool technology from 3DFX; ATI could do with some extra tech, such as their own tiler.

Of course it still would be nicer to see Imagination Tech hold out until they can release a product capable of standing against the best cards.


<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and dispair!"</font color=red>
 
<<<<<<<Each and every time, the release was far later than intended. They just can't manage their development and divide their resources evenly. If they had managed everything right, they probably would have had a T&L card out right now, and they probably would have made great competition for nVidia and ATI. But none of that happened, and here we are with the Kyro 2.>>>>>>

That's not totally true. IMGTEC are now getting faster with every release. Remember that the company that made all the chips before the Kyro was a totally different company from the one producing them now: back then it was NEC. Yes, the company designing the chips is the same (IMGTEC), but they don't decide which chip will come out and when. All they do is show NEC their designs; NEC would then decide whether they wanted a budget design or a push for a top-of-the-range design, and when the product would be released, then NEC would buy the rights to the design they liked and make it. Now ST Micro are in NEC's place, so you can't judge IMGTEC's cards by anything before the Kyro. With the Kyro II, all that happened was that ST Micro (already owning the rights to the Kyro) simply put the Kyro on a 0.18-micron process and got IMGTEC to refine the chip for the speed increase. So it wasn't IMGTEC's decision to release a Kyro II or to leave out HW T&L; it was ST's decision. If ST had thought a Kyro II with HW T&L would have been worth the extra cost, they'd simply have paid IMGTEC for the rights to their HW T&L unit and had IMGTEC put it on the Kyro II for them. Obviously they didn't want to pay IMGTEC for it this generation; they just wanted to release a cheap part.

The Neon250 was never released to be competitive anywhere but in the U.K., because they were then partnered with NEC and at that time they couldn't launch much of an attack on the PC market; also, they didn't want to, because they'd rather have been making DC chips. Now they're partnered with ST Micro, one of the biggest chip makers in the world, and there's no console in the way. Just look at how long it took IMGTEC to bring out the Kyro after the Neon250 (a long, long time) and how long it took ST to bring out the Kyro II after the Kyro (not very long). They're speeding up now that they really want to attack the graphics card market. They'll have their next product out in no more than 4-5 months, and it will not be a Kyro II with HW T&L; it'll be much, much more. I can't say any more though.

EDIT: I just want to add that IMGTEC also get royalties for every chip sold, as well as money up front from ST.

<P ID="edit"><FONT SIZE=-1><EM>Edited by Teasy on 04/20/01 05:02 PM.</EM></FONT></P>
 
"they'll have there next product out in no more then 4-5 months and it will not be a Kyro II with HW T&L, it'll be much much more, I can't say anymore though."

But even if this is correct, NVIDIA will have a GeForce 4 out; remember, they are on a 6-month cycle. And we are still looking at a month before the Kyro-II is released, or at least commercially available. I am not slamming the Kyro here; I like it, but even with what you say, this is not fast enough.


A little bit of knowledge is a dangerous thing!
 
That was merely speculation on my part, partly due to the fact that their previous cards have not been a commercial success, and partly because they do seem to be behind in development. This may hint at a money/resource problem, though it may not be the case at all. The bottom line is they will need to make money sooner or later; can anyone shed some light in that regard? How is their bottom line? If you scroll up some, I kinda hinted at this earlier: I speculated that at present their solution may be a great idea for the integrated video market. Limited bandwidth requirements and relatively cheap cost all seem to add up to a very viable product for this market, and it would allow for further development of future KYRO products. Albeit not very glamorous, it is profitable. Especially seeing how the bandwidth limitations of current onboard video solutions really degrade performance, maybe an onboard Kyro solution would be a real improvement here. Any insights into this, anyone?

A little bit of knowledge is a dangerous thing!
 
They're in no financial trouble at all, and there's no way the PowerVR cards could put them in financial trouble. Imagination Technologies are a company that make sound cards, speakers, digital radios, and graphics cards, and of course they also design graphics architectures (PowerVR). The architectures are sold to ST Microelectronics (the 5th biggest chip producer in the world), who then make the chips and sell them to board makers (one of those board makers is Videologic, who are part of IMGTEC), and IMGTEC also get further royalties for every chip sold. IMGTEC are no 3DFX; they're far too stable, with all their other interests, to go down even if PowerVR cards totally failed, because they only design the chips; they don't spend money making the actual chips. But obviously if the cards failed badly enough, then maybe nobody would want to make their chips anymore. For a more detailed description, look in that Beyond3D Kyro II preview I posted; it explains all the companies involved.