Kyro 2 the killer of nvidia ???

Guest
Don't you think Max Payne will be looking to run at more like 53fps at 1024x768x32 on a Radeon DDR, or maybe around 40fps, not 25fps, and only 29fps at 640x480 (and that's with HW T&L enabled)? The game won't sell if people with Radeon DDRs need to use 640x480x32 just to get it barely playable. I'd guess the game will be more like a medium-detail version of those low- and high-detail tests in 3DMark2001. Also, the high-detail test doesn't just add more polys: it adds Dot3 to the floor, which takes a lot of fill rate but isn't related to the CPU at all. In the high-detail tests at 1024x768x32, both your HW T&L and SW T&L scores only dropped by about 3fps when you took your CPU speed down 200MHz. That tells me high detail in Max Payne isn't just about more polys; it's also much more fill-rate limited than the low-detail tests. So perhaps the Kyro II, with its great high-res performance, would do better than we all think.

I really want to post something now but I can't. I have thorough benchmark numbers for DroneZ (a DX8 vertex shader game that uses 4 texture layers) on a Kyro II and a P3 700. This game is one of those GeForce 3 targeted games that also uses pixel shaders (although the pixel shader effects just don't work on anything but a GeForce 3). The benchmarks are from the rolling demo, which is only available to the press. I can't post them now because they're from a Kyro II review that's not yet released, and I wouldn't want the numbers getting onto any news sites or anything. Hopefully in a day or two the review will be posted and then I can post them here. All I'll say is they're very interesting, considering some people think the Kyro II won't handle DX8 games very well.
 

noko
There are a lot of interesting things going on here which I am glad you noticed. The particle physics involved in calculating the massive effects from all the gunfire does not change from one resolution to the next, <i>causing a relatively flat frame rate for SW T&L from 640x480 to 1024x768; then the bandwidth problems of the Radeon come into play at higher resolutions.</i> Meaning we have a constant in this game, which is unique. So this game is heavily CPU dependent on those effects. Control those effects in the game (fewer bullets, less smoke, fewer flying chunks of plaster) and the CPU is freed up: faster FPS. HW T&L just allows the CPU to do more calculations on the physics, resulting in the speedup you see. When I upgrade my CPU to 1.33GHz (overclocked to who knows how high) the FPS should be much higher. I am sure the game will offer sliders to adjust the detail of the effects, meaning a playable frame rate will be had. Now, AnandTech's benchmark of the Kyro II in 3DMark2001 with a 1GHz T-Bird showed very poor performance, and this game test was part of it. If that test is accurate then the Kyro II may have some serious problems with this DX8 game. Still, Kyro II drivers could change dramatically and get DX8 results up.
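To put rough numbers on that idea (a back-of-the-envelope sketch only, with made-up figures, not measurements from the game): think of each frame as costing a fixed amount of CPU time for physics and effects, plus a GPU time that grows with resolution. The frame rate stays flat while the CPU side dominates and only starts falling once the GPU side overtakes it, which is exactly the shape of those SW T&L numbers.

// Back-of-the-envelope model: a frame is ready when BOTH the CPU work
// (physics, particle effects -- resolution independent) and the GPU work
// (fill rate / bandwidth -- grows with resolution) are done.
// All figures below are invented purely to illustrate the shape of the curve.
#include <algorithm>
#include <cstdio>

int main() {
    const double cpuMs = 25.0;          // per-frame CPU cost (physics/effects), assumed
    const double gpuMsPerMPix = 20.0;   // GPU cost per million pixels, assumed

    const double resolutions[][2] = { {640, 480}, {800, 600}, {1024, 768}, {1280, 1024} };
    for (const auto& r : resolutions) {
        double mpix    = r[0] * r[1] / 1.0e6;
        double gpuMs   = gpuMsPerMPix * mpix;
        double frameMs = std::max(cpuMs, gpuMs);   // the slower side sets the pace
        std::printf("%4.0fx%-4.0f  %5.1f fps  (%s-bound)\n",
                    r[0], r[1], 1000.0 / frameMs, gpuMs > cpuMs ? "GPU" : "CPU");
    }
    return 0;
}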

<P ID="edit"><FONT SIZE=-1><EM>Edited by noko on 04/15/01 10:02 PM.</EM></FONT></P>
 
Guest
Yeah, there is currently a problem with the Kyro II and DX8 games. It isn't a problem with the Kyro II hardware or drivers but actually a problem with DX8 itself: the first official release of DX8 wrongly reports that the Kyro II can't render into a texture. Render-to-texture is used quite a bit in games, and it's used a lot in 3DMark2001. For a DX7 game it's fine, because DX7 knows the Kyro II can render into a texture, but when playing a game that uses DX8 the Kyro II has to use the CPU for things like shadows (I think that's one of the things rendering into a texture is used for), while on any other card the shadows will be done in hardware. ImgTec have contacted Microsoft, and they have said they will fix this in the next DX8 release. When that happens the system CPU will be freed from doing that work in DX8 games, and the Kyro II's CPU dependency in DX8 games will be a lot less. Not that the CPU dependency is that bad at the moment, which is something I'll show when I post those DroneZ benchmarks.
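For anyone curious what that looks like from the game's side: a DX8 title typically asks the runtime up front whether the card can render into a texture, and takes the CPU path if the answer is no. Below is a minimal sketch against the public Direct3D 8 API; the helper name CanRenderToTexture and the fallback comments are just illustrative, not code from any actual game. The point is that if the runtime mis-reports the capability, the game falls back to the CPU even though the hardware could do the work.

// Minimal sketch: ask Direct3D 8 whether the adapter can use a texture
// as a render target. If the runtime wrongly reports this as unsupported
// (as described above for the Kyro II), the game takes its slower CPU path.
#include <d3d8.h>

bool CanRenderToTexture(IDirect3D8* d3d, D3DFORMAT displayFormat) {
    HRESULT hr = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT,        // primary adapter
        D3DDEVTYPE_HAL,            // hardware rasterization
        displayFormat,             // current display mode format
        D3DUSAGE_RENDERTARGET,     // we want to render into it...
        D3DRTYPE_TEXTURE,          // ...as a texture
        displayFormat);            // texture format to check
    return SUCCEEDED(hr);
}

// ...later in the renderer (illustrative only):
//   if (CanRenderToTexture(d3d, mode.Format))  do shadows on the card;
//   else                                       fall back to the CPU path;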

<P ID="edit"><FONT SIZE=-1><EM>Edited by Teasy on 04/15/01 07:44 PM.</EM></FONT></P>
 

noko
That would be great; hopefully Microsoft will have the DX8 fix available before the Kyro II is. As soon as I can I will be doing those FSAA tests, which will probably shed real light on the poor FSAA performance of the Radeon.

<P ID="edit"><FONT SIZE=-1><EM>Edited by noko on 04/15/01 10:00 PM.</EM></FONT></P>
 

noko
This is another test I performed comparing the two rendering options of HW and SW T&L.
This time I monitored cpu usage during the test.
<b>Tests</b>

3DMark2001 High Polygon Count, T-Bird at 864MHz, AGP 4x, W2k

<b>1024x768x32</b>

. . . . <b><font color=blue>High Polygon Count (1 light)</b></font color=blue>, in millions of triangles per second (MT/s)
. . . . . . HW T&L 4.8MT/s 100%cpu
. . . . . . SW T&L 3.1MT/s 100%cpu
. . . . . . Note: Amazing 55% increase in polygon count using HW T&L

. . . . <b><font color=blue>High Polygon Count (8 lights)</b></font color=blue>
. . . . . . HW T&L 2.0MT/s 100%cpu
. . . . . . SW T&L 1.5MT/s 100%cpu
. . . . . . Note: 33% increase in polygon count using HW T&L

Now look what happens at:

<b>1600x1200x32</b>

. . . . <b><font color=blue>High Polygon Count (1 light)</b></font color=blue>
. . . . . . HW T&L 4.6MT/s 100%cpu
. . . . . . SW T&L 3.2MT/s 100%cpu
. . . . . . Note: 44% increase in polygon count using HW T&L. The SW T&L count
. . . . . . is virtually the same.

. . . . <b><font color=blue>High Polygon Count (8 lights)</b></font color=blue>
. . . . . . HW T&L 2.0MT/s 100%cpu
. . . . . . SW T&L 1.5MT/s 100%cpu
. . . . . . Note: 33% increase in polygon count using HW T&L. The same as at
. . . . . . a lower resolution.

<b>Conclusion:</b>
This test indicates a CPU bottleneck where, once again, HW T&L allows a significant increase in polygon throughput by freeing up the CPU. Much less memory bandwidth degradation occurs in this test between the two resolutions compared to Game 3. This suggests polygon rendering ability like in a 3D modeller (3ds max, trueSpace, Maya etc.), where even at high resolutions HW T&L can significantly increase performance. (Note for m_Kelder: the GF2 cards usually show higher polygon throughput here, which you can check at MadOnion to confirm. Looking at this polygon test, that's a good reason to get a GF2 card for 3ds max modelling.)
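For what it's worth, the HW vs SW T&L switch in a Direct3D 8 test like this comes down to a single flag at device creation, which is why it's so easy for benchmarks to run both paths over identical draw calls. A bare sketch against the public D3D8 API (not MadOnion's actual code; the function name is made up and error handling is omitted):

// Rough sketch: the only difference between the HW T&L and SW T&L runs is
// the vertex-processing flag passed when the device is created. The draw
// calls are identical; transform and lighting simply happen on the GPU in
// one case and on the host CPU in the other.
#include <d3d8.h>

IDirect3DDevice8* CreateBenchmarkDevice(IDirect3D8* d3d, HWND window,
                                        D3DPRESENT_PARAMETERS& pp,
                                        bool useHardwareTnL) {
    DWORD behavior = useHardwareTnL
        ? D3DCREATE_HARDWARE_VERTEXPROCESSING   // transform & lighting on the card
        : D3DCREATE_SOFTWARE_VERTEXPROCESSING;  // transform & lighting on the CPU

    IDirect3DDevice8* device = NULL;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, window,
                      behavior, &pp, &device);
    return device;   // NULL on failure
}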
 
Guest
Yeah, when that fake CD showing hardware T&L for the Kyro II thing came about, I was like, sweet! Even a simple sphere uses 100% CPU just to rotate. Definitely need a T&L card.
 
Guest
Have you ever read a benchmark where a game is run with hardware T&L and then again with software T&L through the D3D HAL? Above about 600MHz, hardware T&L actually slows the fps down (on a GeForce 256) and the software path is faster.
http://www.hardocp.com/articles/nvidia_stuff/t&l/t&l-lights.html

http://www.hardocp.com/articles/nvidia_stuff/t&l/t&l-lights_day2.html

http://www.hardocp.com/articles/nvidia_stuff/t&l/t&l-lights_day3.html

http://www.hardocp.com/articles/nvidia_stuff/t&l/t&l-shine_pg1.html

I have a GeForce 2 Pro and these benchmarks are interesting.
 
Guest
I take this from the links you gave us:

"Now what I am wondering here is this. nVidia has totally hyped the T&L for the GeForce card, but it seems to me that the T&L could possibly be the downfall of the card WHEN AND IF it is utilized with newer games to be announced, if they in fact use more than one light"

----
"What we have basically looked at in the past articles is the fact that Hardware T&L (transform and lighting) done on the GeForce card does not really do us much good in realworld games on 550MHz+ Intel PentiumIII processors. We have also discussed the fact that using some benchmarks shows the polygons, that the card can draw, decrease substantially under a load of more than one light source."
 
Guest
Yeah, in apps where fill rate and memory bandwidth aren't a problem at all, HW T&L is a total winner and an essential feature. But I really see the Kyro II as a budget card that brings top-level performance to people who don't want to spend hundreds of pounds on a graphics card. So if apps like Maya are very important then a cheap HW T&L card like the Radeon LE or GeForce MX would be the best choice, but if games are important then there's no competition, certainly none in the UK: should I buy a GeForce MX for £80, or should I buy a Kyro II with TV-out for £85? As I said, no competition. Unless of course the person making the decision wants to play his games at 640x480 or 800x600, where the MX's horrible bandwidth problems aren't as bad and so HW T&L can actually make it faster. But I certainly wouldn't even consider playing at those resolutions. After playing all games at 640x480 with 4x FSAA (and sometimes 800x600 with 4x FSAA) on my Kyro 1 for a while, I don't even like to use 1024x768 any more: just too many jaggies and too much texture shimmering. And since I only have a 14" monitor my max res is 1024x768, so FSAA is essential for me, and we all know that FSAA really isn't a usable feature on the MX.
 
Guest
noko, why did you say this?

"Now configuring my CPU speed to a more typical system that a Kyro owner would have I did the following:"

one could go to a better CPU and a Kyro, saving some money, without losing performance... maybe even gaining some performance...
;)
good CPUs are getting cheap!!!

BTW, Dell is selling the Pentium 4 at 1.5GHz with a TNT2 M64,
and this is no joke, they go hand in hand so well!!!
LOL
 
Guest
Teasy: I saw that you mentioned that the Kyro2 probably wouldn't be a good choice for doing 3D graphics apps like Maya and such. I'm currently looking for an affordable option (100-200 USD) for my new rig as follows:

Asus CUV4X-D dual 370
2 Pentium 3 800eb
394 MB PC 133 RAM
Windows 2k Pro

Now, I was looking at the Kyro for many reasons...primarily because the game performance seemed tight, and I believe in the technology that is being pushed, not to mention affordability.

Are you implying that hardwired T&L is *essential* for performance in 3D apps? And if that is what you're saying, could you further explain why you feel the Kyro2 wouldn't be able to handle rendering complex geometry in preview mode in some of those 3D apps?

Thanks. BTW, a "carriage return" is simply hitting the ENTER key after a few sentences. :)
 
Guest
hey teasy and powervr2, I just realized that you guys are from anandtech. No wonder me wuz goten confuzed wear me wuz at.

T&L is essential in 3D apps; like I said in your post about it, any time you can take weight off the CPU it's a good thing. The Kyro2 just doesn't have the brute force of the nVidia/ATI chips. I think this could be a reason why T&L isn't in the Kyro2: putting T&L work onto a weak vid card wouldn't help.
 
Guest
okay cool...thanks. This has helped me make up my mind. Kind of a bummer too, but I guess I can wait til Kyro3 comes out and see how that baby performs.

thx for the feedback
 

Ncogneto
First I must say this has definitely been one of the best threads I have read in a long time... nice work guys. Now I have a question, which I will direct at noko, concerning this quote:

Now configuring my CPU speed to a more typical system that a Kyro owner would have I did the following:
-Changed my cpu multiplier to the minimum of 5x and since I have my FSB configured to 133mhz my cpu speed became a 665mhz T-Bird.

It would seem that here you are making the assumption that only the owners of slower systems might be interested in the Kyro II. Interestingly enough, though, could it be possible that the opposite is true? For instance, several reviews have shown how the GeForce cards scale with CPU speed: after somewhere around 850MHz (this done on an Athlon T-Bird: http://www4.tomshardware.com/graphic/01q1/010302/geforce-04.html) they fail to offer any substantial increase in performance.
From what I have seen so far of the tests done with the Kyro II, it seems to scale very nicely. Unfortunately I have yet to see a test in which this card was run on a top-of-the-line system (P4 1.5 or Athlon 1.3 with 266FSB DDR memory).
What I am wondering is: after the offerings from nVidia and ATI have reached their scaling peak due to their memory bandwidth limitations, might the Kyro II actually pull away from these other cards (and I very well could be wrong), perhaps because the more powerful CPU can do its T&L calculations faster while still not being limited by memory bandwidth issues?

A little bit of knowledge is a dangerous thing!
 
Guest
No, I'm certainly not saying the Kyro II couldn't handle 3D apps. What I am saying is that if 3D apps are the most important thing to you, or if they're the only thing and you only very occasionally play games, then HW T&L can help a lot to speed things up even on a weak card like the MX, because 3D apps, unlike games, aren't as fill-rate or bandwidth limited even at high res, so a HW T&L unit can show its advantage. The Kyro II would certainly be more than adequate for 3D apps, and if you also want the fastest and best-looking games on a budget then the Kyro II is the card to get. Unless you don't mind playing at a very low res like 640x480, where the MX might be slightly faster, again because it's not too bandwidth limited in most games at that res so its HW T&L unit helps a lot. But I don't think there are many people who would even consider playing at 640x480 any more; if you buy a new card now you want something that can play new games at 1024x768 and higher, and for that the Kyro II beats the MX comfortably. If you want FSAA then there's no card anywhere near its price range that can challenge the Kyro II, and if you're a Serious Sam fan then only the GeForce 3 can keep up with the Kyro II in that game.
 
Guest
Well, the Kyro II has shown that at high res/colour depth and with FSAA it's consistently faster than both the Radeon DDR and the GeForce 2 GTS, so in effect it has more usable memory bandwidth than both of those cards. In Serious Sam (which is a new game) nothing released can beat the Kyro II, and that includes the GeForce 2 Ultra. Why? Because Serious Sam has quite a bit of overdraw and uses 4 texture layers. How many new games will be similar to Serious Sam? Some may even use the Serious Sam engine.
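A quick back-of-the-envelope on why overdraw matters so much here (the overdraw figure is assumed, purely to show the shape of the numbers): a brute-force renderer textures every pixel of every surface whether it ends up visible or not, while a deferred tiler only textures what actually reaches the screen.

// Back-of-the-envelope: texel work per frame for a brute-force renderer
// versus a deferred tile-based renderer in a Serious-Sam-like scene.
// The overdraw value is an assumption for illustration only.
#include <cstdio>

int main() {
    const double pixels   = 1024.0 * 768.0;   // screen pixels at 1024x768
    const double layers   = 4.0;              // texture layers per surface (as in Serious Sam)
    const double overdraw = 3.0;              // average surfaces stacked per pixel (assumed)

    double bruteForce = pixels * overdraw * layers;  // shades hidden surfaces too
    double tiler      = pixels * layers;             // shades only what is finally visible

    std::printf("brute force: %.1f Mtexels/frame\n", bruteForce / 1e6);
    std::printf("tiler:       %.1f Mtexels/frame\n", tiler / 1e6);
    std::printf("the tiler does %.0fx less texturing work\n", bruteForce / tiler);
    return 0;
}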
 

HolyGrenade
<b>I've only been gone three days and I come back to find 264 new messages in the Graphics section, and most of them are in this thread! I've only quickly read the posts, so sorry if I've missed anything or am going over anything that's already been covered.</b>

"the hardwired T&L in Geforce and Radeon cards will not be included in games for long.

with all new cards comming out with vertex shaders hardwired T&L will be fazed out for the simple reason that is is hardwired, at least CPU's are programable and as they get faster they can always be used for things like vertex shaders, thats something that can't be said for hardwired HW T&L."

It will be good for another year and a half at least. And after that, a good secondary system; the programmable stuff will almost certainly be primary. Around that time I see games with no software geometry support.



"No games can really not work on a non HW T&L card, if anyone was stupid enough to make a game that didn't allow SW T&L then a simple driver trick like geometry assist could be added and the CPU could use the games HW T&L engine and it would probably be faster then using a SW T&L engine"

Do you really think it would be stupid to drop software geometry? All the developers decided to drop their software renderers when hardware 3D cards started to emerge. They decided to drop all the not-so-good 3D APIs and develop exclusively for the 3dfx Glide API. Do you really think they'll hesitate much before dropping this?

From what Carmack's been saying, it sounds like Doom 3 will NOT support Software Geometry. Instead the CPU will be very busy with the complex Physics and AI.

<b>What is Geometry Assist?</b>


"anyway to me it makes allot more sense to allow for sw T&L even if your game runs slowly because by the time these games come out people will have far faster CPU's, I'm sure in a year CPU's will be just as fast (if not faster) then the Geforce 2 HW T&L unit"

It will be a good 5 years before any CPU is released that can match the raw processing power of the GeForce 2 GTS GPU.



"also this is all forgetting that at any decent resolution HW T&L goes out of the window, look at any benchmark and you'll see that a HW T&L card will beat a SW T&L card at 640x480 or 800x600 when neither card is fillrate or mem bandwidth limited, but who plays at those resolution?, once the game goes to 1024x768 the benchmarks level out, go any higher and its the card with the highest fillrate and memory bandwidth (or most efficient fillrate and memory bandwidth) thats going to come out on top and in these cases the HW T&L unit can actually slow the card down by using up precious bandwidth, or use FSAA and again the HW T&L card won't nesassarily win in a HW T&L optimised game unless it has the highest or most efficient fillrate and mem bandwidth."

1024x768 is still excellent with T&L; it beats the Kyro II.



"the Kyro II isn't even limited to 8 layers in a single pass, the only reason its 8 layers is because thats the maximum allowed by DX8, if more were allowed the Kyro II could do it?, fancy 10 texture layers in a game?, what about 12?, 16?, the Kyro II could do those all in a single pass, while the GTS is using 8 passes, obviously this is all theoretical stuff but its worth thinking about."

I think the Kyro is limited to 8 layers in a single pass; all the Kyro review sites keep saying up to 8 layers. The Quake 3 engine supports cascading up to 10 layers in its multitexturing. The minimum is two.

Quake 3 Multitexturing:
=======================
(passes 1 - 4: accumulate bump map)
pass 5: diffuse lighting
pass 6: base texture (with specular component)
(pass 7: specular lighting)
(pass 8: emissive lighting)
(pass 9: volumetric/atmospheric effects)
(pass 10: screen flashes)

(The ones in parentheses are optional.)

I may be wrong, but I don't think the polygons have to be resubmitted on a 'traditional' multitexturing card. From Direct3D 6 onwards, up to 8 texture operation units can be cascaded together to apply multiple textures to a common primitive in a single pass (multitexturing). The results of each stage carry over to the next one, and the result of the final stage is rasterized onto the polygon. This process is called a "texture blending cascade".
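For reference, that cascade is literally just a chain of texture stage states in Direct3D: each stage's result feeds the next, and only the output of the last enabled stage is rasterized. Below is a minimal two-layer sketch (base texture modulated by a lightmap) against the D3D8 fixed-function API; it is not taken from Quake 3, which is an OpenGL engine, and the function name is made up for illustration.

// Minimal sketch of a two-stage texture blending cascade in Direct3D 8:
// stage 0 selects the base texture, stage 1 multiplies it by a lightmap,
// and the result of the last enabled stage is what gets drawn.
#include <d3d8.h>

void SetupTwoLayerCascade(IDirect3DDevice8* dev,
                          IDirect3DTexture8* baseTex,
                          IDirect3DTexture8* lightMap) {
    // Stage 0: take the base texture as-is.
    dev->SetTexture(0, baseTex);
    dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1);
    dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);

    // Stage 1: modulate the result of stage 0 with the lightmap.
    dev->SetTexture(1, lightMap);
    dev->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    dev->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);  // output of stage 0

    // Stage 2: end of the cascade.
    dev->SetTextureStageState(2, D3DTSS_COLOROP, D3DTOP_DISABLE);
}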



"Well Doom 3 will have no less then 6 texture layers and upto 8"

Apart from the base skin texture, Doom 3 is only certain to have two others: bump mapping and dot products. I'm sure there will be more.

There will be no light maps (3 of the layers in the Q3 engine) or shadow maps. Instead it will sport a fully dynamic lighting system with limited raytracing, featuring plenty of calls to the T&L unit!

<font color=blue>"We've been doing hacks and tricks for years, but now we'll be able to do things we've been wanting to do for a long time," Carmack said. "For instance, every light has its own highlight and every surface casts a shadow, like in the real world. Everything can behave the same now and we can apply effects for every pixel." - John Carmack</font color=blue>



"No I don't think Doom3 will be using pixel shaders actually or if it is it'll only be in a small way and won't be neccesary to play the game, Carmack said he was more impressed with the Vertex shaders in DX8 so I think thats what he'll be using"


I'm not sure where you got those comments, but Carmack wants a beast of a graphics card. This is what he said:

<font color=blue>"DX8 tries to pretend that pixel shaders live on hardware that is a lot
more general than the reality.

Nvidia's OpenGL extensions expose things much more the way they
actually are: the existing register combiners functionality extended to
eight stages with a couple tweaks, and the texture lookup engine is
configurable to interact between textures in a list of specific ways.</font color=blue>

BTW, Doom 3 will be an OpenGL game.


<font color=blue>"I am spending a huge amount of graphics horsepower to allow the engine to be flexible in ways that game engines have never been before. It is a little scary to drop down from the ultra-high frame rates we are used to with Q3, but I firmly believe that the power of the new engine will enable a whole new level of game content.
I am hoping that the absolute top-of-the-line system available when the game ships will be capable of running it with all features enabled and anti-aliasing on at 60hz, but even the fastest cards of today are going to have to run at fairly low resolutions to get decent frame rates. Many will choose to drop a feature or two to get some speed back, but they still won't be able to get near 60hz.

Remember, the game won't ship for a long time yet, and today's cards will seem a bit quaint by then" - John Carmack</font color=blue>




"and also obviously they can be done in software so the Kyro II or MX or any card can play a game with vertex shaders thats probably the other reason he's using them"

As it currently stands, Doom 3 will not contain a software geometry path. But once the game is released, there may be some development on a Dreamcast port with severe limitations. The non-T&L community can hope this will also bring a software-geometry PC release.


"you can make similar pixel shader style effects with multi-texturing and also every card can use this, even though most cards will be forced into lots of passes, still having to do lots of passes and at least being able to play the game is better then not being able to play at all when pixel shaders are used."

You commend the Kyro for its innovative tiling, but then prefer the redundancy of wasteful multitexturing over pixel shaders?

Besides, the Doom 3 engine will be scalable, allowing the features that require programmable shaders to be disabled. The penalty is that much of the eye candy will also disappear.


"I'm hearing from allot of people close to the industry that Nvidia would rather eat there own crap then make a tile based renderer, the reason for this seems to be that they don't want to use 3dfx's tech to bail themselves out of the memory bandwith hole they find themsevles in"

That is just rubbish. Not everyone acquires technology to pull a "Microsoft", i.e. just to bury the competition and forget about it. You acquire technology to use it. What's better than having that technology in your products rather than in your competition's?

They acquired 40 former SGI staff along with the Gigapixel technology from 3DFX. Those people will be busy implementing their designs into nVidia technology. This means tiling, just like the Kyro II cards. I don't think nVidia will be furthering the development of Napalm or Rampage into release products, but they will be researched, to resolve any potential issues and to pick up any good concepts and ideas from those designs.


"after QDR I don't think Nvidia have anywhere to go, its either a different design or bust, they can't keep bolting on faster and faster ram."

Ahh, there will always be faster RAM, provided you spend enough on it. Come on, everyone except Intel must be grateful about them speeding up DDR development.

<b>Sorry about the length of the post</b>


<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and despair!"</font color=red>
 

HolyGrenade
The Kyro II probably won't be that brilliant for flight simulators. They don't usually have much overdraw, and they also need quite a bit of processing power to drive the physics of the game. But I do have to admit most of the flight sims out there just use simplistic physics and flight models.



"Will nVidia and ATI convert over to a Tile Base Rendener chip? I believe it depends on how successful the KyroII/III is"
I think the NV30 will have tile-based deferred rendering, from the technology and the 40 former SGI staff acquired from 3DFX/Gigapixel. You have to think nVidia must have believed Gigapixel (before they were taken over by 3DFX) would be a threat; I think that is one of the main reasons the first of the GeForce 2 range was called the GTS (Giga Texel Shader). See, Gigapixel were also going to make just the chips and leave third parties to make the boards. 3DFX reduced the threat by acquiring Gigapixel.



"Looks like the KyroII cost will always be lower than the Radeon or GF line of cards so cheaper for us to buy, cheaper to manufacture but yet more profitable to sell"
The Kyro II will probably be quite cheap, but any future cards will have to be more expensive, because any T&L implementation in them will be new to them and thus expensive in terms of R&D. Also, they'll need a lot more transistors for the T&L engine, which will also drive the cost up.


And Yep! Doom 3 will have 3D textures.


<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and despair!"</font color=red>
 

Ncogneto
With the current GeForce cards being the bottleneck in systems at or above about 850MHz due to their memory bandwidth limitations, how do they compare to the Kyro II cards when used in systems in the 1.3-1.7GHz range? It would appear that the Kyro might be capable of scaling much higher and making up even more of the difference in a high-end system.

A little bit of knowledge is a dangerous thing!
 

Ncogneto
Another thing that has not yet been brought up: it would appear that the Kyro might be the perfect integrated video solution. Has any thought been put into this area?

A little bit of knowledge is a dangerous thing!
 
Guest
You are absolutely correct that a mid- to high-end CPU is needed for software T&L; this is already a well-known fact. But your assumption that, since the board is a low-end board, the people using it must also have a low-end machine doesn't follow. Personally I would not recommend jumping on the GeForce 3 bandwagon just yet... the board is too expensive and its DX8 functions are not needed.

So, what would I do? Well, get the best low-end board on the market, knowing that when DX8 titles finally hit the shelves other vendors will have their DX8 boards available... with lower prices. Secondly, I would buy the best low-end board I could and save some cash to do a CPU/mobo upgrade. So you can get a low-end board, save some bucks knowing DX8 titles are still a ways off, and use that money to upgrade your system in other areas. Remember that your system is only as fast as its slowest bottleneck, and spending some cash to upgrade multiple pieces of your system is better than spending big bucks on a product just because it is the best in its class.
 
Guest
"The Kyro II will probably be quite cheap, but any future cards will have to be expensive because any T&L implementation in them will be new to them and thus expensive in terms of R&D. Also, they'll need lots more transistors for the T&L engine which will also drive the cost up."

Why do you say this? PowerVR have been using a T&L unit on the Naomi 2 board for a while now. Sega seems to love the board, and it is based on pre-Kyro technology. STM have stated on numerous occasions that a T&L solution is available for the PC, but they have decided not to implement it yet.

WHY??

The reason ImgTec and STM decided not to include T&L was in fact to keep costs down. The board is very cheap to produce, having an extremely low transistor count. Add T&L and the transistor count rises, increasing costs. Now, what are OEM companies interested in? MONEY. That is basically it, and since the Kyro II is a cheap card to produce, OEM companies will see a greater profit return on this product than on, say, an NVIDIA product (which is basically what Hercules said when they mentioned they will not be producing an MX 100/200 board). So keeping the price of the board down was paramount to STM at this time.

Now, by the time PowerVR 4 (NP2) is released the parts needed to create it would have also dropped in price. What will they need.... just up the transistor count, add T&L and DDR ram (most likely the cheap stuff), increase to a 4-pipeline solution and up the core speed (rumors have been as high as 300MHz for NP2). Now, this is basically a GeForce GTS board. By the time the board is released it should also be very cheap to produce. But it will definitely produce a lot more power.