Kyro 2 the killer of nvidia ???

G

Guest

Guest
Of course it will support it!
Every single feature of DirectX 7 is implemented in DirectX 8.
The problem is that pixel shaders etc. are not the same as T&L...

Maybe I didn't understand you...
Please noko, explain that to us.
<P ID="edit"><FONT SIZE=-1><EM>Edited by powervr2 on 04/21/01 07:37 PM.</EM></FONT></P>
 

rcf84

Splendid
Dec 31, 2007
3,694
0
22,780
Well, the Radeon 2 will have tile-based rendering and pixel shaders with a 300MHz core and most likely 300MHz DDR SDRAM. Sorry nvidia, your days are numbered.

Leader of the Anti VIA-Nvidia Army
 
G

Guest

Guest
"The other thing we will see on these cards is improved HyperZ technology, with the main task being to save a waste of bandwidth from the processing chip to memory.

ATI has very positive thinking about tile-based rendering as well and its way of thinking here is similar - that is to improve performance radically.
"
I have taken this from your post on another topic...

"very positive thinking about... its way of thinking here is similar"
Similar is not identical...
And that DDR and the high transistor count will not be good ($$$$$$$),
but let's hope that being better than nvidia is not that difficult!!!

I have a question for you:
why is there that aggressiveness against nvidia?
ATi is not the saint of video cards either!
 

rcf84

Splendid
Dec 31, 2007
3,694
0
22,780
Well, I have only had ATi and S3 video cards.

1) Nvidia almost killed S3. The Savage2000 was released too early because of the GeForce.

2) They killed 3dfx.

3) They pushed around PowerColor. No PowerColor Kyro 2 :*( only crappy-image-quality nvidia cards. Pushing around the little guy makes me sick.

4) ATi is going to destroy nvidia with the Radeon 2. OEMs will love this card.

Well, the Radeon will have more transistors than the GeForce3 and still use less power than the GeForce3.

Leader of the Anti VIA-Nvidia Army
 
G

Guest

Guest
Thanks a lot for the link. I don't know what DXTC problem they're talking about though; I've never had a TC problem when running a game in DX8 with my Kyro, but I'll try the new version and see what happens. BTW, DX7 HW T&L units can't perform DX8-specific HW T&L. A card can still use its DX7 HW T&L in DX8 because DX8 is backwards compatible, but a DX7 HW T&L card like the MX cannot support vertex shaders in hardware.
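To make that distinction concrete, here is a rough sketch (my own illustration, not anything from IMGTEC or Microsoft; it assumes the DirectX 8 SDK headers and a HAL device) of how a DX8 program can tell a fixed-function DX7-style T&L card from one with hardware vertex shaders:

// Hedged sketch: query the DX8 caps to see whether the card offers hardware
// vertex shaders or only fixed-function (DX7-style) T&L.
#include <d3d8.h>
#include <cstdio>

int main()
{
    IDirect3D8* d3d = Direct3DCreate8(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS8 caps;
    if (SUCCEEDED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
    {
        bool hwTnL = (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT) != 0;
        bool vs11  = caps.VertexShaderVersion >= D3DVS_VERSION(1, 1);
        std::printf("Fixed-function HW T&L: %s\n", hwTnL ? "yes" : "no");
        std::printf("HW vertex shaders 1.1: %s\n", vs11 ? "yes" : "no");
        // An MX-class card would report yes / no here, so any vertex shader
        // work has to fall back to the CPU.
    }
    d3d->Release();
    return 0;
}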

rcf84, no, the Radeon 2 will not have tile based rendering. What it will have is a more refined version of HyperZ. I've already explained roughly what the HSR part of HyperZ does in a post on this thread, and it's nowhere near tile based rendering in terms of efficiency. Let's hope their refined method is more effective in the HSR department.
 

rcf84

Splendid
Dec 31, 2007
3,694
0
22,780
Well teasy read this:

http://www.theinquirer.net/21040103.htm

Leader of the Anti VIA-Nvidia Army
 
G

Guest

Guest
I've read that already, but it doesn't actually say the Radeon 2 will use tile based rendering. It just says it'll use something similar in the sense that it's trying to save bandwidth, but it's not tile based rendering. I suppose in a way you could call HyperZ's hierarchical Z-buffering tile based rendering, because it does use tiles, but that's where the similarities between it and real tile based rendering end (and the other parts of HyperZ are nothing like tile based rendering in any way). What their improved method will be I don't know exactly, but it won't be real tile based rendering like the Kyro and Kyro II have. It's just an improved HyperZ, which is a few features that can be used on a traditional architecture to improve bandwidth efficiency. In the end the Radeon 2 is still a traditional renderer AFAIK.
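To show roughly what hierarchical Z-buffering does with its tiles, here is a minimal sketch of the general technique as I understand it (not ATI's actual implementation; the tile size is an assumption). Each screen tile remembers the farthest depth already drawn in it, so geometry entirely behind that value can be thrown away in blocks without touching the per-pixel Z-buffer:

// Minimal, illustrative model of coarse hierarchical-Z rejection.
#include <vector>
#include <cstdio>

constexpr int WIDTH = 640, HEIGHT = 480, TILE = 8;          // assumed tile size
constexpr int TILES_X = WIDTH / TILE, TILES_Y = HEIGHT / TILE;

// Each tile stores a conservative "farthest depth already covering this tile".
std::vector<float> tileFarZ(TILES_X * TILES_Y, 1.0f);       // 1.0 = far plane

// A primitive whose nearest point is still farther than everything already
// drawn in the tile can be rejected for that whole tile at once.
bool tileRejects(int tx, int ty, float primNearZ)
{
    return primNearZ > tileFarZ[ty * TILES_X + tx];
}

// When something opaque fully covers the tile, the stored value can move nearer.
void tileUpdate(int tx, int ty, float coveringFarZ)
{
    float& z = tileFarZ[ty * TILES_X + tx];
    if (coveringFarZ < z) z = coveringFarZ;
}

int main()
{
    tileUpdate(10, 10, 0.3f);                                    // wall drawn at z = 0.3
    std::printf("hidden object rejected: %d\n", tileRejects(10, 10, 0.5f));  // 1
    std::printf("nearer object rejected: %d\n", tileRejects(10, 10, 0.1f));  // 0
    return 0;
}

That is only an early-rejection aid bolted onto a traditional renderer; it never sorts or defers the scene per tile the way the Kyro's deferred tile-based renderer does.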
 

rcf84

Splendid
Dec 31, 2007
3,694
0
22,780
Anyway, the Radeon 2 is a scary card. I would not want to see nvidia rush the NV30 and mess up. Wait, actually I would want to see that.

Leader of the Anti VIA-Nvidia Army
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
...DX7 HW T&L units can't perform DX8 specific HW T&L, ....
Not exactly, but the DX7 HW T&L can be used with the CPU doing the vertex shading (skinning) and the video card doing the T&L. That is what the Max Payne demo/test does in 3DMark2001. The upcoming game will do the same. MadOnion reports that this technique is significantly faster than the CPU doing all the skinning and T&L. Also please note the previous benchmarks that I did, which show that this is true. So a video card that has a DX7 T&L engine can still work in DX8 programs; only the vertex shaders have to be performed by the CPU, while the T&L engine does the viewpoint translation and lighting. C :cool: :cool: L
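For what it's worth, here is a bare-bones sketch of the split noko describes (my own illustration, not MadOnion's actual code; the two-bone vertex layout is an assumption). The CPU blends each vertex by its bone matrices, and the already-skinned positions are then what the card's fixed-function T&L unit transforms and lights:

// Hedged sketch: CPU-side matrix-palette skinning whose output would be fed
// to a DX7-style hardware T&L unit for the view transform and lighting.
#include <array>
#include <vector>
#include <cstdio>

struct Vec3 { float x, y, z; };
using Mat4 = std::array<float, 16>;                  // row-major 4x4

Vec3 transformPoint(const Mat4& m, const Vec3& p)
{
    return { m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3],
             m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7],
             m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11] };
}

struct SkinnedVertex {
    Vec3  position;
    int   bone[2];                                   // two influences, assumed
    float weight[2];                                 // weights sum to 1
};

// Blend each vertex by its bone matrices on the CPU.  The result is a plain
// position stream the hardware T&L engine can handle like any static mesh.
std::vector<Vec3> skinOnCpu(const std::vector<SkinnedVertex>& in,
                            const std::vector<Mat4>& bones)
{
    std::vector<Vec3> out;
    out.reserve(in.size());
    for (const SkinnedVertex& v : in) {
        Vec3 a = transformPoint(bones[v.bone[0]], v.position);
        Vec3 b = transformPoint(bones[v.bone[1]], v.position);
        out.push_back({ a.x*v.weight[0] + b.x*v.weight[1],
                        a.y*v.weight[0] + b.y*v.weight[1],
                        a.z*v.weight[0] + b.z*v.weight[1] });
    }
    return out;    // next step (not shown): submit this buffer to HW T&L
}

int main()
{
    Mat4 identity = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    std::vector<SkinnedVertex> mesh = { { {1, 2, 3}, {0, 0}, {0.5f, 0.5f} } };
    Vec3 p = skinOnCpu(mesh, { identity }).front();
    std::printf("skinned vertex: %g %g %g\n", p.x, p.y, p.z);
    return 0;
}

A card with hardware vertex shaders would do the blending itself; a DX7 T&L card offloads only the transform and lighting, and a card with no T&L unit leaves both steps to the CPU.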
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
I installed DX8.1 on my W2k system and it seems to work just fine, with a small increase in the 3DMark2001 benchmark of around 50 points. Haven't tried it in WinMe yet.
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
The only thing we can do is speculate on what ATI is going to improve. Here are some possibilities, which I am sure other people can add to or just plain disprove. One thing is sure: new technology is around the corner.

1. Partial on-chip frame buffer (using static RAM on the Radeon 2 chip), where a tile or section of the frame is worked on and then sent to the DDR RAM frame buffer in sections. Sounds like a tile chip but it still isn't, just a buffer. This allows the Radeon 2 chip to work at a much greater speed than current DDR RAM would allow, plus it decreases bandwidth requirements for the onboard RAM. Meaning cheaper DDR RAM can be used, resulting in a lower cost for a much more powerful board.

2. Improved hierarchical Z, where a quick render of all current frame object vertex depth points (Z) is done and stored prior to the pixel pipelines. This would form a more advanced hierarchical Z, resulting in zero overdraw for the pixel pipeline and virtually eliminating the Z buffer. Basically: render the scene quickly (one frame), extrapolate the depth points, then texture it, and move on to the next frame (see the sketch after this list). (Currently the hierarchical Z is built as the frame is rendered; the objects that are rendered first have their depth vertex points written to hierarchical Z. Subsequent object vertices are ignored if behind the previous value, but if in front of what is in hierarchical Z then a new depth value is written over it, which subsequently causes overdraw in the frame buffer down the line, since once again the pixel has to be updated in the frame buffer.)

3. Use a quad pumped DDR ram configuration.

4. Copy nVidia Lightspeed Memory Architecture.

5. Onboard static ram where additional programs can be stored for the GPU.

Add your own theory. Rumor has it the Radeon 2 will be 4-5 times the speed of the current Radeon, which if true would be a revolutionary speed increase.
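A toy model of the zero-overdraw idea in point 2 (purely my own sketch of the general depth-pre-pass concept, not anything ATI has announced): depth is resolved for the whole frame first, and only then are pixels textured, so each screen pixel gets the expensive work exactly once.

// Illustrative software model of a depth pre-pass removing overdraw.
// Triangle setup is skipped; "fragments" are just single pixels.
#include <vector>
#include <limits>
#include <cstdio>

struct Fragment { int x, y; float z; int color; };

int main()
{
    const int W = 4, H = 4;
    std::vector<float> depth(W * H, std::numeric_limits<float>::max());
    std::vector<int>   frame(W * H, 0);

    // Two objects covering the same pixel, the far one submitted first.
    std::vector<Fragment> scene = { {1, 1, 0.8f, 111}, {1, 1, 0.2f, 222} };

    // Pass 1: depth only - no texturing, no colour writes.
    for (const Fragment& f : scene)
        if (f.z < depth[f.y * W + f.x]) depth[f.y * W + f.x] = f.z;

    // Pass 2: texture only the fragments that won the depth test, so the
    // costly per-pixel work happens once per screen pixel, never twice.
    int shaded = 0;
    for (const Fragment& f : scene)
        if (f.z == depth[f.y * W + f.x]) { frame[f.y * W + f.x] = f.color; ++shaded; }

    std::printf("pixels textured: %d (overdraw avoided)\n", shaded);
    return 0;
}

Whether the Radeon 2 does anything like this is exactly the kind of speculation being invited here.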
 
G

Guest

Guest
Yeah, a DX7 HW T&L unit can still do normal DX7 HW T&L, but as you say it doesn't support vertex shaders, so that will have to be done by the CPU on all cards. Also, AFAIK the Max Payne demo in 3DMark2001 doesn't use vertex shaders; Max Payne the game will though.
 
G

Guest

Guest
Yeah, the Radeon 2 should be very nice. I hope it gives Nvidia a run for its money. We'll have two killer cards against Nvidia's latest card in the next 4-5 months, from ATI and IMGTEC. Let's see what Nvidia's made of when PowerVR 4 and Radeon 2 are released (obviously PowerVR 4 will be a few months after Radeon 2 though).
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
This is what MadOnion says about Game 3 or Max Payne Demo:

Game Test 3 - Lobby
A man walks into the lobby ..........
ALL THE CHARACTERS USE VERTEX SHADER SKINNING except if your system has a DirectX7 generation 3D accelerator, capable of hardware transformation and lighting. Then skinning is done using a custom skinning technique, which does skinning on the CPU, but transforms and illuminates the skinned vertices in the graphics hardware. This technique is more efficient for such cards because a vertex shader implementation would transform and light the vertices using the CPU, which would most likely be slower. The empty shells and discarded guns are controlled by Ipion real-time physics by Havok.
So yes, the 3DMark2001 Max Payne demo does use vertex shading. A substitute method is used if you have a DX7 T&L engine, and another if you have no T&L engine. I hope this clarifies things.

<P ID="edit"><FONT SIZE=-1><EM>Edited by noko on 04/22/01 00:54 AM.</EM></FONT></P>
 
G

Guest

Guest
Yeah, I found that info a few minutes ago and was just about to correct my post. The demo does use vertex shading, as you say, unless you have a DX7 HW T&L card, in which case a substitute is used; if you have no HW T&L it runs the vertex shaders in software. How many games will use a substitute method for DX7 HW T&L cards though? We'll see anyway. When's the game coming out, BTW?

EDIT: just thought I'd include this:

"NOTE: The scene is not from Remedy's game Max Payne, and should not be used to evaluate how Max Payne will look or play."

As I've said, there's no way Max Payne will be released and get 30fps at 640x480 on a Radeon DDR, not if they want to sell games. I'm betting the game will run much closer to the speed of the low detail 3DMark2001 test than the high detail test. After all, look at 3DMark2000: there still aren't any games that run as badly as the high detail helicopter test does on my PC (not that it's really that bad, but I haven't seen any game my PC couldn't handle at full detail and 1024x768x32 a lot faster than that test). The MadOnion people make their tests specifically to be very challenging to the PC's hardware; game developers make games to look and play well, not to be challenging to hardware, so IMO Max Payne will be nowhere near as slow as the high detail lobby test.
 
G

Guest

Guest
I found something very interesting about the lobby test:

"The high detail scene has the following additions compared to the low detail scene:

Everything is reflected in the marble floor.

All characters have DYNAMIC SHADOWS.

Some shots that hit the walls shatter a part of the wall surface material.

The dynamic shadows are done by RENDERING INTO TEXTURES. If rendering to textures is not supported by hardware, shadows are rendered with the CPU"

So here we see why the Kyro II doesn't like the high detail test so much: the DX8 rendering-into-textures bug means it's wasting the system's CPU power to do the shadows when the Kyro II could do them in hardware.

Also another interesting thing was this:

"The reflection in the floor is done by rendering everything upside down"

So that's twice the overdraw for traditional renderers. We can see that the high detail test isn't so much slower than the low detail test because of more polys, but because of the dynamic shadows (which shouldn't take away much performance if the card can render into a texture) and mostly because of having to render the whole scene twice (including all the overdraw twice too): once for the normal scene and once for the reflection in the floor.
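Here's a small sketch of the "render everything upside down" trick (my own illustration of the general planar-reflection technique, not Remedy's or MadOnion's code; the matrix convention is an assumption). The scene is drawn a second time through a matrix that mirrors every vertex across the floor plane, which is why geometry, fillrate and overdraw all roughly double:

// Hedged sketch: reflection matrix for a horizontal floor at y = h.
#include <array>
#include <cstdio>

using Mat4 = std::array<float, 16>;                  // row-major 4x4
struct Vec3 { float x, y, z; };

// Mirror across the plane y = h:  y' = 2h - y, x and z unchanged.
Mat4 reflectAcrossFloor(float h)
{
    return { 1.f,  0.f, 0.f, 0.f,
             0.f, -1.f, 0.f, 2.f * h,
             0.f,  0.f, 1.f, 0.f,
             0.f,  0.f, 0.f, 1.f };
}

Vec3 apply(const Mat4& m, const Vec3& p)
{
    return { m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3],
             m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7],
             m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11] };
}

int main()
{
    Mat4 mirror = reflectAcrossFloor(0.0f);          // marble floor at y = 0
    Vec3 head   = { 1.0f, 1.8f, 2.0f };              // a vertex above the floor
    Vec3 image  = apply(mirror, head);               // its upside-down twin
    std::printf("reflected vertex: %g %g %g\n", image.x, image.y, image.z);
    // Drawing the whole scene once normally and once through 'mirror' is what
    // doubles the triangle count and the overdraw in the high detail test.
    return 0;
}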
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
MadOnion took the Max Payne engine and created the whole lobby scene and characters, so that scene is not to be expected in the actual game. Still, the capability of that engine is present in Game 3, including the effects. Obviously the AI portion of the game engine wouldn't be needed in the demo as it is in the real game, so I believe you are right in saying it will be toned down. Not much more I will add; check out the May issue of PC Gamer for some awesome visuals.

<P ID="edit"><FONT SIZE=-1><EM>Edited by noko on 04/22/01 02:02 AM.</EM></FONT></P>
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
There are more triangles in the high detail tests (on average nearly twice as many: 41729 vs 21746 per frame).

Low detail statistics:

Rendered triangles per frame (min/avg/max): 16681 / 21746 / 39890
Rendered textures per frame with 16 bit textures (min/avg/max): 2.8 / 4.1 / 4.7 MB
Rendered textures per frame with 32 bit textures (min/avg/max): 5.7 / 8.2 / 9.4 MB
Rendered textures per frame with texture compression (min/avg/max): 5.0 / 7.2 / 8.4 MB

High detail statistics:

Rendered triangles per frame (min/avg/max): 23304 / 41729 / 93845
Rendered textures per frame with 16 bit textures (min/avg/max): 4.8 / 6.0 / 7.1 MB
Rendered textures per frame with 32 bit textures (min/avg/max): 7.8 / 10.3 / 12.2 MB
Rendered textures per frame with texture compression (min/avg/max): 7.0 / 9.3 / 10.9 MB
Plus the Radeon supports shadows in hardware by using priority buffers.

<P ID="edit"><FONT SIZE=-1><EM>Edited by noko on 04/22/01 02:01 AM.</EM></FONT></P>
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
Have you tried out the new DX8.1 to see if it corrected the texture rendering problems with the Kyro 2? I wonder if the DXTC texture compression was causing that conflict between the Kyro 2 and DX8 that stops the Kyro chip from writing to the texture.
 
G

Guest

Guest
Yeah, there are more triangles, obviously, since it draws the whole scene twice (hence twice the polys), but that wasn't what I was saying. I was saying there aren't many extra polys added to the actual individual scenes over the low detail scene. The reflective floor could be turned off without much effect at all on the game, and then poly numbers are halved (as well as fillrate and memory bandwidth) and the game will run at twice the speed.

I checked, and every test uses rendering into textures in high detail. I also checked a few more games I play, like Giants, and again when I turn on shadows I lose 10fps. So Giants is using rendering into textures too, and because I have DX8 installed it's forcing the CPU to do the shadows rather than just letting the Kyro do them.
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
That's a good observation about Max Payne, and I believe it is correct. Did DX8.1 solve the problem? I haven't tried it yet in WinMe, but for W2k it seems to work a little bit better. So what is your setup with the Kyro 2, as in CPU etc.?
 
G

Guest

Guest
I've checked and the problem has not been fixed. I'm not sure what the DXTC bug that they fixed was, but it wasn't to do with rendering into textures. A simple way to check is to try the demo in 3DMark2001: it doesn't allow for rendering into textures in software, so instead of just letting your CPU do it if your graphics card can't, it says "this demo requires a graphics card which can render into textures" and won't start. I just tried it after installing the DX8.1 beta and it still says the same thing, so it definitely hasn't been fixed.
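For anyone wondering how a program ends up convinced the card can't do it, here's a rough sketch (mine, and only a guess at how the 3DMark check might look; the formats are assumptions). Under DX8 the usual way to ask is CheckDeviceFormat with the render-target usage flag, and if the runtime or driver answers no for the Kyro, the application never even tries to render into a texture:

// Hedged sketch: asking the DX8 runtime whether a texture can be used as a
// render target (the "rendering into textures" capability in question).
#include <d3d8.h>
#include <cstdio>

int main()
{
    IDirect3D8* d3d = Direct3DCreate8(D3D_SDK_VERSION);
    if (!d3d) return 1;

    HRESULT hr = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT,
                                        D3DDEVTYPE_HAL,
                                        D3DFMT_X8R8G8B8,      // display format (assumed)
                                        D3DUSAGE_RENDERTARGET,
                                        D3DRTYPE_TEXTURE,
                                        D3DFMT_X8R8G8B8);     // texture format to render into
    std::printf("Render-to-texture reported: %s\n", SUCCEEDED(hr) ? "yes" : "no");
    // If DX8 wrongly reports "no" for the Kyro, an app like the 3DMark2001
    // demo refuses to start, and games fall back to doing shadows on the CPU.
    d3d->Release();
    return 0;
}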

Right now I still have my Kyro 1 in my system:

KT7A mobo
Duron 650@890mhz (137mhz FSB)
256mb cas 2 crucial ram.
Videologic Vivid! (Kyro 1)
 

noko

Distinguished
Jan 22, 2001
2,414
1
19,785
That's not good, because who knows how long Microsoft will take to correct this problem, which wasn't in DX7 for the Kyro but now appears in DX8. Hopefully a workaround is found rather than waiting for Microsoft.
 
G

Guest

Guest
Nobody but Microsoft can fix this problem; basically DX8 was written to (wrongly) think the Kyro and Kyro II do not feature rendering into textures. I just hope IMGTEC can put pressure on them to hurry the fix up, and I hope Nvidia don't put pressure on them to slow the fix down. And no, I'm not joking BTW; I really wouldn't put it past Nvidia to use their new-found friendship with MS to help them damage the Kyro II. I don't think it's likely that MS would slow it down for Nvidia, but I bet Nvidia would try to persuade them; after all, they've played every other dirty underhanded trick in the book to try to hinder the Kyro II.