Kyro 2: the killer of nVidia???

Hmmm, well I am sure you know I am a big supporter of PowerVR technology. In fact I have been running a PowerVR support site for a number of years (the URL is part of my profile). So I may be a bit biased, but I do not downplay any problems I see in the PowerVR design. In fact, when the Neon 250 came out I do not think I ever promoted that board to anyone, as it had lots of problems. (Hmmm... my homepage did not show up in my profile although I did enter it when I signed up: http://pvr.gamestats.com/start.shtml)

With the KYRO, though, most of the problems associated with tile-based renderers have been corrected. Game support is finally outstanding and speed is great. One of the main issues you keep talking about is the board's lack of T&L and how it will affect the KYRO II in the future. I agree that in the future the lack of T&L will become a factor. What you are neglecting to mention about the GeForce 2 line of boards and the Radeon boards (the GeForce2 MX, the GTS boards and the budget Radeons) is their problem with available memory bandwidth. We keep talking about the future, and yes, T&L will become a major factor... but the question becomes when? The future will also bring increased memory bandwidth requirements, and in this respect the GeForce and Radeon boards will be hit harder than the KYRO II. Look at Serious Sam, a game that does use T&L but also includes high levels of overdraw: it has been shown that the KYRO II beats out almost all the GeForce 2 boards with this title. So I have to ask how T&L will factor in when the GeForce 2 line of boards is already memory bandwidth limited in future games of increased complexity.

And I also believe that T&L will not become a solid factor in games for at least a year. nVidia's biggest-selling boards at the moment are still the TNT2 Ultras that are slapped into OEM machines, and these machines make up the majority of PCs on the market. TNT2s do not have hardware T&L... so I cannot see game developers making the push to T&L-required games anytime soon.

<P ID="edit"><FONT SIZE=-1><EM>Edited by pvrrev on 04/11/01 09:22 AM.</EM></FONT></P>
 
I am using a KYRO (Vivid!) on a Celeron 600 with 128 MB of SDRAM and Serious Sam flies... 32-bit everything, large textures, quad textures, etc.

I do agree with you that lower-end PCs may not see the true power of the KYRO II compared with high-end PCs. But I think that is a plus as well, as it shows the board scales extremely well with CPUs... so the card will only get better when you upgrade your CPU...

In fact I can probably buy a KYRO II and a mobo/Athlon combo for the same price as a GeForce2 GTS Pro (I am in Canada, and the GTS Pros are selling retail here for 599.00 and the Ultras for 699.00).
 
<i>(I am in Canada, and the GTS Pros are selling retail here for 599.00 and the Ultras for 699.00).</i>

That's a joke, right? I can get them for far less here in the States. Close to 460 CAD for the Ultra.

=
<font color=green>Running Celery 400 since 1999.</font color=green>
 
No, that is not a joke... I was in Windsor, Ontario the other day and walked into a PC gaming store. I took a look at the GTS Pros and Ultras they had and was utterly shocked when I saw the prices: 599.00 for the GTS Pro and 699.00 for the Ultra. Now I am sure you can find them cheaper somewhere, but I am just stating what I have seen personally in a retail store (not online!).
 
Here in Portugal it is even worse...
I have to spend more than US $700 to buy a GeForce 2 GTS...
:)
It would probably be cheaper to buy a GeForce online from the U.S.
<P ID="edit"><FONT SIZE=-1><EM>Edited by powervr2 on 04/11/01 09:59 AM.</EM></FONT></P>
 
Sorry about the double post...
 
What resolution do you run Serious Sam at? Check how fast it will go on the same machine using a GeForce2 Pro. I think you will be surprised how fast it is.

Don't say I neglected to mention the bandwidth limitations. It has been mentioned and acknowledged time and time again. You see, for current games, where you claim the Kyro II shines, this bandwidth limitation comes in at extremely high resolutions. How many people play games at those resolutions? Especially on your Celeron 600.

Bandwidth will become a limitation, and all the manufacturers have their own solutions. If you look at my previous posts you will see some of what the GeForce has to offer. ATI already has HyperZ in its Radeon range. nVidia made a mistake not implementing HSR in the GeForce2 range; no one is trying to hide that, and Tom demonstrated it when reviewing cards from the GeForce 2 range. But they are still getting extremely high framerates at decent resolutions with high detail. Most people play games at 800x600 or 1024x768. Very few people actually have monitors large enough (19" or higher) to justify higher resolutions, and any low-cost monitor of that size will only reach those resolutions at low refresh rates. So there are a lot of obstacles, from a user's point of view, to getting to those resolutions. And at 1024x768, the GeForce cards are still the best.

I played the Giants demo on my computer the other day and it looked excellent and ran fast. I will be getting Black & White and the demo for Aquanox to see how they run.


<i><b><font color=red>"2 is not equal to 3, not even for large values of 2"</font color=red></b></i>
 
I've only read the first 4 pages of this thread, so if all this stuff has already been said then just ignore me :) But I thought I'd post anyway, because a few people here seem to have quite a few misconceptions about a lot of stuff. So here goes:

The Radeon does NOT have hidden surface removal (apart from the HSR that every card has, i.e. a z-buffer that does HSR AFTER it has already rendered everything). The GeForce 3 does NOT have anything like the HSR capabilities of the Kyro II; for the GeForce 3 to do even a quarter of the HSR of the Kyro II, it needs games to be specifically written in a certain way. As for the Radeon, it has early z-clear and z-compression, which do nothing more than save z-buffer bandwidth. It does NOT do HSR before pixels have been rendered, and so it has to draw every pixel that's sent to it. The Kyro II only renders what is seen, no matter what is actually sent to the card to be rendered.
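A toy sketch of the distinction being argued here; this is not either chip's real pipeline, and every name in it is invented. An immediate-mode part shades a fragment (texture fetches, blending) and only then z-tests it, so occluded fragments still cost fill rate, while a deferred tile renderer resolves visibility first and shades only the surviving pixel:

```cpp
// Toy contrast between immediate-mode z-buffering and deferred (tile-based)
// rendering. Purely illustrative; no real chip works at this level of naivety.
#include <cstdio>
#include <vector>

struct Fragment { int x, y; float z; int textureWork; };  // one candidate pixel

// Immediate mode: every fragment is shaded, and only THEN z-tested, so
// occluded fragments still cost fill rate and memory bandwidth.
int immediateMode(const std::vector<Fragment>& frags, std::vector<float>& zbuf, int w) {
    int shadingCost = 0;
    for (const Fragment& f : frags) {
        shadingCost += f.textureWork;            // work done regardless of outcome
        float& depth = zbuf[f.y * w + f.x];
        if (f.z < depth) depth = f.z;            // z-test happens after shading
    }
    return shadingCost;
}

// Deferred: first pass resolves visibility per pixel (cheap, on-chip for a
// tile renderer), second pass shades only the single visible fragment.
int deferredMode(const std::vector<Fragment>& frags, std::vector<float>& zbuf,
                 std::vector<int>& winner, int w) {
    for (size_t i = 0; i < frags.size(); ++i) {          // visibility pass
        const Fragment& f = frags[i];
        float& depth = zbuf[f.y * w + f.x];
        if (f.z < depth) { depth = f.z; winner[f.y * w + f.x] = (int)i; }
    }
    int shadingCost = 0;
    for (int idx : winner)                               // shading pass
        if (idx >= 0) shadingCost += frags[idx].textureWork;
    return shadingCost;
}

int main() {
    const int w = 4, h = 1;
    // Three overlapping fragments on the same pixel: depth complexity 3.
    std::vector<Fragment> frags = { {0,0, 0.9f, 10}, {0,0, 0.5f, 10}, {0,0, 0.2f, 10} };
    std::vector<float> z1(w*h, 1e9f), z2(w*h, 1e9f);
    std::vector<int> winner(w*h, -1);
    printf("immediate shading cost: %d\n", immediateMode(frags, z1, w));          // 30
    printf("deferred  shading cost: %d\n", deferredMode(frags, z2, winner, w));   // 10
}
```

With three overlapping fragments on one pixel (depth complexity 3), the immediate-mode path does three times the shading work for the same final image.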

Also, one person here seems to think that T&L is something that's only done in hardware. This is totally untrue. T&L is Transformation and Lighting, which always had to be done, long before Nvidia came along. What the Radeon and GeForce cards have is hardware T&L; a normal system CPU doing T&L is called software T&L.
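For what it's worth, here is a minimal sketch of what "T&L" actually computes, independent of whether a CPU or a dedicated geometry unit executes it. The structures and the single directional light are invented for illustration:

```cpp
// What T&L computes, regardless of whether a CPU ("software T&L") or a
// dedicated unit ("hardware T&L") executes it: transform each vertex by a
// 4x4 matrix, then evaluate a lighting model per vertex. Illustrative only.
#include <cstdio>

struct Vec3 { float x, y, z; };

// Transform: apply a 4x4 row-major matrix to a point (w assumed to be 1).
Vec3 transform(const float m[16], const Vec3& v) {
    return { m[0]*v.x + m[1]*v.y + m[2]*v.z  + m[3],
             m[4]*v.x + m[5]*v.y + m[6]*v.z  + m[7],
             m[8]*v.x + m[9]*v.y + m[10]*v.z + m[11] };
}

// Lighting: simple per-vertex diffuse term from one directional light.
float diffuse(const Vec3& normal, const Vec3& lightDir) {
    float d = normal.x*lightDir.x + normal.y*lightDir.y + normal.z*lightDir.z;
    return d > 0.0f ? d : 0.0f;
}

int main() {
    const float world[16] = { 1,0,0,5,  0,1,0,0,  0,0,1,0,  0,0,0,1 }; // translate x+5
    Vec3 v = {1, 2, 3}, n = {0, 1, 0}, light = {0, 1, 0};
    Vec3 t = transform(world, v);
    printf("transformed: (%g, %g, %g), diffuse: %g\n", t.x, t.y, t.z, diffuse(n, light));
    // A CPU looping over thousands of vertices per frame like this is
    // "software T&L"; GeForce/Radeon-class parts moved the same math on-chip.
}
```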

Now, onto the argument that the Kyro II will be dead when games start to use DX8 hardware T&L. It won't be dead, but it certainly won't be able to perform the T&L anywhere near as fast as the GeForce 3. However, what some people don't seem to realise is that only the GeForce 3 can use DX8 hardware T&L. Someone here said that the Kyro II is stuck with DX7; well, so is every card apart from the GeForce 3. Once games move to DX8 hardware T&L, all those other GeForce cards and the Radeon will (just like the Kyro II) have to use the system CPU to emulate the DX8 hardware T&L operations. Those fixed hardware T&L units on all cards but the GeForce 3 will be useless, so the GeForce cards will lose even more performance than the Kyro II will.

I just read a few more posts, so I thought I'd add this:

People seem to think that tile-based rendering is the only advantage the Kyro II has and that it's its only good feature. What about the ability to apply 8 texture layers in a single pass to the framebuffer, which means it never has to take more than 1 pass per pixel (meaning no colour loss, less framebuffer bandwidth wasted, and less geometry has to be reloaded)? What about internal true colour, which gives the Kyro II near-32-bit image quality in 16-bit? What about incredibly efficient FSAA (so much more efficient than the GeForce's FSAA method)? What about EMBM, Dot3, and the ability to force compression of large textures in both OpenGL and D3D in all games (even those that don't support texture compression) with no noticeable image quality loss (hence no terrible-looking sky in Q3)? Or the ability to blend transparencies internally in 32-bit colour for no loss in colour accuracy even when outputting at 16-bit? There are so many great features.
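A toy illustration of the internal-true-colour point: quantizing to a 16-bit channel width (5 bits per channel here) after every blend pass accumulates rounding error, while blending at full precision internally and quantizing once at output does not. The layer count and values are made up:

```cpp
// Why internal true colour matters: rounding to a 5-bit channel after EVERY
// blend pass drifts away from the true value, while blending at full
// precision and quantizing ONCE at scan-out stays close. Toy numbers only.
#include <cstdio>

float quantize5bit(float c) {             // emulate one 16-bit-mode colour channel
    int q = (int)(c * 31.0f + 0.5f);
    return q / 31.0f;
}

int main() {
    const int layers = 8;
    const float layerValue = 0.03f;       // each texture layer adds a little light
    float multipass = 0.0f, internal = 0.0f;
    for (int i = 0; i < layers; ++i) {
        multipass = quantize5bit(multipass + layerValue); // roundoff every pass
        internal  = internal + layerValue;                // full precision internally
    }
    printf("quantize every pass: %.4f\n", multipass);
    printf("quantize once:       %.4f (true value %.4f)\n",
           quantize5bit(internal), layers * layerValue);
}
```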

Also, you don't need a big monitor to play games at high res (or close to it anyway), because of FSAA. Yes, on a GeForce or a Radeon, FSAA is inefficient and slow: running 800x600 with 4xFSAA will be a lot slower than running at 1600x1200 without FSAA, and won't look quite as good. But for the Kyro II, with its efficient FSAA, 800x600 with 4xFSAA is a lot faster than 1600x1200 and looks almost as good. So with the Kyro II, everyone can play with great image quality, even with a 14" monitor.
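The arithmetic behind that comparison, under the assumption of brute-force 4x supersampling (four shaded samples per output pixel):

```cpp
// Sample-count arithmetic behind the FSAA comparison: brute-force 4x
// supersampling at 800x600 shades exactly as many samples as 1600x1200.
#include <cstdio>
int main() {
    long fsaa  = 800L * 600 * 4;    // 4 shaded samples per output pixel
    long hires = 1600L * 1200;      // one sample per pixel
    printf("800x600 @ 4xFSAA: %ld samples\n", fsaa);    // 1,920,000
    printf("1600x1200:        %ld samples\n", hires);   // 1,920,000
}
```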

OOPS, I just realised I replied to the wrong person; most of what I said here was aimed at holygrenade.
 
<i>Also, one person here seems to think that T&L is something that's only done in hardware. This is totally untrue. T&L is Transformation and Lighting, which always had to be done, long before Nvidia came along. What the Radeon and GeForce cards have is hardware T&L; a normal system CPU doing T&L is called software T&L.</i>

Why yes, you're right, T&L is something that is only done in hardware!

=
<font color=green>Running Celery 400 since 1999.</font color=green>
 
What do you mean? T&L is Transformation and Lighting; it is not specific to either hardware or software, hence T&L is NOT something that is only done in hardware.
 
I saw the same misunderstandings as well (Radeon & HSR? Yeah, right. Trident & fast 3D :) )
Someone said that developers get the cards sooner. Yeah, when your name is Carmack or Sweeney, you will get one sooner. But when your name is John Doe from EA Sports, you can kiss your pre-sample card goodbye!

Some people are talking about Unreal II and how it will kick ass on a GF2. Unreal 2 will take advantage of all the new gadgets found in DX8: vertex shading, per-pixel shading, etc. Some of those functions require a programmable GPU (no, I'm not kidding). I never read that the GF2 GPU is programmable; correct me if I'm wrong. The T&L on those cards will not cope with the high poly counts (3,000~5,000 per enemy, some even 25,000) in Unreal 2. Of course, the Kyro II will not cope with those amounts either.
Complexity will go up with Unreal 2. Hmmm, more bandwidth anyone? Even the GF3 will have bandwidth problems. And Unreal 2 is coming out Q1 2002. By then there will be a GF3 Ultra (or something like that) for almost the same price (or more ;( ). So what I'm saying is that the time to buy a GF3 isn't here. It's just too new. The new DX8 features are amazing but will not be used for a while (maybe in some demos, but I'd rather play games than watch those). By the time Unreal 2 is out, the Kyro III (and the Radeon II, and maybe a surprise from Bitboys ;( ) will be out with a programmable GPU, DX8 features, etc. They just felt that a DX8 card should be fully DX8 compliant, not shaders without a programmable GPU; that would be sh*t. So I'm positive that their next card will also be DX8 compliant (or they already know DX9 :) )
So now they have a fully DX7 card (minus cubic mapping, but which game uses that?). So what? Its price tag is really low for the performance of the card. I know I can play games with it until the DX8 games arrive (next year). And I prefer image quality over image quantity (fps). So the GF2 is out of the question: bad 2D quality, sh*t texture compression (the sky in Q3, I always shiver when seeing that again), and the Kyro II is cheaper and will cope with more complex scenes made in DX7. I can play B&W, Warcraft 3 (please come out tomorrow, I can't wait), Diablo II Exp., all with good speeds, excellent quality and no worries about the lack of T&L. B&W even runs on a Banshee (my current card).
So is the Kyro II the killer of nVidia? No, but if they can manage to make a hit out of the Kyro II (and it will be one, for the poor bastards among us (me too!)), people will have more faith in the Kyro III. ST has to prove that tile-based rendering will work with games, and what better way to prove it than with a cheap card? You can see the difference between Kyro I and II just from increasing the clockspeed. Imagine the power when they use it with 4 pipelines (I & II have two), slower DDR (not the expensive stuff that nVidia is using for its bandwidth problems) and a programmable GPU. Damn, I want one, just thinking of such a monster :)
The GF3 is at its end; the Kyro is just at its beginning (wow, did I write that?). nVidia will be forced to go play with Gigapixel technology, or they will be just plain stupid and go with QDR (for their precious bandwidth). I hope for the first, and with a normal price tag for once.
I just can't help it, I love smart rendering over brute force. Now it's up to nVidia to go the smart way.

Just my 2 cents, I will watch all your replies :)
 
Oooo! You've addressed me by name. Looks like I have to reply.

<i>"The radeon does NOT have hidden surface removal (appart from the HSR that every card has, as in a z-buffer that does HSR AFTER its already rendered everything)"</i>

The ATI Radeon has HyperZ, which takes place before the data is sent to the frame buffer.


<i>"Geforce 3 does NOT have anything like the HSR capabilities of the Kyro II, for the Geforce 3 to do even a quarter the HSR of the Kyro II is needs games to be specifically written in a certain way"</i>

Read my post (about two or three posts before this one). Yes, it's true, the game has to be optimised to take full advantage of it. If the game code is written right, it will get a huge bandwidth boost.

<i>"as for the Radeon it has early z-clear annd z compression which does nothing more then save z-buffer bandwidth"</i>

What else is the purpose of HSR?


<i>"it does NOT do HSR before pixels have been rendered and so it has to draw every pixel thats sent to it"</i>

Actually, it is done before building the scene in the framebuffer.


<i>"Also one person here (Oh no! he means me! Oh the horror!) seems to think that T&L is something thats done in hardware, this it totally untrue, T&L is Transformation and Lighting which has always had to be done a long time before Nvidia came along, what the Radeon and Geforce cards have is hardware T&L, a normal system CPU does T&L, this is called software T&L."</i>

Well, actually, 'Transform' is an invented term for geometry calculations; that's what they were usually called before the GPU.


<i>"to use DX8 hardware T&L all those other Geforce cards and the Radeon will (just like the Kyro II) have to use the system CPU to emulate the DX8 hw T&L operations"</i>

What!!?? Where the hell do you get your info???


<i>"those fixed hardware T&L units on all cards but the Geforce 3 will be useless, so the Geforce cards will loose even more performance then the Kyro II will."</i>

LOL!!! thanx for the laugh!


The features you mentioned are all good, but already known. Except the 16-bit part: who runs their games at 16-bit?

<i>"on a Geforce or a Radeon FSAA is inefficient and slow and running 800x600 with 4xFSAA will be allot slower then running at 1600x1200 without FSAA"</i>

Hmm???


FSAA just blends the image better. Higher resolutions give you more detail.


<i>"so with the Kyro II everyone can play with great image quality even with a 14" monitor"</i>

14" monitor? No wonder you like the Kyro so much.



<i><b><font color=red>"2 is not equal to 3, not even for large values of 2"</font color=red></b></i>
 
Wow! The first non-hypocritical Kyro supporter. I'm impressed.

You know, you do make sense. nVidia will continue 3dfx's research into Gigapixel technology and an improved T-Buffer, peripheral blurring, etc. Not all of those features are likely to make it into the next revision of the GeForce series, but into the batch after that.

nVidia does think it has an iron curtain. They also think their customers have bottomless pockets. So it is likely they will go with QDR RAM when it is available; perhaps they will even push for its availability.

They'll only learn to manage the production costs the day Imagination Tech learns to manage the production time! ;-)



<i><b><font color=red>"2 is not equal to 3, not even for large values of 2"</font color=red></b></i>
 
<<<<<<The ATI Radeon has HyperZ, which takes place before the data is sent to the frame buffer.>>>>>>>

OK, you explain to me how HyperZ does anything to deal with overdraw. AFAIK, HyperZ merely clears the z-buffer faster and compresses the z-buffer; it does not check which pixels are occluded before rendering. Also, HyperZ has no end of problems with artifacts and crashing games.

<<<<<< I wrote: "As for the Radeon, it has early z-clear and z-compression, which do nothing more than save z-buffer bandwidth"

holygrenade wrote:
What else is the purpose of HSR?>>>>>>

HSR (as in deferred texturing, like the Kyro II does) is not just about saving z-buffer bandwidth. Yes, the Kyro II doesn't need an external z-buffer, so obviously it saves lots of bandwidth because of this, but it also saves bandwidth by not having to draw pixels that are never going to be seen on screen, and it saves a massive amount of fillrate by doing this too.

<<<<<<Actually, it is done before building the scene in the framebuffer.>>>>>>>

Explain to me how the Radeon removes hidden pixels before they are rendered. Granted, I haven't looked very closely at HyperZ, but AFAIK it's merely a bandwidth-saving feature (saves maybe 30% bandwidth and that's it) and does not save fillrate.

<<<<<What!!?? Where the hell do you get your info???>>>>>>>

Look at the Aquanox (I think that's what it's called) benchmark in the AnandTech GeForce 3 review. It uses DX8 programmable hardware T&L, and all cards but the GeForce 3 are forced to use software to emulate it.

<<<<<<<LOL!!! thanx for the laugh!>>>>>>>

Reverting to childlike behaviour isn't going to make you right. DX8-specific hardware T&L is programmable; the GeForce and Radeon have fixed-function hardware T&L units based on DX7. Yes, games that are made for DX8 but use normal DX7 hardware T&L will work on the GeForce and Radeon cards, but games specifically designed for DX8 programmable hardware T&L will have to be emulated in software on the Radeon and all of the GeForce line apart from the GeForce 3.

<<<<<<<<The features you mentioned are all good, but already known. Except the 16-bit part: who runs their games at 16-bit?>>>>>>>>

And why don't people play their games in 16-bit? Because most of them have a Radeon or GeForce card that looks like crap in 16-bit.

<<<<<Hmm???>>>>>>

If you'd actually explain what you mean by "Hmm???" then I could answer your question.

<<<<<<14" monitor? No wonder you like the Kyro so much.>>>>>>>

Actually, I never said I had a 14" monitor. But yes, as it happens, if you want good image quality and good speed with a good refresh rate on a 14" monitor, then you'd always be better off buying a Kyro II, especially in its price range.
 
<font color=purple><i>
The Radeon does NOT have hidden surface removal............... it does NOT do HSR before pixels have been rendered and <b>so it has to draw every pixel that's sent to it</b>; the Kyro II :lol: only renders what is seen, no matter what is actually sent to the card to be rendered.

OK, you explain to me how HyperZ does anything to deal with overdraw. AFAIK, HyperZ merely clears the z-buffer faster and compresses the z-buffer; it does not check which pixels are occluded before rendering. Also, HyperZ has no end of problems with artifacts and crashing games.
</i></font color=purple>

<b>Hierarchical Z info</b>, you left out this component of HyperZ:

<font color=blue><b>Hierarchical Z</b>

A major problem that all game developers have to face when designing 3D worlds is known as overdraw. To understand what overdraw is, consider a 3D scene where you are looking through a small window into a room beyond. Some of the walls and objects in the room will be visible through the window, and some will not. Most graphics processors have no way of knowing what parts of the scene will be visible and what parts will be covered until they begin the rendering process. They must then check the depth buffer for each pixel and determine whether to draw it or not. In this process, many pixels will be written to the frame buffer, then overwritten by new pixels from objects that are closer to the viewer. Overdraw is the term for this overwriting of pixels in the frame buffer. A measure of the amount of overdraw in a scene is called depth complexity, which represents the ratio of total pixels rendered to visible pixels. For example, if a scene has a depth complexity of 3, this means 3 times as many pixels were rendered as were actually visible on the screen. This also means that 3 times the fill rate would be needed to display the scene at a given frame rate as would be needed if there was no overdraw.

Overdraw is a major source of inefficiency in 3D games. Depending on the content of a scene, depth complexity can vary from 1 to as high as 10 or more, although values around 2 or 3 are most common. Hierarchical Z represents a new, more efficient way of dealing with overdraw on the graphics chip. <font color=red>It works by examining scene data before it is rendered, to determine which pixels will be visible and which will not. <b>Any pixels that will not be visible to the viewer are discarded and not rendered at all.</b></font color=red> :redface: This dramatically <font color=red><b>reduces overdraw and significantly boosts effective fill rate</b></font color=red> :frown: , with a corresponding improvement in performance.

<A HREF="http://www.ati.com/na/pages/technology/hardware/radeon/performance.html#3" target="_new">http://www.ati.com/na/pages/technology/hardware/radeon/performance.html#3</A>
</font color=blue>
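To put rough numbers on the quoted depth-complexity definition (resolution, frame rate and depth complexity below are illustrative values, not measurements):

```cpp
// Numbers behind the quoted depth-complexity definition: with overdraw, a
// chip must render depthComplexity times as many pixels as end up visible.
#include <cstdio>
int main() {
    double visible = 1024.0 * 768;     // pixels actually seen per frame
    double depthComplexity = 3.0;      // the quote's example value
    double fps = 60.0;
    double fillNeeded = visible * depthComplexity * fps;   // brute force
    double fillIfNoOverdraw = visible * fps;               // perfect HSR
    printf("fill rate with overdraw:  %.0f Mpixels/s\n", fillNeeded / 1e6);      // ~142
    printf("fill rate, zero overdraw: %.0f Mpixels/s\n", fillIfNoOverdraw / 1e6); // ~47
}
```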

😎 It happens to the best of us, hope you don't mind the correction. Plus the link above has the whole scoop on HyperZ on the Radeon. :smile:

<i>I just read your above post, so I edited this one. Please look over what HyperZ is before making your statements. HyperZ works virtually flawlessly, buddy.</i>
 
Warden, nice quotes, but neither of the individuals stated when their software will be available, so I don't know where you came up with the end of the year as some kind of deadline.

Furthermore, I have T&L now in my GeForce 256 SDR, but I doubt that this card will be up to the demands of the software mentioned.

I don't know about you but I don't buy video cards because they have the latest technology. I buy them when my current card doesn't adequately run the games that I want to run.

I bought myself some time by overclocking my GeForce card, but I have reached the point where I am considering something new. Does this mean I will run out and buy a GeForce 3? No. I won't even buy a GeForce 2 Ultra. Why not, you may ask? I don't need them. If I want to run most games at 1024x768, I can do that with a GeForce GTS. If I want to run them at 1280x1024 then I might want a Pro (but I would also have to upgrade my monitor, which I probably won't do).

I don't believe that every game will require hardware T&L after this year, but even if it were true, there are plenty of current games that I haven't got around to buying/playing. I can afford to wait; in fact, I can't afford not to wait. (Pardon the double negative.)
 
Just messin' with ya. When we talk about hardware T&L, it is a process that is carried out by the graphics card alone. As you already know, software T&L is done by the computer's CPU, but the third and final stage of the process is rendering (where the x, y, z coordinates are connected and the triangles are then filled in to create the 3D scene), which is usually done in hardware on the graphics card. So why make the graphics card wait for the finished data from the CPU? The graphics card's onboard geometry engine has the fatter and faster pipeline for the data. Heck, this is just like back in the days of AGP vs. PCI! But don't forget that the T&L process requires an extensive amount of floating-point math. The primary purpose of a graphics card is to take as much load off the CPU as possible.
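A minimal sketch of that third stage, with made-up screen-space vertices: rasterization walks candidate pixels and "fills in" the ones inside the triangle. Real hardware does this massively in parallel, but the half-space test is the same idea:

```cpp
// Rasterization: after T&L delivers projected, screen-space vertices, the
// triangle is filled by testing each candidate pixel against its three edges.
#include <cstdio>

struct Pt { float x, y; };

// Signed-area test: which side of edge ab the point p lies on.
float edge(const Pt& a, const Pt& b, const Pt& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

int main() {
    Pt v0 = {1, 1}, v1 = {8, 2}, v2 = {4, 7};   // a transformed, projected triangle
    int minX = 1, maxX = 8, minY = 1, maxY = 7; // its screen-space bounding box
    for (int y = minY; y <= maxY; ++y) {        // walk candidate pixels
        for (int x = minX; x <= maxX; ++x) {
            Pt p = { x + 0.5f, y + 0.5f };      // sample at the pixel centre
            bool inside = edge(v0, v1, p) >= 0 && edge(v1, v2, p) >= 0 &&
                          edge(v2, v0, p) >= 0;
            putchar(inside ? '#' : '.');        // "fill in" the triangle
        }
        putchar('\n');
    }
}
```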

=
<font color=green>Running Celery 400 since 1999.</font color=green>
 
On the contrary, I welcome the info you posted. However, this feature isn't working in current games; I'm pretty sure about that (I know the other features, like early z-clear and z-compression, work in lots of games). I'm pretty sure the game needs to be written in a certain way to take advantage of Hierarchical Z, and even then it's not even 50% as efficient as the Kyro II's HSR, which gets rid of 100% of opaque overdraw in all games.

Thanks again for the info

On the point of how well HyperZ works: it certainly can't be used in every game; some games simply disagree with it, or that's my experience anyway. I own a Radeon (just got it cheap for my reviews), and when I got it I played with it for a week. While in lots of games HyperZ was fine, in other games it had problems ranging from small artifacts to crashes. My main point is that as a feature to make a card more efficient, deferred tile-based rendering is incredibly efficient and doesn't cause artifacts or crashing in any games, while HyperZ is nowhere near as efficient and isn't as compatible with all games. For evidence of how efficient it is compared with deferred tile-based rendering, just look at the specs of the Radeon DDR and the Kyro II:

Radeon DDR

A 185 MHz (I believe that's about right?) chip with 2 pixel pipes and 3 TMUs on each pipe, giving a pixel fillrate of 370 Mpixels/s and a texel fillrate of 1.1 Gtexels/s.

The Radeon has DDR RAM running at 185 MHz???, giving a memory bandwidth of 5.9 GB/s.

The Kyro II chip runs at 175 MHz with 2 pixel pipes and 1 TMU per pipe, giving a pixel fillrate of 350 Mpixels/s and a texel fillrate of 350 Mtexels/s.

The Kyro II has SDR RAM running at 175 MHz, giving a memory bandwidth of 2.8 GB/s.

Look at those specs and you'd think the Radeon DDR would destroy the Kyro II. But then look at the Anand Kyro II review, and in fact the Kyro II destroys the Radeon DDR in some benches and simply beats it in others.
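A quick sanity check of those figures. The 128-bit memory bus widths are my assumption (not stated above), but they reproduce the quoted bandwidth numbers; DDR transfers twice per clock, and a 128-bit bus moves 16 bytes per transfer:

```cpp
// Sanity-checking the quoted specs from clock, pipes, TMUs and bus width.
#include <cstdio>
int main() {
    // Radeon DDR: 185 MHz core, 2 pipes x 3 TMUs, 185 MHz DDR on 128 bits.
    printf("Radeon pixel fill: %.0f Mpixels/s\n", 185.0 * 2);             // 370
    printf("Radeon texel fill: %.2f Gtexels/s\n", 185.0 * 2 * 3 / 1000);  // ~1.1
    printf("Radeon bandwidth:  %.1f GB/s\n", 185e6 * 2 * 16 / 1e9);       // 5.9
    // Kyro II: 175 MHz core, 2 pipes x 1 TMU, 175 MHz SDR on 128 bits.
    printf("Kyro II pixel fill: %.0f Mpixels/s\n", 175.0 * 2);            // 350
    printf("Kyro II bandwidth:  %.1f GB/s\n", 175e6 * 16 / 1e9);          // 2.8
}
```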
 
I think the only thing that needs to be done is that the polys be ordered from front to back when modelling the geometry (I don't think the actual code has to be reworked much). Generally the polys are set up this way, so not much thought has to go into it, but sometimes for complicated objects it doesn't work out this way, so for optimal performance the objects must be reworked, which probably does not happen all that often.
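A minimal sketch of that kind of reordering; the scene structures are invented. Submitting opaque geometry roughly nearest-first lets z-testing hardware reject occluded pixels before shading them:

```cpp
// Sort opaque draws by view-space depth, nearest first, so later (farther)
// geometry tends to fail the z-test before any shading work is spent on it.
#include <algorithm>
#include <cstdio>
#include <vector>

struct DrawItem { const char* name; float viewDepth; }; // distance from camera

int main() {
    std::vector<DrawItem> scene = { {"far wall", 50.0f}, {"player", 5.0f},
                                    {"crate", 12.0f} };
    std::sort(scene.begin(), scene.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  return a.viewDepth < b.viewDepth;     // nearest first
              });
    for (const DrawItem& d : scene)
        printf("draw %s (depth %.0f)\n", d.name, d.viewDepth);
    // A deferred tile renderer gets the same rejection with no sorting at all.
}
```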
 
If Hierarchical Z is new information for you, how do you know that it isn't working in current games? What is your source for that conclusion? A great overdraw benchmark is VillageMark: my Radeon with Hierarchical Z turned off is 28% slower than when it is turned on. Default benchmark, 1024x768x16: 40 FPS off, 51 FPS on. This indicates a very effective enhancement that works.
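For reference, the quoted ~28% works out as the FPS delta measured against the Hierarchical-Z-off baseline:

```cpp
// Where the ~28% figure comes from, using the two frame rates quoted above.
#include <cstdio>
int main() {
    double off = 40.0, on = 51.0;  // VillageMark FPS, Hierarchical Z off/on
    printf("gain from Hierarchical Z: %.1f%%\n", (on - off) / off * 100); // 27.5
}
```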

<P ID="edit"><FONT SIZE=-1><EM>Edited by noko on 04/11/01 05:16 PM.</EM></FONT></P>
 
Isn't VillageMark from the Kyro folks?
I don't think my GeForce 256 will do too well in it.


<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and dispair!"</font color=red>
 
I didn't say Hierarchical Z was new information to me; I said I didn't know that much about HyperZ, and I didn't know HyperZ included Hierarchical Z. I've seen many discussions on Beyond3D about Hierarchical Z, and while it's not of great interest to me (so I didn't read all the threads), I know that it doesn't just work with every game. The reason I assume it's not working in current games is that if it were, it should make the Radeon incredibly powerful. Look at the Kyro 1: its raw specs are those of a TNT2 (and not even an Ultra), and yet it beats a GeForce2 MX and a Radeon SDR. That's because of full HSR (deferred tile-based rendering). When you say you get a different score in VillageMark with Hierarchical Z disabled and enabled, do you actually mean Hierarchical Z, or do you mean HyperZ (of which Hierarchical Z is just one feature)? If enabling Hierarchical Z itself (not HyperZ) gives you a performance increase in VillageMark, then it obviously does a small amount of HSR, but not in all games, and it's certainly not full HSR like the Kyro 1 and II have. But again, thanks for the info; I always like to learn new stuff, and I didn't know HyperZ included Hierarchical Z. I'll pop my Radeon back in my PC soon and give a few games a try with Hierarchical Z on and off, and see if I can see a difference in the majority of games.

BTW, just as a reference, the Kyro 1 gets 85 FPS in VillageMark. I don't even want to think about what the Kyro II would get :)