What graphics card is the Xbox 360 equivalent to?

Tessellation isn't just a buzzword; it's been reported that it's going to be a requirement of DX11, and since both camps seem able to do it (unlike when MS dropped the virtualisation because Nvidia couldn't do it), I see no reason for it to be far from the truth. It's true both camps have played with it before: ATI first released it in the form of "TruForm" on the 8500, but it died a death because of the limited support (Serious Sam and Unreal Tournament 2003 were a couple that did use it), not for the idea and technology but because Nvidia were using a different form of it, though I can't remember what that was called. But now it seems there will be the one standard this time, so developers will be able to implement it without having to worry about compatibility.
Mactronix
 

Oh, yes, I don't deny that it exists; I just consider it of incredibly limited utility for many games, particularly on consoles. From what I've seen myself (for instance, I've run Morrowind using N-Patches), the increased T&L cost is comparable to what you'd get if you'd just used higher-poly models in the first place; it just appears to perform the increase with a smaller increase in memory usage than the higher-poly models would.
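
(To put rough numbers on that trade-off, here's a sketch in C; the vertex counts and the position+normal vertex format are made up for illustration, but they show why the memory saving is real even though the T&L cost isn't reduced.)

```c
#include <stdio.h>

int main(void)
{
    /* Made-up figures for illustration: a 2,000-vertex base mesh that
     * N-Patch tessellation expands to roughly the same ~32,000 vertices
     * a hand-built high-poly model would ship with. One vertex here is
     * position + normal: 6 floats = 24 bytes. */
    const int base_verts     = 2000;   /* all the n-patched model stores */
    const int final_verts    = 32000;  /* all the high-poly model stores */
    const int bytes_per_vert = 24;

    printf("n-patched model in VRAM: %d KB\n", base_verts  * bytes_per_vert / 1024);
    printf("high-poly model in VRAM: %d KB\n", final_verts * bytes_per_vert / 1024);

    /* Both still push ~32,000 vertices through T&L every frame, so the
     * transform cost is the same either way; only the storage differs. */
    return 0;
}
```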

I'll grant that it does have a potential use for consoles: backwards compatibility. Morrowind's models look tolerable at the original Xbox's 640x480, but at 1280x720 their problems become more apparent, as do the limitations of the Gouraud shading model used. Applying tessellation would be one thing that would help make it look much better, on top of the 1280x720 resolution and 4x AA.


Which was perhaps the main reason I tended not to use it...


Well, mandated it, you mean. 😉

However, I can't quite say that this is the case; tessellation isn't mentioned at all in anything I've read on DirectX 10.1.
 
That's the one, TGGA: Quintic. I couldn't think of the name for love nor money earlier, and I was a bit rushed or I would have looked it up. As for DX10.1 having tessellation support, I can't remember reading it in the white paper; the way I read it, it was part of the DX10.1 future plans, and I guess you could read that as actually being 10.1/10.2, but I was given to believe that it was meant to mean 11. I may be wrong; you know a heck of a lot more than me.
Mactronix
 
Yeah, I'll have to check my M$ stuff at home and see if it was codified. I know it was in the pre-release info on what to 'expect from DX10.1', but like the memory virtualization in DX10, it may have been delayed and pushed off until later.
 

That is a hardcore review. The more I read it, the more I realize that I know next to nothing about how graphics processors operate 🙁
 

I'd note that I explicitly mentioned the T&L cost, which actually remains the same, for a simple reason: it takes one stream processor/T&L unit one clock cycle to handle each vertex, or four (3 vertices + 1 normal) for each polygon.
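
(Taking that cost model at face value, a sketch with made-up polygon, clock, and SP counts; the point is only that the T&L bill depends on the final polygon count, not on where the polygons came from.)

```c
#include <stdio.h>

int main(void)
{
    /* Sketch of the cost model above: one clock per vertex, i.e. four
     * clocks per polygon (3 vertices + 1 normal). Every figure below
     * is made up for illustration, not measured. */
    const double polys           = 500000.0; /* polygons per frame    */
    const double clocks_per_poly = 4.0;      /* 3 vertices + 1 normal */
    const double sp_count        = 64.0;     /* stream processors     */
    const double gpu_mhz         = 600.0;    /* core clock            */

    double ms = polys * clocks_per_poly / sp_count / (gpu_mhz * 1000.0);
    printf("~%.3f ms of T&L per frame\n", ms);

    /* The same bill comes due whether those polygons were tessellated
     * on the chip or loaded from a pre-built high-poly model. */
    return 0;
}
```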

However, I do note that there is an undeniable improvement in terms of MEMORY BANDWIDTH there; it's akin to, for instance, texture compression: a standard 6:1 compression will cut the bandwidth and VRAM used by texture sampling by 83.3%, but it will still place the same strain on the TMUs.
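
(The arithmetic behind that 83.3% figure, as a quick sketch; real formats like DXT1 are fixed-ratio block compressors, but only the ratio matters here.)

```c
#include <stdio.h>

int main(void)
{
    /* At a fixed N:1 compression ratio, each texture sample moves 1/N
     * of the data, so the saving is 1 - 1/N. For N = 6 that is 83.3%.
     * The TMU still does one fetch per sample, compressed or not. */
    const double ratio = 6.0;
    printf("bandwidth/VRAM saved: %.1f%%\n", (1.0 - 1.0 / ratio) * 100.0);
    return 0;
}
```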

The reason why I disregard it, at least for PCs, is that unlike consoles, where design constraints result in choices that yield less memory bandwidth, PC discrete video cards are lacking for neither bandwidth nor memory. This was quite apparent when AMD cut the Radeon HD 3870 to a 256-bit memory interface from the 512-bit one seen in the HD 2900 XT, yet it manages to out-perform it across the board!


Just hang around and read lots more if you want to know more. That's how all of *us* got there, anyway. 😉
 

I'd probably have to be some sort of an engineering or math major to understand the low-level stuff that goes on inside of a GPU. The stuff that gets talked about in "normal" reviews and even most of the AnandTech reviews I can follow pretty well, but that insanity that TGGA posted went way over my head.
 

Well, I'd just say that it's worth noting that I don't have a major in either subject, and in fact, math classes frustrate me. 😛
 
To Homerdog
As nottheking said, just stick with it if you're interested in knowing more. Some of it seems really technical and can seem quite daunting when you see a bunch of words on the screen that may as well be in a foreign language, but most of the guys that really know their stuff are more than happy to answer any questions you may have.
Personally, I have learnt a lot during ongoing thread discussions with various people, certainly more than you can learn from books or some online reviews; you can't beat real-world experience.
Mactronix
 
I do electronic engineering, and that's why I always found R600 so interesting. The ideas behind it are very innovative, to the point of a complete miscalculation of the software it was meant to run. ATI thought that all the new game engines were going to use massive amounts of shaders, which the stream processors would spit out for fun. It turns out, though, that current software isn't as shader-intensive as they thought, so instead of lots of parallel stream processing, Nvidia's brute-force way gives better performance. The biggest oversight is the lack of a hardware AA resolve, hence the big performance hit with AA.
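
(A loose illustration of what "no hardware AA resolve" means in practice: the driver has to run something like this box filter through the shader core, stealing ALU time from the game, where other designs resolve in fixed-function hardware. Plain C over a made-up buffer layout, purely illustrative, not ATI's actual code.)

```c
/* Illustrative only: averaging the MSAA samples of each pixel in
 * software, the way a shader-based resolve has to, instead of
 * fixed-function hardware doing it on the way out. */
typedef struct { float r, g, b; } rgb;

void resolve_msaa(const rgb *samples, rgb *out,
                  int pixels, int samples_per_pixel)
{
    for (int p = 0; p < pixels; p++) {
        rgb acc = { 0.0f, 0.0f, 0.0f };
        for (int s = 0; s < samples_per_pixel; s++) {
            const rgb *v = &samples[p * samples_per_pixel + s];
            acc.r += v->r; acc.g += v->g; acc.b += v->b;
        }
        /* Simple box filter: every extra sample costs more ALU time. */
        out[p].r = acc.r / samples_per_pixel;
        out[p].g = acc.g / samples_per_pixel;
        out[p].b = acc.b / samples_per_pixel;
    }
}
```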

Basically, that's why the improvements have been quite big from just the driver level: R600/RV670 are much more tunable at a driver level.

Theoretically, a game engine specifically designed around R600/RV670 (like the Ruby demo) would leave Nvidia's cards standing, but that would mean cutting out most of your audience, so it wouldn't make sense.

That article, Homerdog, is a good way to learn all the stuff that goes on. Also find the original G80 article.
 


Hmmm, the 2900 XT is not outperformed by the 3870 quite across the board; there are a few games where the 2900 comes out even or slightly ahead. The 3870 still has something like 70% of the 2900's bandwidth despite the halved bus, thanks to using GDDR4, which has recently become a lot cheaper than it used to be. Plus, the 3870 has been tweaked somewhat to utilise bandwidth better, and it is clocked slightly higher than the 2900.
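
(That ~70% figure roughly checks out if you run the numbers with the usual reference clocks; treat the clocks below as approximate.)

```c
#include <stdio.h>

/* Bandwidth = (bus width in bytes) x (effective data rate). */
static double gb_per_s(int bus_bits, double effective_mtps)
{
    return bus_bits / 8.0 * effective_mtps / 1000.0;
}

int main(void)
{
    /* Reference clocks, approximate: 828 MHz GDDR3 (1656 MT/s) on a
     * 512-bit bus vs 1125 MHz GDDR4 (2250 MT/s) on a 256-bit bus. */
    double hd2900xt = gb_per_s(512, 1656.0); /* ~106 GB/s */
    double hd3870   = gb_per_s(256, 2250.0); /* ~72 GB/s  */

    printf("HD 2900 XT: %.1f GB/s\n", hd2900xt);
    printf("HD 3870:    %.1f GB/s\n", hd3870);
    printf("ratio:      %.0f%%\n", hd3870 / hd2900xt * 100.0); /* ~68% */
    return 0;
}
```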
 
Here's a word to the wise: build a computer to be faster than a console, then relax, because most games are programmed with consoles in mind. Your computer will be a great gaming rig for as long as the consoles stick around. So when are the new Xbox and PlayStation coming out?

Because I think the GeForce 8800 or 9800 series are the best for games at this point in time; anything more is just candy.
 


Hey, I was The Grape Ape a long time ago, in a galaxy far, far away. Anyway, I'm still waiting for games to use DX10, and you're talking DX11? I think it's a marketing ploy.
 
GA's been my nickname since I was a kid in the era of the actual cartoon.

Anywhoo, the thread's about a year and a half old (lotsa necromancy about lately), and there are games that use DX10 (some more than most); the reason we talk of DX10.1 and DX11 is the potential they bring. Sometimes it's used, sometimes it's not; people just hope it's something they don't have to pick or choose between (like performance versus features in the R8500 / GF3 and GF6800 / X800 decisions).

As for the original discussion, the feature set is of course relevant to what a non-PC card relates to, especially when it's not built on a PC-based GPU [unlike the RSX, which is like a GF7900GT with a few disabilities].

I have hope for DX11, but only time will tell if it'll have a smoother adoption than some previous iterations.

Just remember, what you replied to was our view of things about 19 months ago, so not everything unfolded as we could've guessed or hoped.
 
The architecture is completely different, and believe it or not, Microsoft gave VERY little credit to ATI, not even placing the ATI emblem on the chip 🙁

Just to book-end this thread, here again is some of the best information on the architecture of the X360's GPU, codenamed R500/Xenos:

http://www.beyond3d.com/content/articles/4/

And a good thread on the technical differences between the R500/Xenos and the RSX:

http://forum.beyond3d.com/showthread.php?t=21501

And thus ends the thread.
 