What card can give graphics of the same capability as an Xbox 360?
What resolution is that Xbox 360 running at?
- HDTV: 1280 x 720 = 921,600 pixels per frame if not interlaced
- TFT/CRT: 1280 x 1024 = 1,310,720 pixels per frame
PC GPUs typically render to a 4:3 (or, at 1280 x 1024, 5:4) screen: the same width in pixels, but taller, so less looking up and down. They also render these frames with about 42% more pixels per frame (or significantly more, roughly 2.8x the pixels per frame of the 'HDTV' console, if that HDTV output is 720 interlaced that is 😛).
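For anyone who wants to check the numbers, here is the pixel-per-frame arithmetic from the figures above as a minimal Python sketch (the interlaced case simply assumes half the lines per field):

```python
# Pixels per frame for the resolutions discussed above.
hdtv_720p = 1280 * 720             # 921,600 pixels per frame (progressive)
hdtv_720i_field = 1280 * 720 // 2  # 460,800 pixels per field if interlaced
pc_1280x1024 = 1280 * 1024         # 1,310,720 pixels per frame

print(pc_1280x1024 / hdtv_720p)        # ~1.42 -> about 42% more pixels than 720p
print(pc_1280x1024 / hdtv_720i_field)  # ~2.84 -> nearly 3x a 720-interlaced field
```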
The console's GPU is advanced, true, with all the effects etc. it can do, but the workload it has to do for 'HDTV' video output is not that high.
Can comparisons really be made?
Heck, some people run their PCs at 2560 x 1920 = 5.333x the pixels per frame of the console, at higher frame rates and refresh rates than the consoles will output.
That is over 5 times the detail!
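Factoring refresh rate into it makes the gap even bigger. The sketch below assumes, purely for illustration, an 85 Hz CRT refresh against a 60 Hz console output; those two refresh figures are my assumptions, not numbers from the post above:

```python
# Rough pixel throughput: pixels per frame x frames shown per second.
# 85 Hz (CRT) and 60 Hz (console) are illustrative assumptions only.
console_pixels_per_sec = 1280 * 720 * 60   # ~55.3 million pixels/sec
pc_crt_pixels_per_sec = 2560 * 1920 * 85   # ~417.8 million pixels/sec

print(pc_crt_pixels_per_sec / console_pixels_per_sec)  # ~7.6x the raw pixel throughput
```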
Until HDTV devices can typically output at similar resolutions, decent PC 3D video cards will always be more powerful.
I think the term is called 'advertising'.
To compare a PC GPU to a console GPU, both systems need to run at the same resolution, FSAA, texture quality [including texture resolution], etc.
This would mean running the PC GPU at only 1280 x 720, which is likely to give the PC up to a 42% boost in performance (vs 1280 x 1024) in some scenes. It is an equal comparison though; people with PCs just like to run at higher resolutions than the consoles offer.
I'd put the console GPU somewhere between the GeForce 7600 GT and 7900 GTX, maybe GeForce 7900 GT level.
How would a Radeon X1800 XL to X1900 XTX, vs a GeForce 7600 GT to 7900 GTX, vs an Xbox 360 stack up, if all were running at 1280 x 720 with the same FSAA?
The PCs would kick its ass at such a low resolution, I'll say that much.
PS: Decent PC video subsystems are fast approaching 128 GB/sec.
😛 Even my current 'low end' Radeon X800 XL has a 32 GB/sec video memory interface available to it. 8)
GPUs have had cache for ages btw, just like that eDRAM stuff, except it isn't advertised as some BS speed figure. What gets quoted is [effective memory clock in MHz] x [memory bus width in bits] / 8 / 1024 = GB/sec of video memory bandwidth. The companies (ATI and nVidia) do not like disclosing all the details of their GPUs, only some of the more basic ones (eg: enough to let people calculate Gpixel/sec fill-rate, etc).
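As a worked example of that formula, here is a small sketch; the ~980 MHz effective GDDR3 clock and 256-bit bus are the commonly quoted Radeon X800 XL specs, not figures from the post itself:

```python
# Bandwidth = effective memory clock (MHz) x bus width (bits) / 8, then scaled to GB/sec.
# Vendors usually divide by 1000; the stricter binary convention divides by 1024.
def vram_bandwidth_gb_per_sec(effective_clock_mhz, bus_width_bits, divisor=1000):
    mb_per_sec = effective_clock_mhz * bus_width_bits / 8
    return mb_per_sec / divisor

# Radeon X800 XL (assumed specs): 490 MHz GDDR3 = 980 MHz effective, 256-bit bus.
print(vram_bandwidth_gb_per_sec(980, 256))        # ~31.4 -> the '32 GB/sec' class
print(vram_bandwidth_gb_per_sec(980, 256, 1024))  # ~30.6 with the /1024 convention
```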
ATI actually made their cache hit-rate stats (for the X1900 GPU) available to the public. In static, predictable, non-interactive (non-random input) tests such as 3DMark the results were very good, but in games the hit rate was lower. Of course it still performed damn well in games, but it could prefetch far more accurately for timedemos and 3DMark; they knew this, and they knew it would boost sales of the GPU.
😛
I'd also suggest people have a read of: http://www.microsoft.com/xna/
1080i is about the equivalent of 1280 x 720 btw, and using 3:2 deblocking it is rendered to a screen with 720 vertical lines. Surprised? 8O Well, not really: it saves on space and bitrate (which is more limited over the air than through cables), as even though it only updates every other line each frame, the deblocking means every line actually gets an update. Basic high-school mathematics, and some rather smart trickery to fool consumers.
1080p was actually designed for 864-pixel-'tall' playback on PCs. Using 5:4 deblocking, when rendered on a PC screen 864 pixels high, it looks crisper than rendering 1080 pixels high with just standard filtering. The HD formats were designed this way, as it lets them look good regardless of the output device, even on older PC monitors, as well as on a large-screen TV, which you sit further back from than a PC monitor.
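Just to show the line arithmetic behind those two ratios, here is a minimal sketch; it only uses the 3:2 and 5:4 figures already mentioned above:

```python
# Vertical line counts implied by the scaling ratios mentioned above.
source_lines = 1080

print(source_lines * 2 / 3)  # 720.0 -> the 3:2 case lands exactly on a 720-line screen
print(source_lines * 4 / 5)  # 864.0 -> the 5:4 case lands exactly on 864 lines
```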
It was surprisingly simple, and smart, mathematically speaking, and it eliminates the drawbacks of working with interlaced footage on PCs without having to convert back and forth, thus avoiding detail loss at each step when working in lossy formats, as that conversion isn't 'required' anymore.