Why did the comments have to turn into a 360 vs. PS3 flamefest? Hardly any of you know much of anything about hardware engineering, so you're mostly just going on hearsay. Worse yet, it seems none of you understand the concept of "engineering": making use of limited resources to design a solution that fits a SPECIFIC need. I guess most of you have been spoiled by the (false) idea that you can have a one-size-fits-all piece of hardware, that you can have your cake and eat it, too.
The biggest thing is that the CPU has little to do with graphics compared to the GPU. PC enthusiasts here have known this for years, though I can understand if most of you console-only gamers don't. As it happens, the 360 and PS3 actually have relatively close GPUs: the PS3 has a slightly cut-down G71, while the 360 has a custom ATI/AMD GPU that sits roughly halfway between an X1k and an HD 2k series card. The main differences are as follows:
- The PS3 has more memory bandwidth; it has 22.4 GB/sec exclusive to the GPU, while the 360 has to share its GDDR3 bandwidth with the CPU. On the flipside, the 360's GPU gets direct access to all 512MB of memory, while the PS3 has to indirectly access the CPU's memory if it needs more than 256MB.
- The PS3's GPU cannot do AA and HDR at the same time. As a result, most PS3 games use HDR but don't touch AA. (FF XIII is an exception: it skips HDR and uses AA instead.)
- Because the PS3 isn't doing both, it's also not spending power on both; that headroom goes toward making its major games run at a native 720p, while comparable games on the 360 often render at 576p, 600p, or 640p. (See the quick arithmetic after this list for what that gap means in pixels.)
- Some sports games run higher than 720p natively on BOTH consoles. However, none of the "top-shelf" major games (be it GTA 4, MGS 4, Halo 3/ODST/Reach, Uncharted, Fallout: New Vegas, etc.) run higher than 720p. Playing on a 1080i/p TV just results in the game being "stretched" (upscaled) to fit.
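To put that resolution gap in perspective: at a fixed 16:9 aspect ratio, the number of pixels per frame scales with the square of the vertical resolution, so 640p is only about 79% of the pixels of 720p, and 576p only about 64%. Here's a quick back-of-the-envelope sketch of that arithmetic in C (the widths are approximated from the 16:9 ratio; actual games pick their own exact render targets):
[code]
#include <stdio.h>

/* Rough fill-cost comparison: at a fixed 16:9 aspect ratio, the pixel
 * count of a frame scales with the square of its vertical resolution.
 * The heights below are the common sub-HD render targets mentioned
 * above; exact widths used by real games vary slightly. */
int main(void)
{
    const int heights[] = { 576, 600, 640, 720 };
    const double ref = 720.0;                   /* 720p as the baseline */

    for (int i = 0; i < 4; ++i) {
        double h = heights[i];
        double w = h * 16.0 / 9.0;              /* approximate 16:9 width */
        double rel = (h / ref) * (h / ref);     /* fraction of 720p's pixels */
        printf("%4.0fp: ~%4.0f x %4.0f = %8.0f px (%3.0f%% of 720p)\n",
               h, w, h, w * h, rel * 100.0);
    }
    return 0;
}
[/code]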
Overall, the PS3 and 360 are rather close for consoles; no two consoles have been this close in capability in quite a long while; perhaps the SNES and Sega Genesis were close, IF you ignored the SNES' huge advantage in audio.
The idea that something can "use a console to its max" is really a bit misleading: thanks to decades of engineering expertise being applied to consoles, MOST top-shelf games already "use every bit of a console's power." For the most part, if a game looks better, it's because the developers found a "better" way to re-balance where resources are spent: typically by cutting quality in the parts you won't notice to boost it where you will. The machine is doing about the same work for about the same results; the results are just arranged in a way you'll notice more.
[citation][nom]Flameout[/nom]i've seen games on xbox360 and they look the same as my ps3. what's so great about cell?[/citation]
Well, when's the last time you watched a Blu-ray movie on your Xbox 360? Most games don't even use most of the SPEs on the Cell. However, they ARE used when you're playing a Blu-ray movie. The Cell in the PS3 was an application-specific chip: designed for the PS3 and nothing more. The PPE (the main core) is used 100% for games, with light usage of the SPEs (the seven sub-cores). For movies, though, the main core is overkill and largely spun down, while the SPEs handle the decoding, since they provide the raw math throughput that video playback needs far more efficiently.
[citation][nom]joe gamer[/nom]Asymmetrical processing will never be able to compete with symmetrical processing. When you know that all of the CPU's cores are identical programming a multi-threaded app is not difficult, you simply balance the load equally. But an architecture with an asymmetrical processing load?[/citation]
Actually, we've been seeing asymmetrical CPU architectures in use for years and years and years, and being programmed for pretty darn well. They're called video cards.
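And the way you program them isn't by trying to balance identical threads: the "fat" general-purpose core keeps the branchy control work for itself and farms the math-heavy, data-parallel part out in fixed chunks to the "lean" throughput cores, which is the same arrangement as the PPE feeding the SPEs above, or a host CPU feeding a GPU. Here's a rough sketch of that pattern using ordinary pthreads as stand-ins for the lean cores (the worker count, chunk split, and toy math are all placeholders, not how any real SPE or shader program is written):
[code]
#include <pthread.h>
#include <stdio.h>

#define N        (1 << 20)   /* elements of math-heavy work        */
#define WORKERS  4           /* stand-ins for SPEs / shader cores  */

static float data[N];

/* Each "lean" worker grinds through one contiguous chunk of the array. */
struct chunk { int begin, end; };

static void *worker(void *arg)
{
    struct chunk *c = arg;
    for (int i = c->begin; i < c->end; ++i)
        data[i] = data[i] * 2.0f + 1.0f;   /* placeholder for real math */
    return NULL;
}

int main(void)
{
    pthread_t    tid[WORKERS];
    struct chunk chunks[WORKERS];

    for (int i = 0; i < N; ++i)            /* set up some input */
        data[i] = (float)i;

    /* The "fat" core (think PPE, or the host CPU feeding a GPU) does the
     * branchy control work: partitioning, dispatch, and synchronization. */
    for (int w = 0; w < WORKERS; ++w) {
        chunks[w].begin = w * (N / WORKERS);
        chunks[w].end   = (w + 1) * (N / WORKERS);
        pthread_create(&tid[w], NULL, worker, &chunks[w]);
    }
    for (int w = 0; w < WORKERS; ++w)
        pthread_join(tid[w], NULL);

    printf("last element: %f\n", data[N - 1]);
    return 0;
}
[/code]
The point is that the coordinator decides the split up front, so the "asymmetry" never has to be balanced at run time at all; the narrow cores just crunch whatever they're handed.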
[citation][nom]joe gamer[/nom]If the Cell processor was really worth a damn don't you think that someone somewhere outside of the PS3 would be using it for something/anything?[/citation]
Funny you should mention that... since the same sort of design (though modified for native double-precision, of course) powered IBM's Roadrunner, the world's most powerful supercomputer from 2008 to 2009.
The Cell design DOES have its definite uses. In a previous post around here I detailed how it's actually perhaps the most efficient design for supercomputing in terms of performance-per-watt: it manages to be a more math-heavy (and hence less instruction-heavy) design than conventional symmetrical CPUs like x86 designs, while still running close to its theoretical peak in real-world applications, the way those x86 chips do. A PowerXCell 8i reaches about 75-80% of its theoretical maximum in real-world tests, which is on a par with major x86 CPUs like Opteron and Xeon, and FAR better than the 30-35% that GPUs typically manage. Hence, in spite of the Cell's lower THEORETICAL power, it ends up better suited for general-purpose computing applications than a GPGPU.
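If you want the sustained-vs-peak point in plain numbers, it's just peak times efficiency. The GFLOPS figures below are made-up placeholders purely to show the shape of the comparison; only the efficiency percentages come from the discussion above:
[code]
#include <stdio.h>

/* Sustained throughput = theoretical peak x real-world efficiency.
 * The peak numbers here are hypothetical placeholders; the efficiency
 * figures are the ~75-80% (Cell-style) vs ~30-35% (GPGPU) ranges
 * discussed above. */
int main(void)
{
    const double cell_peak = 100.0, cell_eff = 0.78;  /* hypothetical */
    const double gpu_peak  = 200.0, gpu_eff  = 0.33;  /* hypothetical */

    printf("Cell-style chip: %6.1f GFLOPS peak -> %6.1f GFLOPS sustained\n",
           cell_peak, cell_peak * cell_eff);
    printf("GPGPU:           %6.1f GFLOPS peak -> %6.1f GFLOPS sustained\n",
           gpu_peak,  gpu_peak  * gpu_eff);
    return 0;
}
[/code]
With numbers like those, the chip with half the theoretical peak still comes out ahead in sustained throughput, which is the whole argument in a nutshell.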