viridiancrystal :
Again, single player. Most people don't buy a multiplayer-based game to only play single player.
But it's BENCHED as an SP game. That's the point.
I don't know of very many games that use more than 1.5GB of system RAM as it is.
Not compiled as Large Address Aware. Even on Win64, Win32 apps that are NOT compiled as Large Address Aware are limited to 2GB of address space. Throw in overhead for the Win32 API, code space, and the like, and there's your ~1.5GB of RAM usage. And that limits what you can do.
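As a point of reference, here's a minimal sketch (my own illustration, nothing to do with any particular game's code) of how a process can see the ceiling it's working under. GlobalMemoryStatusEx reports the user-mode virtual address space available to the calling process: a 32-bit build without the /LARGEADDRESSAWARE linker flag reports roughly 2GB even on 64-bit Windows, and roughly 4GB with the flag set on Win64.

```c
/* Minimal sketch: print the virtual address space this process can use.
 * A 32-bit build without /LARGEADDRESSAWARE sees ~2GB even on Win64;
 * with the flag set it sees ~4GB on Win64. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);   /* must be set before the call */

    if (GlobalMemoryStatusEx(&ms)) {
        printf("User-mode virtual address space: %.2f GB\n",
               ms.ullTotalVirtual / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```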
While devs CAN compile a native 64-bit executable, you don't want significantly different feature sets between 32-bit and 64-bit, so the lowest common denominator drives the game design. So you either have to stream in the map details as the player moves (which eats performance), or pre-load it all up front (which limits what you can do).
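For what it's worth, here's a rough sketch of what the streaming option tends to look like. This is a generic illustration of my own, not any engine's actual code, and every name in it is invented:

```c
/* Purely illustrative: the "stream the map as the player moves" option.
 * The world is split into chunks; each update we load the chunks near the
 * player and evict the rest, so RAM stays under the cap at the cost of
 * disk I/O during gameplay. */
#include <math.h>
#include <stdbool.h>

#define STREAM_RADIUS 128.0f          /* world units kept resident around the player */

typedef struct {
    float center_x, center_z;         /* chunk position in world space */
    bool  resident;                   /* currently loaded in RAM? */
} Chunk;

static void chunk_load(Chunk *c)   { (void)c; /* stub: disk read + decompress */ }
static void chunk_unload(Chunk *c) { (void)c; /* stub: free the chunk's assets */ }

/* Called once per frame (or from a streaming thread) with the player position. */
void stream_world(Chunk *chunks, int count, float player_x, float player_z)
{
    for (int i = 0; i < count; ++i) {
        float dx = chunks[i].center_x - player_x;
        float dz = chunks[i].center_z - player_z;
        bool  wanted = sqrtf(dx * dx + dz * dz) <= STREAM_RADIUS;

        if (wanted && !chunks[i].resident) {
            chunk_load(&chunks[i]);    /* this I/O during play is the performance hit */
            chunks[i].resident = true;
        } else if (!wanted && chunks[i].resident) {
            chunk_unload(&chunks[i]);  /* eviction is what keeps RAM under the cap */
            chunks[i].resident = false;
        }
    }
}
```

The pre-load option just does all of that chunk_load work up front, which is simpler and faster at runtime, but then the whole map has to fit inside that ~2GB window.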
Hence why Win32 has to die. You aren't going to see significant advances in games until it does.
I'm quick to criticize other devs, but I have to praise Valve for their coding work. HL2 came out in 2004, and it scales across 4 cores almost perfectly. L4D2, Portal 2, and TF2 scale across 6 pretty well.
And I'm REALLY interested to see how they pulled that off.
EDIT
--------------------------
noob2222 :
read what I said about BF3 .. in single player yes, in multiplayer NO WAY. I have tested this with my 8120 cut down to 2 modules. IT'S NOT EVEN REMOTELY CLOSE.
But no one benches BF3 MP. That's the point. Benching BF3 SP for CPU performance is silly.
latency IS fps
frames per second
seconds per frame
SAME THING, just inverted. Longer latency = less FPS. 16.7 milliseconds per frame = 60 frames per second. 32 milliseconds on one frame = an instantaneous FPS of 30 for that one frame, i.e. minimum FPS corresponds to the longest time spent on a single frame. 99th percentile latency = minimum FPS.
Not even close.
One way to address that question is to rip a page from the world of server benchmarking. In that world, we often measure performance for systems processing lots of transactions. Oftentimes the absolute transaction rate is less important than delivering consistently low transaction latencies. For instance, here is an example where the cheaper Xeons average more requests per second, but the pricey "big iron" Xeons maintain lower response times under load. We can quantify that reality by looking at the 99th percentile response time, which sounds like a lot of big words but is a fundamentally simple concept: for each system, 99% of all requests were processed within X milliseconds. The lower that number is, the quicker the system is overall. (Ruling out that last 1% allows us to filter out any weird outliers.)
http://techreport.com/review/21516/inside-the-second-a-new-look-at-game-benchmarking/3
Look at this really simple example:
Same FPS between these two cards in the same benchmark, but one is clearly superior to the other due to more consistent latencies. In short, it's a chart of the time within which 99% of the frames are rendered (the 99th percentile frame time). If that number is less than 16.7ms, a steady 60+ FPS is possible (given powerful enough H/W). If it's greater, that indicates frames are being lost.
Using my example: you can see the AMD GPU has trouble creating one of the frames. It gets skipped over for one cycle, finishes, then begins work on the next frame. That's why the next two latencies are below the normal average. FPS is identical though; latencies are not.
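To put some toy numbers on that (mine, not the data behind the chart above), here's a quick sketch showing how two cards can post identical average FPS while the 99th percentile frame time separates them:

```c
/* Toy data: two hypothetical cards render 100 frames in the same total time,
 * so average FPS is identical, but card B stalls on every 20th frame and then
 * rushes two cheap "catch-up" frames. The 99th percentile frame time exposes
 * the stutter that the FPS average hides. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* Nearest-rank 99th percentile: the frame time that at least 99% of frames
 * meet or beat. Sorts the array in place. */
static double percentile_99(double *ms, int n)
{
    qsort(ms, n, sizeof(double), cmp_double);
    return ms[(n * 99 + 99) / 100 - 1];
}

static double average_fps(const double *ms, int n)
{
    double total_ms = 0.0;
    for (int i = 0; i < n; ++i) total_ms += ms[i];
    return n / (total_ms / 1000.0);     /* frames divided by seconds */
}

int main(void)
{
    enum { N = 100 };
    double card_a[N], card_b[N];

    for (int i = 0; i < N; ++i) {
        card_a[i] = 16.7;                       /* perfectly steady 60 FPS */
        if (i % 20 == 0)      card_b[i] = 40.0; /* stall                   */
        else if (i % 20 <= 2) card_b[i] = 5.05; /* catch-up frames         */
        else                  card_b[i] = 16.7; /* normal frame            */
    }

    double fps_a = average_fps(card_a, N);
    double fps_b = average_fps(card_b, N);
    printf("Card A: %.1f avg FPS, 99th percentile %.1f ms\n", fps_a, percentile_99(card_a, N));
    printf("Card B: %.1f avg FPS, 99th percentile %.1f ms\n", fps_b, percentile_99(card_b, N));
    return 0;
}
```

Both cards come out around 59.9 average FPS, but card A's 99th percentile is 16.7ms while card B's is 40ms; that one number captures the stutter the FPS average completely hides.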