truegenius :
I would say
clock all of them at 1GHz on all cores/threads (to create a CPU bottleneck and force maximum CPU usage)
and then check the CPU time figure in Task Manager to get the total CPU usage
for example
if an 8-core CPU shows 5.1 minutes of CPU time out of a possible 8 minutes (1 minute x 8 cores) after a 1-minute bench
then that shows scaling to roughly 5-6 cores (5.1 core-minutes of work per 1 minute of wall time)
isn't that a good idea?
and I think the 8150 and 8350 will show identical readings in this test (provided the CPU is the bottleneck)
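(As a quick sketch, with the hypothetical numbers from the example above, that proposal boils down to this arithmetic:)

```cpp
// Sketch of the scaling estimate described above: CPU time reported by
// Task Manager divided by (wall time x core count). Numbers are the
// hypothetical ones from the example, not real measurements.
#include <iostream>

int main()
{
    const double cpuTimeMinutes  = 5.1;  // Task Manager "CPU time" after the run
    const double wallTimeMinutes = 1.0;  // length of the benchmark
    const int    coreCount       = 8;

    // Fraction of the machine kept busy, and the equivalent number of
    // fully loaded cores.
    double utilisation     = cpuTimeMinutes / (wallTimeMinutes * coreCount);
    double equivalentCores = cpuTimeMinutes / wallTimeMinutes;

    std::cout << "Utilisation: " << utilisation * 100 << "%\n"      // ~64%
              << "Equivalent cores: " << equivalentCores << "\n";   // ~5.1
    return 0;
}
```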
Wouldn't work, because when you have only ONE application doing a lot of work, and that application is the foreground application to boot, it's going to be running on at least one core more or less 100% of the time. Even if you ran it for 24 hours, you'd see TM reporting close to 23H59M worth of CPU time.
The only way to see for sure what is going on at the thread level is a low-level utility like GPUView, which lets you see both the CPU Queue and the GPU Queue. That will allow you to determine how many threads are doing meaningful work (above 5% CPU load or so), and how many are actually running in parallel (and for how long that remains true). [I plan to install the SDK tonight/tomorrow, and will probably take a cursory look at a few games I have installed. I'll make a separate thread when I do.]
kettu :
I think DX11 is a fair assumption. Hmm, rendering threads, good point. Maybe, I don't know. If it's some kind of trickery to promote multi-core CPUs by running code that would be better suited to GPUs... that would be lame. Or perhaps their engine is taxing the GPUs so heavily that offloading some of that rendering work to the CPU increases overall performance. Or could it be some form of ray casting code that runs better on a modern multicore CPU than on a GPU? Whatever they did, at least they're putting the cores to work.
The "big" feature of DX11 is multithreaded rendering, which allows multiple threads to render to a single GPU. In theory, this allows you to get away from having one monster render thread, which would have to execute sequentially, which as you can imagine, kills the ability of the CPU to scale to any reasonable degree. Hence why more or less any pre-DX11 game (or any game that still has a DX9 path) doesn't scale well.
The issue is, the few times I had a chance to play with multithreaded rendering, there wasn't much you could make parallel due to how rendering engines were designed: the rendering code went through stages where each stage took the previous stage's output as its input. It's possible Crytek managed to build a rendering engine that gets around this, which would allow significantly better scaling, but as of the last time I worked on a game engine, this wasn't the case. [If this IS what's happening, I'd be REALLY interested to see how they structured the engine to pull that off.]
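To illustrate the stage-dependency problem: even if every stage below were launched on its own thread, nothing can start until its input exists, so the stages still run one after another. The stage names here are made up, not from any particular engine:

```cpp
// Sketch of a typical render-frame dependency chain. Each stage consumes
// the previous stage's output, so throwing threads at it doesn't help:
// std::async can't start a stage until the .get() on its input returns.
#include <future>

struct VisibleSet {};   // output of culling
struct SortedSet  {};   // output of state sorting
struct DrawList   {};   // output of command generation

VisibleSet cull()                   { return {}; }
SortedSet  sortByState(VisibleSet)  { return {}; }
DrawList   buildCommands(SortedSet) { return {}; }

int main()
{
    auto visible = std::async(std::launch::async, cull).get();
    auto sorted  = std::async(std::launch::async, sortByState, visible).get();
    auto draws   = std::async(std::launch::async, buildCommands, sorted).get();
    (void)draws;  // would be handed to the submit stage next
    return 0;
}
```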
Then of course, there's possibility three: really poor coding in a number of threads (e.g. using a polling method rather than messages, which kills the CPU). I doubt that, but I really can't discount it either. [And I note, this would be detectable in GPUView.]
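For completeness, here's the shape of that anti-pattern versus the message-driven alternative (a rough sketch, all names made up): the polling worker pegs a core even when it has nothing to do, while the blocking worker costs essentially no CPU until work actually arrives.

```cpp
// Polling vs. message-driven worker threads.
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

std::mutex              gLock;
std::condition_variable gSignal;
std::queue<int>         gWork;
std::atomic<bool>       gQuit{false};

// Polling style: shows up as a core pinned at 100% in Task Manager /
// GPUView even when the queue is empty. (Not started below; it's here
// only to show the shape of the problem.)
void pollingWorker()
{
    while (!gQuit)
    {
        std::lock_guard<std::mutex> hold(gLock);
        if (!gWork.empty()) gWork.pop();  // otherwise spin and retry
    }
}

// Message style: the thread sleeps inside wait() until it is notified,
// so idle time costs nothing.
void messageWorker()
{
    std::unique_lock<std::mutex> hold(gLock);
    while (!gQuit)
    {
        gSignal.wait(hold, [] { return gQuit || !gWork.empty(); });
        while (!gWork.empty()) gWork.pop();
    }
}

int main()
{
    std::thread worker(messageWorker);

    {   // hand the worker one piece of work
        std::lock_guard<std::mutex> hold(gLock);
        gWork.push(42);
    }
    gSignal.notify_one();

    {   // tell it to shut down
        std::lock_guard<std::mutex> hold(gLock);
        gQuit = true;
    }
    gSignal.notify_one();
    worker.join();
    return 0;
}
```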