-Fran- :
Well, most console ports don't handle parallelism dynamically at all.
Because that entire mechanism has to be re-coded from scratch: instead of hard-coded thread management, you are moving to OS-managed thread management. Core loading becomes less of a concern [more powerful HW], but you have to worry about thread-locking dynamics for the first time [they don't exist with hard-coded thread management].
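To make the "thread-locking dynamics" point concrete, here's a minimal sketch (in Python, purely for illustration; the same issue exists in any language) of the classic problem OS-managed threading introduces: two threads sharing mutable state. The `worker` function and counts are made up for the example.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        # Without this lock, the read-modify-write of `counter` can
        # interleave between the two threads and increments get lost.
        # With hard-coded, single-threaded update loops this hazard
        # simply never comes up.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock, counter is deterministically 200000.
```

Take the lock out and the total can silently come up short, which is exactly the new class of bug a port to OS-managed threading has to start defending against.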
I've read that some Xbox games don't even use all 3 cores for hard threads, since PCs don't come with an uneven number of cores, which supposedly makes those games harder to port.
Nonsense.
First off, PCs don't CARE how many "hard" threads you run. The OS manages which cores you use, not the user. [Trust me. I know people who tried managing core usage directly when the first dual-core chips came out. It worked, until some other application did the same thing and both parked their heavy-duty workloads on the 2nd core. Programmers make no assumptions about core usage: we spawn a thread and leave it to the OS to manage and schedule.]
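That "spawn a thread and leave it to the OS" discipline can be sketched like this (Python for brevity; `heavy_work` and the chunking are invented stand-ins for a real workload):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def heavy_work(chunk):
    # Stand-in for a real workload; the OS decides which core runs it.
    return sum(x * x for x in chunk)

data = list(range(1_000))
# Split the work into 4 strided chunks.
chunks = [data[i::4] for i in range(4)]

# Spawn worker threads and hand scheduling entirely to the OS:
# there is no core affinity or "use core 2" logic anywhere here.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(heavy_work, chunks))

total = sum(results)
```

The code only says *how many* workers to spawn; it never says *where* they run, so it can't collide with another application's affinity choices the way the hand-pinned dual-core code did.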
Secondly, the real issue, again, is that you typically won't have more than one or two really heavy-duty threads able to run at any single point in time, due to various bottlenecks. Never mind that you don't want the application thrashing at 100% workload for extended periods...
You're describing good programming practice, but there's a lot of code around that's fixed to a certain number of cores even in a PC environment. I'd say only a tiny number of programs are really threaded dynamically enough to use 100% of every CPU (as in cores, not arch).
Cheers!
Because most problems that scale beyond a few arbitrary cores are already being offloaded onto the GPU [CUDA, OpenCL, OpenMP, etc.]. As for the rest, it's not so much that the number of cores used is fixed [which would be silly; that's what the OS scheduler is for], it's more an issue of scalability.
Example: I have 100 AI objects that need to be processed. There are two approaches:
1: Create a single thread to manage everything, and loop through the AIs one at a time.
2: Create a single thread per AI object, and invoke all their update routines at the same time.
The 1st is the approach everyone takes, because of the overhead of having 100 threads running and needing to be scheduled, the need to ensure all of them actually finish within an arbitrary time period, and the interaction between the AIs [which would cause significant deadlock time in the 2nd approach if the AI objects need to cross-reference each other].
So 100 AIs are going to be processed in a single thread, pushing one core towards 100%. The 2nd approach is simply too hard to manage properly, even if it is more scalable.
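Both approaches can be sketched side by side (Python for illustration; the `AI` class and its `update` method are invented placeholders for real per-object game logic):

```python
import threading

class AI:
    def __init__(self):
        self.ticks = 0

    def update(self):
        self.ticks += 1  # stand-in for real per-AI work

ais = [AI() for _ in range(100)]

# Approach 1: one thread loops over every AI in turn. This is what most
# engines do; the whole batch lands on one core, pushing it toward 100%.
def update_all(ais):
    for ai in ais:
        ai.update()

update_all(ais)

# Approach 2: one thread per AI. More scalable on paper, but 100 threads
# must be spawned, scheduled, and joined every frame, and any
# cross-referencing between AIs would now need locking to avoid the
# deadlock/contention problems described above.
def update_all_threaded(ais):
    threads = [threading.Thread(target=ai.update) for ai in ais]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

update_all_threaded(ais)
```

Note that approach 2's per-frame thread creation cost alone is usually enough to rule it out; real engines that do spread AI across cores tend to use a persistent job/task pool rather than a thread per object.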