On optimizations...
As a practical matter, no console-oriented optimizations will be seen on PC. This has absolutely nothing to do with the hardware and everything to do with the nature of the OSes involved. Consoles use what's known as a Real-Time Operating System (RTOS) while PCs don't. An RTOS is special because the system knows how long each task will take prior to execution and keeps extremely tight timing. Task switching happens on a regular, predictable schedule, and every running program knows exactly how much time it has on any given processor. Because the developer can know this, they can write extremely tight code that wastes very little CPU time and has very few systemic bottlenecks or stalls. On a non-RTOS system, programmers get zero guarantees about how much CPU time their program will have, nor can the program predict how long it has before the OS's task scheduler preempts it. On an RTOS, things like AV and background services are timed such that foreground programs always know the frequency of task swapping and how much time they have on the CPU. The downside is that an RTOS isn't as flexible when user demands suddenly change (user-originated task switching). That predictability makes them ideal for application-driven devices: sensors, control systems, media players, and other non-multitasking devices. It's also why modern consoles have "dedicated" CPU resources for the OS and background services; it ensures application programmers have a known amount of resources at their disposal.
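To make that concrete, here's a minimal sketch of the kind of fixed-budget frame loop you can write when your time slice is guaranteed. Everything in it is illustrative: the 16.6 ms (60 Hz) budget, the stub task names, and the loop shape are assumptions for the example, not any console's actual API:

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Hypothetical per-frame tasks; on a real console these would be engine
// subsystems with measured, known worst-case execution times.
void update_simulation()      { /* fixed-cost game logic stub */ }
void build_render_commands()  { /* fixed-cost render prep stub */ }

int main() {
    using clock = std::chrono::steady_clock;
    // Assumed budget: one 60 Hz frame. Under an RTOS-style schedule the
    // developer can count on getting this slice every frame, so the work
    // below is sized to always finish inside it.
    constexpr auto frame_budget = std::chrono::microseconds(16667);

    for (int frame = 0; frame < 3; ++frame) {  // a few frames for demo
        auto frame_start = clock::now();

        update_simulation();       // known worst-case cost
        build_render_commands();   // known worst-case cost

        // On an RTOS the remainder of the slice is guaranteed idle time.
        // On a desktop OS, sleep_until is only a request; the scheduler
        // can preempt you mid-frame, which is exactly the guarantee the
        // paragraph above says you lose.
        std::this_thread::sleep_until(frame_start + frame_budget);
        std::printf("frame %d done\n", frame);
    }
}
```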
So no matter what hardware is used, the fact that the OSes are radically different from what you would use on a PC is going to prevent the majority of the performance optimizations from carrying over. The only ones that will cross over are the multi-processing ones, so we will definitely see an increase in the number of CPU cores utilized.
As for the perceived "weakness" of the console HW, that is laughable. Current consoles have a ridiculous amount of processing resources at their disposal, much more than the previous generation. Multi-processing isn't anything new, and it's particularly well suited to console RTOS environments. Most of the commonly cited obstacles to multicore processing in games simply don't apply to consoles, because you always, without question, know exactly what the system will be doing at any given time; it's 100% predictable. Consoles have always used cheap components; price limits are what constrain them so heavily. It's entirely feasible to build a console around a $300 USD GPU and get stunning graphics, but it would be an expensive console that nobody would buy. So instead they use low-priced components, the stuff we would expect inside a $400 USD or less build; hell, Sony even sprang for GDDR5, which is very expensive compared to DDR3. Then there is the power budget to deal with: consoles don't have anywhere near the power input or the thermal dissipation to handle higher-end components. You guys talk about i5-level power, but an i5 CPU is way past a console's capabilities, same with GPU chips.
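Because the core count and the OS reservation are fixed, a console engine can statically pin each subsystem to its own core and never worry about contention. Here's a minimal sketch of that idea. I'm using Linux's pthread_setaffinity_np as a stand-in since console SDK calls aren't public, and the 6-game/2-OS core split is an assumed example, not a documented layout:

```cpp
#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include <thread>
#include <vector>

// Illustrative worker; a real engine would run a fixed subsystem here
// (audio mixing, physics, render command generation, etc.).
void worker(int core) {
    std::printf("subsystem running on its dedicated core %d\n", core);
}

int main() {
    // Assumption for illustration: an 8-core part where cores 6-7 are
    // reserved for the OS and background services, leaving 0-5 for the
    // game, as described above. The exact split is hypothetical.
    const std::vector<int> game_cores = {0, 1, 2, 3, 4, 5};

    std::vector<std::thread> threads;
    for (int core : game_cores) {
        threads.emplace_back(worker, core);
        // Linux/glibc stand-in for a console SDK's affinity call: pin
        // the thread so its workload never migrates between cores.
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        pthread_setaffinity_np(threads.back().native_handle(),
                               sizeof(set), &set);
    }
    for (auto& t : threads) t.join();
}
```

With a layout like this, each core's workload is known at ship time, which is the "100% predictable" property that desktop multicore code can never assume.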
Simply put, you're not going to see 1080p gaming on a console without dramatically reducing the number of objects and textures rendered on screen. The resolution jump from 720p to 1080p alone means pushing 2.25 times as many pixels. Further increasing the objects, textures, special effects and scene complexity drives that figure even higher. So it's not that the console HW isn't keeping up with comparable non-console HW, it's that expectations are increasing faster than the industry can meet them. All because a specific combination of three letters was used as the vendor name.
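For anyone who wants the back-of-the-envelope math behind that 2.25x figure:

1280 x 720 = 921,600 pixels
1920 x 1080 = 2,073,600 pixels
2,073,600 / 921,600 = 2.25

That's 125% more pixels per frame before you add a single extra object or effect.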
And gamer, no way in hell can the 8-core Jaguar be harder to program for than the abomination that was IBM's Cell, and yet the PS3 was a success.