it's down to how the processors are optimised for running typical graphics applications. hard to say without actually looking at some of the computationally expensive algorithms used in computer graphics, and at how they'd run on either parallel architecture, so i'll refrain from resorting to conjecture.
but at any rate, splitting your unified shaders across multiple cores will lower performance rather than increase it -- though whether the difference is immeasurably small or substantial enough to have an impact remains to be seen. if they go for a multicore architecture, a number of issues can arise: the cores now have to communicate over the bus rather than directly with each other, and all your typical multiprocessor architectural considerations come into play. that brings up the usual concurrency issues, including synchronisation, mutual exclusion, timing, prioritising use of the bus, the capability to disable the bus, and the management of shared memory. multiple cores may also require shared memory + common memory or some such related scheme.
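to make the mutual-exclusion point concrete, here's a minimal sketch (plain python threads standing in for cores -- names and numbers are illustrative, not from any real GPU): two workers updating a shared counter need a lock, otherwise updates can be lost.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    # each "core" repeatedly updates the shared value
    global counter
    for _ in range(iterations):
        with lock:  # mutual exclusion: only one thread touches counter at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- without the lock, increments could be lost
```

the same issue shows up on real multicore hardware as cache-coherency and bus-arbitration overhead; the lock here is just the simplest stand-in for those mechanisms.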
but using additional cores will often be the cheaper, easier choice.
NVIDIA's strategy for building GPUs seems to favour a single powerful monolithic design, while ATI seems to want to compete by using multiple weaker chips instead. the two approaches could be considered roughly equivalent as far as performance goes, but i wouldn't expect a big advantage from a multicore design, nor get overly excited by it.
the reason multicore CPUs are an interesting proposition is that, by the very nature of their design, traditional single-core processors don't allow for parallelism, so multicore architectures are a very welcome evolution there. GPUs, on the other hand, are designed for parallelism from the ground up and use "multiple processing elements" in the hundreds. i just don't see any advantage to adding more cores other than as a simpler way of scaling up performance without investing too heavily in R&D.
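the kind of parallelism GPUs are built around is data parallelism: the same small function applied independently to every element, so the work splits trivially across hundreds of processing elements. a toy CPU-side sketch (thread pool standing in for processing elements, the shader function invented for illustration -- this is not real GPU code):

```python
from multiprocessing.dummy import Pool  # thread pool, stands in for processing elements

def shade(pixel):
    # stand-in for a per-pixel shader: brighten and clamp to 8 bits
    return min(pixel + 64, 255)

pixels = list(range(0, 256, 16))  # toy 16-pixel "framebuffer"

with Pool(8) as pool:  # 8 "processing elements" working independently
    result = pool.map(shade, pixels)

print(result)
```

because `shade` touches only its own pixel, there is no shared state to synchronise at all -- which is exactly why the concurrency headaches above matter much less inside a single GPU than they do between general-purpose cores.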
Edit:
@soundefx, as for games, i'd expect the scaling across additional GPUs to depend more on how the hardware and driver interact with DirectX 11 or OpenGL than on the games themselves, unless a specific game is designed to use a "GPU-accelerated software renderer" (a fresh concept that has recently come up). as long as the driver is optimised to efficiently map the DX11 API onto the hardware, we probably won't have any problems with games... at least on the majority of hardware. that's what i'd expect, anyway.