[citation][nom]pcfan86[/nom]You're absolutely right. There are insurmountable limits that will:
1. Prevent a 5B transistor mainstream chip from ever being developed. Some law of physics or something. No, wait, it's legal limitations. Or Intel just not wanting to. But somehow, it just won't happen.
2. Prevent a high-end GPU + CPU combination because of memory bandwidth limits. This will never be overcome. Ever. No memory technology will ever allow it, not even DDR4 or Micron's Hybrid Memory Cube.
3. Prevent a CPU+GPU from going above 150W. Not green, even though today's CPU and graphics cards together far exceed 200W. Graphics cards with a GPU TDP of 250W must be able to dissipate 250W without the junction temperature going above critical, but that doesn't matter either, because CPU coolers will never be able to handle more than 150W. Ever.
Now I understand. Apply today's limitations to tomorrow's technology and all will be good.[/citation]
Again, I don't think you understand what I'm telling you. A 5B transistor CPU+GPU chip CAN be built, but it would NOT be practical or reasonable. Even at 14nm, it would be a large die and would generate a lot of heat that is difficult to cool. Just look at modern CPU air coolers; some of them are already huge. We might need even larger ones, or some more revolutionary change, to make this sort of chip practical to cool. GPU coolers have more cooling area to work with and are also generally louder than CPU coolers handling the same amount of heat. It's a different form factor, and that means different amounts of area and a different design.
DDR4 would not be practical either. Look at it this way: it would take at least six, maybe eight, very high frequency DDR4 modules to provide enough bandwidth, as the rough numbers below show. Even then, the latency would be somewhat high, and DDR4, although not as bad as DDR3, is still a system memory designed for CPUs. It is not optimized for graphics workloads, and performance would suffer; I'm not sure by exactly how much, but the hit would be noticeable.
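To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The figures are my own assumptions for illustration, not official specs: "very high frequency" DDR4 at ~4266 MT/s with each module on its own 64-bit channel, compared against a Tahiti-class card like the HD 7970 (384-bit GDDR5 at 5.5 GT/s).
[code]
# Quick bandwidth check (illustrative figures, not official specs):
# - one 64-bit DDR4 channel moves 8 bytes per transfer
# - "very high frequency" DDR4 assumed at ~4266 MT/s
# - a Tahiti-class card (e.g. HD 7970): 384-bit GDDR5 at 5.5 GT/s

def peak_bandwidth_gbs(mega_transfers_per_s, bus_width_bits):
    """Peak bandwidth in GB/s for one memory interface."""
    return mega_transfers_per_s * 1e6 * (bus_width_bits / 8) / 1e9

ddr4_module = peak_bandwidth_gbs(4266, 64)    # ~34.1 GB/s per module
gddr5_card  = peak_bandwidth_gbs(5500, 384)   # ~264 GB/s for the card

print(f"One DDR4 module:  {ddr4_module:.1f} GB/s")
print(f"GDDR5 card total: {gddr5_card:.1f} GB/s")
print(f"Modules to match: {gddr5_card / ddr4_module:.1f}")  # ~7.7 -> about eight
[/code]
Drop to a more ordinary DDR4-3200 (25.6 GB/s per channel) and you'd need more than ten modules, which is why six to eight is the optimistic floor.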
DDR4 does not compare to GDDR5 in performance as a graphics memory. It simply doesn't. It is a huge improvement over DDR3, but it's not nearly as good as GDDR5.
A custom water loop might be able to keep such a machine cool, but even that would not be simple. The memory would probably need cooling too, and there's still the problem of placing the GDDR5 chips. Some would need to be on the underside of the board to fit, but there's little room to cool them there, so that might be out of the question.
The CPU/GPU chip itself would be incredibly complex. What you're asking for is simply not practical.
Furthermore, you continue to mistake TDP for power consumption. They are not the same. A graphics card will rarely hit its TDP, especially AMD's cards and Nvidia's Kepler-based cards. A cooler would hardly ever have to dissipate the full TDP of a card, and if it did for an extended period, it might even overheat, depending on the card. Also, cards such as the 7970 and other large cards have a lot of space for their heatsinks and fans to fit in; they can be over 10 or 11 inches long and several inches wide. They are also often designed to tolerate more noise per watt, and to use concepts such as vapor chambers and larger IHSs for better efficiency. Cooling a graphics card is not the same as cooling a CPU.
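Here's a rough illustration of why sustained high dissipation is a real cooler-design constraint. The junction-to-ambient thermal resistance (theta_ja) values are numbers I picked for the sketch, not measured figures for any specific cooler.
[code]
# Steady-state junction temperature: T_j = T_ambient + P * theta_ja,
# where theta_ja is the junction-to-ambient thermal resistance in degC/W.
# The theta_ja values below are assumed, for illustration only.

def junction_temp_c(ambient_c, power_w, theta_ja):
    """Estimated steady-state junction temperature in degC."""
    return ambient_c + power_w * theta_ja

# 250 W of sustained heat inside a 35 degC case:
print(junction_temp_c(35.0, 250.0, 0.30))  # 110.0 degC -- past most limits
print(junction_temp_c(35.0, 250.0, 0.20))  # 85.0 degC  -- workable
print(junction_temp_c(35.0, 250.0, 0.15))  # 72.5 degC  -- comfortable
[/code]
Halving theta_ja roughly doubles the heat you can move at a given junction temperature, and that's exactly what the long fin stacks, vapor chambers, and louder fans on big graphics cards buy you; a CPU socket doesn't give a cooler the same room to do it.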
Furthermore, yes, Intel reduces the power consumption of its consumer CPUs almost every generation, and for good reasons. Graphics card power consumption at a given performance bracket has also mostly been falling since it peaked a few years back, although not perfectly consistently. Electronics companies are trying to reduce the amount of power that we consume. Is that really a bad thing? Heck, even SSDs, hard drives, motherboards, and memory are coming down in power consumption these days. It isn't just processors.
I'm not applying some outdated point of view here; I'm telling it to you straight. Maybe GDDR5 will be replaced by a DDR4 derivative and a system like this would be able to function with fewer chips, but the point would still stand: it simply isn't practical, nor is it how Intel does business. Intel isn't a big graphics player. They have some very low-end IGP offerings, and if you go back far enough, there might be a discrete card or two, but Intel has since given up on the high-end graphics market. Maybe they will enter it some day, but they would need to fix many things before then. I've probably said this before, but I'll say it anyway: Intel's drivers suck. They really suck. They're worse than AMD/ATi's were several years ago. Intel is only just beginning to get its drivers written properly, and even worse, it manages them poorly in other ways by letting OEMs ship customized, crippled versions that never get updated.
Sorry, but Intel isn't going to put a CPU and a high-end GPU together on a single chip. It simply wouldn't make sense.