[citation][nom]Pherule[/nom]We don't know that. Neither of us knows exactly what is going on inside Intel's deepest labs today. Afaik, Ivy Bridge P still uses 32nm. Scale that down to 22nm or 14nm. Swap back the decent cooling material Sandy Bridge had. Put the maximum power draw at 150W, and I'm sure you could easily reach 2+ GHz on 20+ cores. Don't forget that Xeon processors are designed for longevity, not raw speed like an OC'd gaming machine might have. I'm sure that an unlocked Xeon CPU (if such a thing existed) could be substantially overclocked, even on air cooling.

Size does not matter, as long as it can fit on the motherboard. Expense is negated by mass production. How do you figure that? If some cores are inoperable, Intel does what they have always done: shut off the non-working cores and sell the CPU at a lower price, advertised with fewer cores. That's assuming Intel isn't holding back, which I very much think they are.

Once you go multi-process software, it doesn't matter if you have 2 cores or 20 cores. The software should easily scale across all of them. Take an example: Google Chrome uses multi-processing, putting each tab in its own process. Open 20 tabs in Chrome and you can already potentially use all 20 cores.

Hyper-Threading provides the most benefit on a single-core CPU. I've done multiple tests on an i3 with Hyper-Threading enabled and disabled. The largest difference was perhaps 5%. That's a dual core. I'd imagine the more cores you bring to the playfield, the greater the diminishing returns Hyper-Threading would have. It is my belief that Hyper-Threading is near useless. That's why I bought an i5 instead of an i7. If I had bought an i7, I would have disabled the Hyper-Threading.

Again, mass production negates the theory of waste. What workloads? Floating-point operations? Current CPUs are not optimized like GPUs are, but they should be!

Interesting read, but can they compete with Intel? I would imagine they'd have a long way to go before they could reach that level.

But the lower-power CPU would ONLY be capable of handling 100 MFLOPS. Even if the higher-power CPU draws slightly more power at lower levels, the fact remains: when you need that 1 GFLOPS, you can have it, and that's what's important. The same can't be said for the lower-power CPU.[/citation]
It'd be so big that it CAN'T be mass-produced cheaply! There is a limit to how big a die can be before mass production becomes infeasible, and your twenty-core model would be near it. If Ivy Bridge-EP is on 32nm, it's only because it couldn't be done on 22nm with current technology, not by choice. On that same note, mass production wouldn't eliminate waste even if such a die could be built. When you're wasting more than 90% of all produced die area, you're wasting money to an extreme. AMD wastes over 30% of every die with its FX-41xx CPUs, and even that is barely profitable. Shutting off nearly the entire die and selling it as a much smaller part isn't going to be profitable; it'd be selling at an extreme loss. Furthermore, binning wouldn't just suffer from outright faults. A die that large would have inconsistent silicon quality across its area, and the frequency of the whole CPU would be limited by the weakest links in its silicon...
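To put rough numbers on why die size kills yield, here's a toy sketch in Python using the classic Poisson defect model (yield ≈ e^(−D·A)). The defect density and die areas are illustrative assumptions I've picked for the example, not Intel's real numbers:

[code]
import math

def poisson_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
    """Fraction of dies expected to be fully defect-free (Poisson model)."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

D = 0.4  # assumed defects per cm^2 on a mature process (illustrative)

for name, area in [("quad-core, ~1.6 cm^2", 1.6),
                   ("hypothetical 20-core, ~6.0 cm^2", 6.0)]:
    y = poisson_yield(D, area)
    print(f"{name}: ~{y:.0%} of dies fully working")
# quad-core, ~1.6 cm^2: ~53% of dies fully working
# hypothetical 20-core, ~6.0 cm^2: ~9% of dies fully working
[/code]

At the same defect density, roughly half the small dies come out perfect while over 90% of the giant dies carry at least one fault somewhere, which is exactly the waste I'm describing above.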
Google Chrome does not put every single tab in its own process; it groups tabs into shared processes. Yeah, I've checked that. Furthermore, going multi-process has disadvantages of its own: each process carries its own copy of data and its own memory overhead unless master processes manage shared state, and that coordination drags performance scaling down.
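Here's a minimal sketch of that memory trade-off using Python's standard threading and multiprocessing modules; the dataset and worker counts are made up for illustration:

[code]
import multiprocessing as mp
import threading

DATA = list(range(1_000_000))  # sizeable dataset, for illustration

def count_evens(_):
    # Each worker scans the module-level dataset.
    return sum(1 for n in DATA if n % 2 == 0)

if __name__ == "__main__":
    # Threads share a single copy of DATA in one address space.
    out = []
    threads = [threading.Thread(target=lambda: out.append(count_evens(None)))
               for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("threads:", out)

    # Each worker process ends up with its own copy of DATA (re-imported on
    # spawn platforms; even fork's copy-on-write pages get duplicated once
    # Python touches refcounts), so memory use grows with worker count, and
    # results come back through pickling/IPC instead of shared memory.
    with mp.Pool(4) as pool:
        print("processes:", pool.map(count_evens, range(4)))
[/code]

That duplication and IPC cost is the price Chrome pays for crash isolation and security, and it's part of why it batches tabs into shared processes instead of strictly one process per tab.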
Your tests are irrelevant because you don't understand what Hyper-Threading does. It doesn't work perfectly with all workloads. Workloads that can take advantage of it can see up to about 30% average performance improvements, regardless of the CPU's core count. You didn't see much difference because the i3 is already fast enough that Hyper-Threading barely gets a chance to be used, and some of the workload may also have been passed off to the integrated graphics.
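Before testing HT on versus off, it's worth confirming what the OS actually sees. A small sketch using the third-party psutil package (assuming it's installed) to separate logical CPUs from physical cores:

[code]
import psutil  # third-party: pip install psutil

logical = psutil.cpu_count(logical=True)    # hardware threads the OS schedules on
physical = psutil.cpu_count(logical=False)  # actual physical cores
print(f"{physical} physical cores, {logical} logical CPUs")
if logical and physical and logical > physical:
    print("SMT/Hyper-Threading appears to be enabled")
[/code]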
Your belief about Hyper-Threading is irrelevant because you don't understand it. Again, mass production does not get rid of waste. Re-using damaged dies comes with a huge drop in profitability; salvage parts are only a stop-gap against a total loss on the die and do not make much profit at all. That is part of why Intel and AMD usually use multiple die masks for different performance levels. For example, Ivy Bridge on the consumer side alone has four or five different masks. Going into GPUs, AMD has three GCN masks and two VLIW5 masks still in use, in addition to one or two Trinity masks with VLIW4 GPUs. Nvidia has even more.
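As a back-of-the-envelope illustration of why salvage binning is a stop-gap rather than a business model (every price and cost here is hypothetical, not a real Intel or AMD figure):

[code]
# All numbers are hypothetical, purely for illustration.

# Normal salvage: a quad-core die with one bad core sold as a cut-down part.
die_cost      = 120.0   # amortized wafer + packaging cost per die
full_price    = 300.0   # fully working SKU
salvage_price = 130.0   # cut-down SKU built from the same damaged die

print("full margin:   ", full_price - die_cost)     # 180.0: healthy
print("salvage margin:", salvage_price - die_cost)  # 10.0: a stop-gap, not profit

# The giant-die scenario: a huge 20-core die sold as a quad-core.
giant_die_cost = 450.0  # big dies cost far more per unit (area + yield)
quad_price     = 200.0  # but it can only be sold at small-chip prices
print("giant salvage: ", quad_price - giant_die_cost)  # -250.0: an extreme loss
[/code]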
Current CPUs aren't optimized like GPUs are because that'd take huge die area and would limit the frequency to less than ~2GHz due to the nature of GPU optimization.
Tell me, what tests have you done on Hyper-Threading? Getting a mere 5% gain implies that you're doing something wrong, or testing a workload that can't use it. Intel still ships Hyper-Threading because it works, not because it's a gimmick. I know this from my own tests, my experience, and because I rely on it in some of my work.
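If you want to redo the test properly, here's a rough sketch of the kind of throughput measurement that can expose SMT gains. It uses Python's multiprocessing to sidestep the GIL, the worker split is a crude guess that assumes 2-way SMT, and the workload is an arbitrary integer loop, so treat the numbers as illustrative only; the clean A/B test is still a BIOS run with HT disabled:

[code]
import multiprocessing as mp
import time

def spin(n: int) -> int:
    # Integer-heavy busy loop; workloads with stalls (cache misses,
    # dependency chains) tend to gain more from SMT than pure ALU spins.
    total = 0
    for i in range(n):
        total += i * i % 7
    return total

def throughput(workers: int, n: int = 2_000_000, tasks: int = 32) -> float:
    start = time.perf_counter()
    with mp.Pool(workers) as pool:
        pool.map(spin, [n] * tasks)
    return tasks / (time.perf_counter() - start)  # tasks per second

if __name__ == "__main__":
    logical = mp.cpu_count()
    physical = logical // 2  # crude guess: assumes 2-way SMT is enabled
    print("physical workers:", round(throughput(physical), 2))
    print("logical workers: ", round(throughput(logical), 2))
    # If the second number is meaningfully higher, SMT is paying off
    # for this particular workload.
[/code]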
Basic summary: such a large CPU die is not feasible. The only reason GPUs can get so large is that they are designed for low power consumption, but those same optimizations limit their frequency, because they arrange components to save power at the cost of peak performance. You couldn't just throttle 60 cores and Turbo four of them to 4GHz. They wouldn't even work at such a high frequency, and even if they did, the front end wouldn't be able to feed them, so performance would still be bad.
Also, I don't need to be in Intel's labs to say what I've said, because THEY'VE TOLD US. The products with many cores use very basic architectures, such as Xeon Phi. Even if they hadn't, I'm not so unfamiliar with the topic that I can't recognize the limiting factors. To fit so many cores, they use very simple ones, similar to a GPU's but not quite as simple. This isn't an everything-is-possible world. There are limits to what can be done with current technology and the technology of the next several years. What you ask for is not feasible, and it wouldn't even have much software that supports it well until five to twenty years from now anyway, if even then.
Intel isn't milking us anyway. They're simply trying not to surpass AMD by so much that they get hit with anti-trust lawsuits from around the world for being a monopoly. That's a big part of why they've been focusing on power consumption instead of performance lately, granted it's not the only reason.