MU_Engineer, replying to hafijur:
1. The 3770K and 4770K are both really low-end CPUs, hence you get top-end laptop CPUs with the same performance at peak. They could release 6-core CPUs in the mainstream now if they wanted to, but they are maximising profit margins.
The desktop i7s are not low-end SKUs. They are actually some of the higher-end units, as the SKU list goes Celeron -> Pentium -> i3 -> i5 -> LGA1150 i7 -> LGA2011 i7. Also, like I stated before, Intel is pretty well tapped out as to how fast they can crank the Haswell/Ivy Bridge arch on their current 22 nm process because they optimized for low power over all else. This is why laptop and desktop chips have similar speeds and power dissipations: LGA1155/1150 Haswell/IB is a laptop CPU at heart and doesn't have much clock headroom that can be exploited on the desktop. They couldn't make it much faster even if they wanted to without significant changes to the manufacturing process and/or chip macro-architecture.
Also, Intel may not be able to very easily spit out a 6-core CPU that meets the <$200 and 3+ GHz base clock requirements of the mainstream segment. If they were going to do so now, it would probably be a 32 nm Sandy Bridge variant rather than a 22 nm Haswell, as I am guessing their 22 nm process isn't up to the task yet. Intel has been pretty slow in ramping up production on the last two nodes. They have introduced relatively mediocre processors with small die sizes and low clock speeds at first: 32 nm came with the 2C Clarkdale and 22 nm came with the 2C/4C Ivy Bridge. Notice that they didn't intro either node on very high-end, high-clocked, many-cored chips. This is because making small, low-clocked chips lets you get some yield despite a very immature process. Only when a process is much more mature does Intel open it up to much larger chips with higher clock speeds like SB-E.

I don't think it's a matter of AMD not being competitive, as AMD is able to equal or beat the quad-core i7s in heavily multithreaded tasks with FXes costing 2/3 as much. You'd think Intel would let a six-core i7 sneak down into $250-300-ish territory to stave off the 4-module FXes if they could do so easily, as that would shut down AMD the way they did with the newly-inexpensive Q6600s when the Agena Phenom X4s debuted. But they don't. I guess they probably can't, or else they probably would.
2. 32 nm Sandy Bridge is 70-80% better performance per watt than Piledriver in terms of kilojoules to complete the same task.
Way to mix units there, buddy (golf clap). Sounds like you are repeating some marketing points somebody threw out there. Joules are a quantity of energy while watts are a rate of energy usage, a.k.a. power. I suppose you are trying to say that on some benchmark Sandy Bridge (which we weren't even really talking about; we were discussing Haswell) finishes earlier and uses less energy to complete some unspecified task than Piledriver. Without the particular setups and task used, it's impossible to arrive at a meaningful figure for energy efficiency. Cite your source and we'll discuss.
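To make the joules-versus-watts distinction concrete, here's a minimal sketch: energy to finish a task is average power multiplied by run time, so a higher-wattage chip can still win on energy if it finishes fast enough. The wattages and run times below are made-up placeholders, not measurements from any real Sandy Bridge or Piledriver system:

```python
# Back-of-the-envelope sketch of why joules and watts are not interchangeable.
# All numbers below are made up for illustration, not measured benchmark data.

def task_energy_kj(avg_power_w: float, time_s: float) -> float:
    """Energy consumed to finish a task: joules = watts x seconds."""
    return avg_power_w * time_s / 1000.0

# Two hypothetical CPUs completing the same fixed workload:
cpu_a = task_energy_kj(avg_power_w=77.0, time_s=100.0)   # faster, lower power
cpu_b = task_energy_kj(avg_power_w=125.0, time_s=140.0)  # slower, higher power

print(f"CPU A: {cpu_a:.1f} kJ, CPU B: {cpu_b:.1f} kJ")
print(f"CPU B uses {cpu_b / cpu_a - 1:.0%} more energy for the same task")
```

That's why you need the actual setups and the actual task: without both the power draw and the completion time, no "performance per watt" or "kilojoules per task" figure means anything.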
3. Intel has already shown with the 8-core Haswell announcement that they can crank the power consumption up if they want. Even that 8-core Haswell CPU will probably use less power than the FX-4300. Intel are in a position where they could easily release a 16-core CPU at the same power consumption as the FX-9000 series if they wanted to. The new Ivy Bridge-E and Haswell-E have a better heat spreader, I believe, which will solve all heat problems.
Intel might have announced an 8-core Haswell-E* unit, but they sure haven't shipped it. We don't have a clue as to how it clocks, how it performs, how hot it runs, or whether it is any better or worse than Sandy Bridge-E*. Shoot, we don't even know what socket it will use (rumor is a new 2011-land socket called "Socket R3"). We only have the current 2C/4C Socket G2/BGA laptop and LGA1150/BGA Haswells to compare against.
I would highly doubt that Intel could even release a 16-core CPU presently. It would be too large to get more than a handful of viable chips out of a wafer. The current 32 nm process is what Intel uses for its >4-core CPUs, presumably because it's more mature and can yield larger chips better than 22 nm. An 8-core SB-E is around 400 mm^2 in size; a 16-core unit would be twice that, a >800 mm^2, utterly unyieldable chip. A 22 nm 4-core Haswell is 177 mm^2, so a 16-core variant would likely land in the 600 mm^2 range even if you stripped out the IGP. That would be very, very tough to yield, especially on an immature process. Rumor has the $4000+ top-line Haswell-EXes at 12 cores ($4000+ Westmere-EXes on 32 nm top out at 10 cores). That sounds more reasonable. Intel would need to steal a page from AMD and incorporate on-die QPI links to tie two separate 8-core dies together in an MCM package to make a 16-core chip, as AMD does with the 16-core Opteron 6300s. That would still be really ugly, as that chip would have 8 memory channels to route from the socket. Hello 3000+ land sockets and EATX single-socket boards!
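To put rough numbers on the yield argument, here's a sketch using the textbook Poisson yield model, yield = e^(-D0*A). The defect density D0 is a made-up figure for an immature process (Intel doesn't publish theirs), and the dies-per-wafer count ignores edge loss, so treat the output as illustrative only:

```python
import math

# Rough die-size and yield arithmetic with the classic Poisson yield model.
# D0 (defects per mm^2) is a hypothetical guess at an immature process;
# die areas are the figures quoted above.

D0 = 0.004                                # defects per mm^2 (assumed)
WAFER_AREA = math.pi * (300 / 2) ** 2     # 300 mm wafer, edge loss ignored

for name, area_mm2 in [("4C Haswell", 177),
                       ("16C Haswell, IGP stripped (est.)", 600),
                       ("16C SB-E style (est.)", 800)]:
    yld = math.exp(-D0 * area_mm2)        # fraction of dies with zero defects
    candidates = WAFER_AREA / area_mm2    # crude upper bound on dies per wafer
    print(f"{name:34s} {area_mm2:4d} mm^2  "
          f"yield {yld:5.1%}  ~{candidates * yld:5.1f} good dies/wafer")
```

Even with these charitable assumptions, the 800 mm^2 monster yields a few good dies per wafer versus a couple hundred for the small quad-core, which is exactly why new nodes debut on small, low-clocked chips.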
4. There was no P4 at 4 GHz at stock, and I know how great those Pentium M machines were: a 1.3 GHz Pentium M destroyed a 1.6 GHz P4 and a 2.66 GHz P4. I had a 1.5 GHz one as well. The thing was, these took 1/6th the power, could get 10-hour battery life on an IBM R40, and made for such a light platform with Centrino. With extra cache over the P4 and shorter pipelines, they embarrassed P4s, especially at gaming. A top-end 2 GHz or 2.26 GHz Pentium M will destroy a 3.8 GHz P4 at gaming any day of the week.
Pentium M CPUs were years ahead of their time; nothing on the market was even close to as good for a good few years.
Pentium 4s could get well over 4 GHz, particularly in later iterations. The fastest stock P4 was 3.8 GHz, for crying out loud, and that was a 90 nm Prescott, not even a later 65 nm Cedar Mill, one of which held the world overclocking record for several years until unseated by an FX-8150. A 1.3 GHz Banias was not faster than a 2.66 GHz P4B in very many tasks, either, nor did any computer from 2003-2005 have a 10-hour battery life unless you had multiple batteries adding up to at least 200 Wh in total capacity. The rest of the system (chipset, hard drive, display, and such) was too inefficient. You are only starting to see that today with CPUs which are much more efficient than any P-M, running on much more efficient chipsets, with LED-backlit displays and SSDs instead of HDDs. And even that is essentially an "active idle" measurement with an 80+ Wh battery, not the one with half that capacity that you'll see in an "ultrabook."
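The 200 Wh figure falls straight out of simple battery arithmetic: runtime in hours is capacity in watt-hours divided by average system draw in watts. The draw figures below are rough era-appropriate assumptions, not measurements:

```python
# Battery-life arithmetic behind the "at least 200 Wh" point above.
# Draw figures are rough assumptions for each era, not measured values.

def runtime_h(capacity_wh: float, avg_draw_w: float) -> float:
    """Hours of runtime = battery capacity (Wh) / average system draw (W)."""
    return capacity_wh / avg_draw_w

# A 2003-2005 Pentium M laptop idling around 20 W whole-system, screen on:
print(f"Single ~60 Wh battery:  {runtime_h(60, 20):.1f} h")
print(f"~200 Wh of batteries:   {runtime_h(200, 20):.1f} h")

# A modern system sipping ~5 W at active idle:
print(f"Modern 40 Wh battery:   {runtime_h(40, 5):.1f} h")
```

The CPU was never the whole story; at a ~20 W whole-system draw, no single period-correct battery gets anywhere near 10 hours no matter how frugal the Pentium M itself was.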
Pentium Ms were not especially ahead of their time, and there were certainly chips shortly afterward that met or exceeded their performance. The first Pentium M, the 130 nm Banias, was simply a PIII-M Tualatin with a few extra tweaks for power consumption, twice the L2 cache, and the original P4's FSB and SSE2 capabilities. Performance per clock wasn't astoundingly better than Tualatin, but it did clock a few hundred MHz higher. The second Pentium M, the 90 nm Dothan, was followed by the Core Solo/Core Duo "Yonah," essentially a die shrink of Dothan to 65 nm with SSE3 and a second core, and it performed as well as or better than Dothan per core and per clock. So the Pentium M wasn't anything particularly unique if you really look at it. The competing Mobile A64 and later Turion 64 were every bit as fast per clock as the Pentium Ms, if not faster. The big thing Intel had with the Pentium M was a real idea of platform marketing in the Centrino brand, plus a moderate improvement in power consumption over the Mobile A64/Turion 64 + NVIDIA chipset platform. The P-Ms certainly weren't faster than AMD's K8s, and it took the second follow-on to the P-M (the Core 2) to actually beat the K8.
5. Intel is definitely the leader in low-power computing; in terms of performance per watt nothing even comes close. And finally, the reason power matters is that I constantly see people recommend an FX-8300 or FX-6300, etc., when these draw around 90-100 W more at peak load than a competitive Intel CPU. I think if you are an above-average user, an FX-8350 will cost around £30 a year more to run than an i7-3770K.
Like I originally said, power consumption doesn't really matter, as the difference in load power consumption amounts to a very small cost. The only time power consumption really matters is when you are truly TDP-constrained, such as in a very small form factor setup where you absolutely cannot exceed a certain TDP point lest you overheat the system.

This is especially true with gaming setups. You are not pushing your system at 100% CPU load very much unless you are playing CPU-bound games 24/7/365, and that simply does not happen. Your machine is going to be idling or turned off most of the time, in which case the difference in power consumption is zero (turned off) or largely dependent on which HDD/GPU/MB you picked rather than which CPU you are running, as CPU-specific idle power is pretty similar between the two. You are certainly not going to make up the $150 or more difference between an i7-4770K plus a decent LGA1150 board and an FX-8350 plus a decent AM3+ board in power costs at current electricity rates of roughly a dime per kWh, unless you are talking about decades of real-life usage. That $150 would pay for over three years of the extra 50 watts or so a similarly-equipped Piledriver system uses over a Haswell system at sustained 24/7/365 100% CPU load. You aren't going to recoup the difference in purchase price, period.

If you really cared about power usage while gaming, you'd use an IGP or a lower-end GPU rather than a 200+ watt high-end GPU, as GPUs have a much bigger effect on power consumption than CPUs. Yet I hear very, very little about GPU power consumption and efficiency, likely because nobody really cares due to the fairly small cost difference. Witness the fact that Tom's is currently running a front-page article about using two 300 W GPUs. So I would give up the "the price of power makes it cost-efficient to spend way more on an Intel CPU" argument, as it has been thoroughly debunked.
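For anyone who wants to check that payback math, here's a quick sketch. The ~50 W load delta, the dime-per-kWh rate, and the ~$150 price difference come from the paragraph above; the duty cycles are illustrative assumptions about how much of the year a machine actually sits at full CPU load:

```python
# Payback-period arithmetic for the "power costs justify a pricier CPU" claim.
# Inputs are the approximate figures from the paragraph above; duty cycles
# are assumed, since real machines idle or sit off most of the time.

PRICE_DELTA_USD = 150.0   # i7-4770K + board vs. FX-8350 + board (approx.)
EXTRA_LOAD_W = 50.0       # extra full-load draw of the FX system (approx.)
RATE_PER_KWH = 0.10       # rough electricity rate, USD per kWh

def years_to_break_even(load_duty_cycle: float) -> float:
    """Years of use before power savings cover the purchase-price delta."""
    kwh_per_year = EXTRA_LOAD_W / 1000.0 * 8760.0 * load_duty_cycle
    return PRICE_DELTA_USD / (kwh_per_year * RATE_PER_KWH)

for duty in (1.0, 0.25, 0.05):   # 24/7 full load, heavy use, typical gamer
    print(f"{duty:4.0%} load duty cycle -> {years_to_break_even(duty):6.1f} years")
```

At a sustained 100% load it already takes over three years to break even, and at anything resembling real-world gaming duty cycles the payback period stretches into decades, which is the whole point.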