LGA 1156 Core i5

Page 4 - Tom's Hardware community discussion
It's just possible Intel could be marketing here, big surprise there. Going from 65 nm to 45 nm with, say, the same amount of silicon used for the IGP would roughly double the transistor count, which in theory should double, or possibly even more than double, its abilities. Claiming that it's the new interconnect that brings these speedups could simply be marketing, and improvements in the IGP could be what we're truly seeing. There'd be no way to disprove it unless they decide to take it off-die.
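The die-shrink arithmetic above can be sanity-checked: under ideal scaling, transistor density goes with the inverse square of the feature size, so 65 nm to 45 nm gives about (65/45)² ≈ 2.1× the transistors in the same die area. A minimal sketch (assuming ideal scaling, which real processes only approximate):

```python
# Rough transistor-density scaling across a process shrink.
# Assumes ideal scaling (density ~ 1 / feature_size^2); real nodes
# only approximate this.

def density_scaling(old_nm: float, new_nm: float) -> float:
    """Ideal transistor-count multiplier for the same die area."""
    return (old_nm / new_nm) ** 2

factor = density_scaling(65, 45)
print(f"65 nm -> 45 nm: ~{factor:.2f}x transistors in the same area")
# -> roughly 2.09x, i.e. the "double the transistor count" claim above
```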
 
The i5 arch won't have it due to costs, so take what I said from there. QPI doesn't enter into it; it's more about the on-die IGP only.
I'd remind you, having two slots for multi-card setups will reduce your PCIe to x8 only, which by next gen will limit your high-end cards on i5.
 
Everything I've heard is that they are going to be the same as Core i7, just a little bit less hardcore, cheaper, and more mainstream. They are supposed to use the LGA 1366 socket so that people can upgrade later to a Core i7.
 


Lol, everything you heard was wrong. i5 uses LGA 1156
 


Generally Intel fabs their northbridges on a process that is one node behind their CPUs and southbridges are two nodes behind. I know the Intel 4-series northbridges are 65 nm, which makes sense as they debuted with 45 nm C2Ds. The 965 and 3-series units are 90 nm (debuted with 65 nm C2Ds) and the 945, 955, 975 are 130 nm (debuted with 90 nm Pentium Ds.) I don't have hard data on what the individual southbridges are, but I would predict ICH9 and ICH10 are 90 nm units. For reference, AMD's 700-series northbridges are 55 nm and their SB700 and 750 are 130 nm.



You have two competing factors. One is the cost of making the dies, where smaller dies = less cost, and the other is the cost of the fab equipment, where smaller transistors = more cost. So basically it boils down to smaller processes being less expensive for making large, fast, expensive components like CPUs and GPUs, while larger processes are less expensive for making small, slow, cheap parts like southbridges, DSPs, peripheral controller chips, etc.

The comment about GPUs being less expensive to make at 40 nm than 55 nm also brings in another factor. We were talking about Intel before, and Intel fabs pretty much everything they make themselves, so they want to get more life out of already-paid-for older equipment. AMD and NVIDIA were the ones talking about 55 nm -> 40 nm, and they do NOT make any GPUs in-house; NVIDIA doesn't even have any fabs at all. They contract out to TSMC or other makers that probably have plenty of work making cell phone chips and such to keep their older fab equipment busy, but comparatively little else running on the high-dollar state-of-the-art fab lines. Plus, GPUs fall in that big, expensive, fast component category, and the reduction in die area more than offsets the higher price to fab at the smaller transistor size.
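That last point (die-area savings beating the higher wafer price) can be put into a toy cost-per-die model. All the numbers below are made-up illustrative assumptions, not real foundry prices, and the yield model ignores edge loss and defects:

```python
# Toy cost-per-die model for the trade-off described above.
# Wafer prices and die areas are hypothetical, for illustration only.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Crude candidate-die count: wafer area / die area (no edge loss, no defects)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2)

def cost_per_die(wafer_cost: float, die_area_mm2: float) -> float:
    """Cost per die on a 300 mm wafer."""
    return wafer_cost / dies_per_wafer(300, die_area_mm2)

# Hypothetical shrink: the newer node has a pricier wafer but a smaller die.
old = cost_per_die(wafer_cost=3000, die_area_mm2=300)  # big GPU, older node
new = cost_per_die(wafer_cost=5000, die_area_mm2=160)  # same GPU, shrunk
print(f"old node: ${old:.2f}/die, new node: ${new:.2f}/die")
```

With these assumed figures the shrunk die comes out cheaper despite the more expensive wafer, which is the economics the post is describing for big GPUs; for a tiny southbridge die the wafer-price difference dominates instead.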
 


Ummmmm, instead of refitting their plants for billions of dollars, they simply use old "state of the art" tech. Like now, for example (as I said): 45 nm is current, but chipsets can be produced fine on older processes - 90 nm, 65 nm, etc.

As for cooling: the G45 chipset (90 nm?) with the memory controller, PCIe, FSB, dual DDR2 channels, and the IGP does ~30 W of heat TOPS (with all that). The IGP itself may only have a 10 W TDP, and even if it did 50 W the HSF could still handle it, no worries - they're designed for ~100 W and these are basic CPUs below 65 W, etc.

It's not as if it's another four processor cores (easy to cool as well, etc.).



Oh, btw, the integrated PCIe controller - that thing just may give this chip/platform another hard boost 😀 Lower latencies for PCIe (aka your video card, aka your games!) - data gets there quicker, with less chipset delay/crosstalk, etc.
 


The 4650 has, what, 128-bit DEDICATED memory, lots of silicon going into the GPU, and proper drivers (more than likely 100 generations of refinement, at least) - Intel's has minimal die size, shared memory (limited bandwidth), and Intel drivers. I'd say NVIDIA 6200-series performance, tops.
 
I'm not really familiar with this Larrabee stuff. Is it going to be desktop or workstation? Are we going to see Blue entering the Red vs Green battle?
 
A 4xxx-series IGP would most likely come in under 200 mm² at 45 nm, or small enough.
LRB so far appears to be chugging along, and its strength may not be seen with the first release. It'll do both: rendering and number crunching. It looks to do well, as there's no latency hiding (other than in one area), and it scales extremely well, which is why the 32 nm node will make it a success; at 45 nm it'll struggle to compete at the high end anyway. It's said to need at least 64 cores to truly show its stuff, and a 64-core part at 45 nm is, I believe, too high in power usage for what Intel is trying to do, so 32 nm looks like the true answer.
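The 45 nm vs 32 nm power argument can be framed with the classic dynamic-power relation, P ≈ N · C · V² · f: a node shrink cuts switched capacitance and allows a lower voltage, and the voltage term counts twice. The scaling factors below are purely illustrative assumptions, not Intel data:

```python
# Why a node shrink helps a many-core design: dynamic power scales as
# P ~ N * C * V^2 * f. The 0.7x capacitance and 0.9x voltage factors
# are hypothetical round numbers for illustration, not real node data.

def dynamic_power(n_cores: int, cap_rel: float, volt_rel: float, freq_ghz: float) -> float:
    """Relative dynamic power for n_cores (arbitrary units)."""
    return n_cores * cap_rel * volt_rel ** 2 * freq_ghz

# Same hypothetical 64-core chip at the same clock on two nodes.
p45 = dynamic_power(64, cap_rel=1.0, volt_rel=1.0, freq_ghz=2.0)
p32 = dynamic_power(64, cap_rel=0.7, volt_rel=0.9, freq_ghz=2.0)
print(f"32 nm draws ~{p32 / p45:.0%} of the 45 nm power at the same clock")
```

Under these assumed factors the shrunk part draws a bit over half the power, which is the kind of margin that could make a 64-core design viable at 32 nm where it wasn't at 45 nm.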
 


Thanks, I didn't remember the details, but I remembered reading that it wasn't just Intel doing this - just about all fabs that have been around long enough to have older equipment move their less-demanding, cheap production to the older equipment when going to a new node. It only makes sense from an economic viewpoint.
 


Yeah, but it will be like the current Socket 478 for mobile - just an older socket used for mobile purposes. I mean on the desktop, 775 is dead.



They have had them for some time. But they consume a lot of power TBH and are only good for a gaming laptop that would need to be plugged in to really game.

And as said, I am sure that if you plug in a discrete GPU that the IGP can be completely turned off to cut power and heat. It would only make sense.
 






Problem is we won't know till later because, well, we don't know what Intel is doing. Are they doing high-frequency cores? We don't know.

Hell, they could be using what they learned from the Terascale CPU they made: 80 cores, all at 2.5 GHz, using 62 W under load at 45 nm (if I remember correctly). If so, then 64 cores at 45 nm wouldn't be that impossible.
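The Terascale comparison above implies a simple per-core budget: 80 simple cores at ~62 W under load is well under 1 W per core, so 64 such cores would land around 50 W before caches and uncore are added. A quick check using the post's own (recollected) figures:

```python
# Per-core power implied by the Terascale numbers quoted above.
# 80 cores / 62 W are the poster's recollected figures, not verified specs,
# and this ignores caches, interconnect, and other uncore power.
terascale_cores = 80
terascale_load_w = 62

w_per_core = terascale_load_w / terascale_cores
est_64_core_w = 64 * w_per_core
print(f"~{w_per_core:.2f} W per core; 64 cores ~= {est_64_core_w:.0f} W")
```

Larrabee's cores would be far fatter than Terascale's tiny FP tiles, so this is a lower bound, but it shows why the 64-core-at-45-nm idea isn't absurd on its face.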
 
If I remember correctly, even the cores on LRB have Hyper-Threading, so regardless of core count, if there is more IPC going down, plus some hardcore memory controller and proper cache sync between all the cores (what Intel has been going on about, and what also seems to be giving i7 an 80% boost on the server front), it could be interesting.

Wouldn't ATI and NVIDIA be pissed off if an Intel first attempt beat them all, even if it did chug power and pump out heat like a Prescott 😀