Intel Core i7-3770K Review: A Small Step Up From Sandy Bridge

Status
Not open for further replies.
[citation][nom]zeratul600[/nom]Hey, this is a question for you guys that know a lot of stuff. I want to buy a computer, and my project is a scalable one, so I need a motherboard that can handle that scalability. Seeing that those IB processors have decent graphics muscle (at least better than my Mobility Radeon X600), I was wondering: is it a good idea to buy a PC that starts with a motherboard (CrossFire or SLI capable), an IB CPU, and a single 4 GB RAM module, then over time add extra RAM to reach a triple-channel setup and add discrete GPUs until I end up with a quad-CrossFire build? I plan to buy all the parts over the course of one or two years. Is this a good idea or just nonsense? I'm planning to play some games, but I also want the processing power from the GPUs to run some calculations. So what do you guys say?[/citation]

It's an okay thing to do... Basically, you want to start small and work the build into the high end as time goes by. A lot of people do something similar.

However, Ivy only supports up to two memory channels, and although the Ivy IGPs have more graphical processing power than very old integrated junk like that X600, they still aren't really powerful at all. They're still weaker than the Radeon 5550 and even the 6530D in the Llano FM1 A6 APUs.

Also, I'm not sure if any Ivy Bridge motherboards can take four video cards, so if you want quad CrossFire, you will probably need one or two dual-GPU cards (one dual-GPU AMD card can usually do CF with one or two single-GPU cards that use the same GPU).
 

zeratul600

Honorable
Mar 11, 2012
[citation][nom]blazorthon[/nom]It's an okay thing to do... Basically, you want to start up small, and work the build into the high end as time goes by. A lot of people do similar.However, Ivy only supports up to two memory channels and although the Ivy IGPs have more graphical processing power than the very old integrated junk like that x600, it still isn't really powerful at all. It's still weaker than the Radeon 5550 and even the 6530D in the Llano FM1 A6 APUs.Also, I'm not sure if any Ivy Bridge motherboards can have four video cards, so if you want quad crossfire, you will probably need to get one or two dual GPU cards (one dual GPU AMD card can usually do CF with one or two single GPU cards with the same GPU)[/citation]
Thank you for your kind and swift response. You left me wondering: are triple/quad channel memory and SLI or CrossFire only available on those "extreme" builds (SB-E)?
Another thing: which is faster, a single ATI 6990 or two 6970s in CrossFire? I'm asking because it seems easier to pick up some "cheap" 6970s over time than to get a 6990.
 
[citation][nom]zeratul600[/nom]Thank you for your kind and swift response, you left me wondering, does triple/quad channel-sli or crossfire are only allowed in those "extreme" builds (SB-E)???Another thing which one is faster a single ati 6990 or a dual crossfire with 2x 6970? im asking because its seems easier to get some "cheap" 6970 over time than getting a 6990[/citation]

Triple channel is X58 (LGA 1366). X79 (LGA 2011, which uses the SB-E CPUs) is quad channel. LGA 1155 (SB and IB), like almost all other platforms, is dual channel.
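Since we're counting channels, here's a quick back-of-envelope for what each platform's theoretical peak memory bandwidth looks like. This is a sketch assuming DDR3-1600 on every platform and the standard 64-bit channel width; real sustained bandwidth is lower.

```python
# Theoretical peak memory bandwidth by channel count,
# assuming DDR3-1600 (1600 MT/s) and a 64-bit (8-byte) bus per channel.

def peak_bandwidth_gb_s(channels: int, mt_per_s: int = 1600) -> float:
    """Peak bandwidth in GB/s for a given number of memory channels."""
    bytes_per_transfer = 8  # each 64-bit channel moves 8 bytes per transfer
    return channels * mt_per_s * 1e6 * bytes_per_transfer / 1e9

for platform, channels in [("LGA 1155 (SB/IB)", 2),
                           ("X58 (LGA 1366)", 3),
                           ("X79 (LGA 2011)", 4)]:
    print(f"{platform}: {peak_bandwidth_gb_s(channels):.1f} GB/s")
```

So dual channel tops out at 25.6 GB/s with DDR3-1600, triple at 38.4 GB/s, and quad at 51.2 GB/s, which is why the extra channels matter far more for servers than for games.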

Quad SLI and quad CrossFire are possible on LGA 1155 boards; it just usually means you need one or two dual-GPU cards, because there aren't enough PCIe x16 slots for three or four single-GPU cards.

The 6990 and two 6970s in CF are roughly identical in performance.

Radeon 7870s are far faster than the 6970 for compute and somewhat faster for gaming, despite costing the same, being newer, and using less power, so I think 6970s are really off the list of decent options.

However, I don't think any dual-GPU 7870 cards have been made, so 7870s can't do quad CrossFire... Right now just isn't the best time for that. If you are going to build it up over a year or two, then this will all probably improve. I recommend that you start off with one or two 7950s.
 

qiplayer

Distinguished
Mar 19, 2011
You know what? This processor allows PCIe 3.0. I have a Sandy Bridge 2600K and a P8P67 mobo; now I've bought two GTX 680s, and the motherboard, with its two x8 slots, bottlenecks the cards. So I'd have to change motherboard and CPU.
WTF!!!!
One year ago my setup was at the top, and now I should change it, as if it were old.
(I have a triple-monitor setup.)
 
[citation][nom]qiplayer[/nom]do you know what? this processor allows pcie 3.0. I have a sandy bridge 2600k and p8p67 mobo, now I bought 2 gtx680 and the motherboard with the 2 8x slots bottlenecks the cards. So I'd have to change motherboard, and cpu.WTF!!!!one year ago my setup was at the top now i should change it!!!!!!!!!!!like if it was old(i have a triple monitor setup)[/citation]

You probably wouldn't see much of a difference between PCIe 2.x and PCIe 3.x unless you add a third or fourth GTX 680 into the mix.
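For a sense of scale, here is a rough per-direction link bandwidth comparison. The transfer rates and encoding overheads are the published PCIe figures (Gen 2 runs at 5 GT/s with 8b/10b encoding, Gen 3 at 8 GT/s with 128b/130b); real-world throughput is lower than these theoretical ceilings.

```python
# Rough usable bandwidth for one direction of a PCIe link, to see how
# much headroom the jump from 2.0 x8 to 3.0 x16 actually buys.

def pcie_gb_s(lanes: int, gt_per_s: float, encoding_efficiency: float) -> float:
    """Per-direction link bandwidth in GB/s after encoding overhead."""
    return lanes * gt_per_s * encoding_efficiency / 8  # bits -> bytes

gen2_x8  = pcie_gb_s(8,  5.0, 8 / 10)     # PCIe 2.0, 8b/10b encoding
gen3_x16 = pcie_gb_s(16, 8.0, 128 / 130)  # PCIe 3.0, 128b/130b encoding
print(f"PCIe 2.0 x8:  {gen2_x8:.1f} GB/s")
print(f"PCIe 3.0 x16: {gen3_x16:.1f} GB/s")
```

That works out to about 4 GB/s for a 2.0 x8 slot versus roughly 15.8 GB/s for a 3.0 x16 slot, which is why the gap only starts to matter once cards are shuffling a lot of data across the bus, as in three- and four-way setups.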
 
Guest
If I had a Sandy Bridge I'd keep it, but I'm upgrading from a Core 2 Duo.

My new build will run Folding@Home 24/7, and for that I feel it's worth it to go ahead and make the jump all the way to the i7-3770K.
 
[citation][nom]jwyount[/nom]If I had a Sandy Bridge I'd keep it, but I'm upgrading from a Core 2 Duo. My new build will run Folding@Home 24/7, and for that I feel it's worth it to go ahead and make the jump all the way to the i7-3770K.[/citation]

For folding? Definitely go for Ivy. Ivy's power savings over Sandy are greatest at load, and folding should keep it loaded, so Ivy will drop the electricity bill a little compared to Sandy. I'd say that your feeling is spot-on.
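A minimal sketch of what that saving looks like for a 24/7 box, assuming a hypothetical 30 W load-power difference and a $0.12/kWh electricity rate (both illustrative numbers, not measured figures):

```python
# Estimate the yearly electricity saving from a lower load draw on a
# machine that runs at full load around the clock.

def yearly_savings(delta_watts: float, usd_per_kwh: float = 0.12) -> float:
    """Dollars saved per year for a given wattage difference at 24/7 load."""
    kwh_per_year = delta_watts / 1000 * 24 * 365  # watts -> kWh over a year
    return kwh_per_year * usd_per_kwh

print(f"${yearly_savings(30):.2f} per year")
```

With those assumptions it comes to a bit over $30 a year; not huge, but on a machine that never idles it does add up over the CPU's lifetime.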
 
[citation][nom]TheBigTroll[/nom]You can do quad SLI or CrossFire if you have a PLX chip on your board, but it's not recommended, as the CPU will bottleneck the cards.[/citation]

Wouldn't the PCIe interface be the bottleneck at that point, not the CPU itself? Without all GPUs connected through CF/SLI bridges, a lot more data has to go through the PCIe interface. Perhaps a Sandy Bridge-E system with PCIe 3.0 enabled would have the bandwidth to handle it; it would definitely be interesting to test this and see how well it does. Say you have four 7870s in two sets: each set has a bridge, but the two sets talk to each other through the PCIe interface, because only the two GPUs within each set are linked by bridge connectors.

With PCIe 3.0, maybe there is enough bandwidth. Even if two 7870s overwhelm the connection, maybe two 7850s won't, and even if two 7850s do, if those 768-core Pitcairn GPUs really go into 7830s like some people think they might, then maybe those hypothetical 7830s won't. Am I mistaken in thinking that the PCIe interface becomes the bottleneck when the bridges don't connect all of the GPUs?
 
[citation][nom]TheBigTroll[/nom]I was referencing the PLX chip on the Asus P8Z77-WS board. It supports 4-way SLI on PCIe 3.0, but then the bottleneck will be the CPU.[/citation]

How could an overclocked i5 or i7 be the bottleneck? They game almost exactly as well as the six-core i7s do. If an SB-E would be enough, then so would a quad-core i5 or i7 at a somewhat higher frequency.
 

army_ant7

Distinguished
May 31, 2009
I still wish that Intel offered a cheaper or more powerful CPU without the built-in GPU. It seems like a big waste of time, work, and die space (and maybe even money), especially now that the GPU portion is so big. But maybe from a business/profit point of view, it really is better for Intel as a company to have GPUs built in regardless.

I'm interested in hearing anyone else's thoughts on this.
 
I don't consider that a valid argument. Ultrabooks aren't marketed to professionals as a replacement for power laptops. They're meant to be a bridge between the casual laptop and the tablet, something that's quick and easy to grab to check email or browse the web, but also powerful enough for basic content creation. I don't know anyone doing serious prosumer level creation, or higher, that uses a laptop as a primary rig without docking stations and large external monitors.
 
[citation][nom]army_ant7[/nom]I still do wish that Intel offered a cheaper or more powerful CPU without the built-in GPU. It seems like a big waste of time, work, and die space (and maybe even money) for, esp. now, such a big GPU portion. But maybe from the business/profit-spending point of view, it really is better for Intel as a company to have GPU's built-in irregardless.I'm interested in hearing from anyone else's thoughts on this.[/citation]

The GPU die area does provide a good amount of extra area for the heat to spread through before trying to get through that crap paste. Without it, Ivy might be a little hotter. Besides, a lot of (probably most) computers will use the IGPs, so it'd probably cost Intel more to make more models without it than to just sell them all with it.
 

army_ant7

Distinguished
May 31, 2009
[citation][nom]blazorthon[/nom]The GPU die area does provide a good amount of extra area for the heat to spread through before trying to get through that crap paste. Without it, Ivy might be a little hotter. Besides, a lot of (probably most) computers will use the IGPs, so it'd probably cost Intel more to make more models without it than to just sell them all with it.[/citation]

Well, the point about the GPU die area acting as a heat spreader is interesting, though they could probably use the space it takes up for a more efficient heat spreader if that were its only purpose.
The thought about most PCs using IGPs is true, and I've thought about that, but I was thinking more of having separate models: some with IGPs, others without. That would be similar, I guess, to Sandy Bridge and Sandy Bridge-E, though having a different socket isn't really that good in my opinion. But yeah, it may cost them more, so it might be more economical for them this way.
Anyway, I really appreciate your thoughts, blazorthon. :) I thought no one would take notice.
 

SuperVeloce

Distinguished
Aug 20, 2011
I completely agree with blazorthon. CPUs without an IGP would not have a larger or smaller CPU die part, and it would not be any better for customers. All desktop versions of Sandy have the IGP in them (whether it's working or not is another story). Sandy-E is basically a Xeon part, so it isn't really unique, and it sells in low numbers.
 
[citation][nom]SuperVeloce[/nom]I completely agree with blazorthon. CPUs without IGP would not have any larger or smaller CPU die part. It would not be any better for customers. All desktop versions of Sandy have IGP in it (working or not is another story). But Sandy-E is basically a Xeon part. So Sandy-E is not unique to anything else. It sells in to low numbers.[/citation]

Well, if the die were made without the IGP, it could cut the die size of Ivy almost in half. My point was that having it there, in use or not, provides more area for the heat to spread through before it has to pass through the paste and into the IHS. Without the IGP it would cost less to manufacture, but how much less, I don't know; manufacturing costs are probably pretty darn low already, so it probably doesn't make much difference. It would probably run even hotter than it already does, despite the lower costs, for the reason I stated, so it might not be a favorable thing to do, especially because most people use the IGP in one way or another (even if it's just for Quick Sync or something like that instead of as the primary GPU for the system).
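To see why a smaller die cuts manufacturing cost, here is a rough dies-per-wafer sketch. The ~160 mm² figure is the commonly cited Ivy Bridge quad-core die size; the ~90 mm² "no IGP" figure is a hypothetical based on the "almost in half" guess, and the formula is the classic first-order approximation that ignores defects and yield.

```python
# More candidate dies fit on a wafer as the die shrinks, which is the
# main reason a smaller die is cheaper to manufacture.
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """First-order dies-per-wafer estimate (no defect/yield modeling)."""
    r = wafer_diameter_mm / 2
    # Usable wafer area divided by die area, minus an edge-loss correction.
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(dies_per_wafer(160))  # full die, IGP included
print(dies_per_wafer(90))   # hypothetical IGP-less die
```

Under these assumptions the IGP-less die would yield roughly 80% more candidate dies per 300 mm wafer, though as noted above, per-die cost is probably already low enough that this matters less than it looks.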
 

SuperVeloce

Distinguished
Aug 20, 2011
It is more economical to have only one version of the die rather than two (that's what Intel says, anyway) and to sell chips according to which parts of a specific die work. The CPU part would be the same size, but the complete die would be "whole Sandy minus IGP". Now, for heat dissipation, here's the catch: if you wanted more area to dissipate heat, they would need to increase the spacing between transistors. I think it's about 90nm on Ivy (and 22nm is the size of the transistor itself). So the die would then be quite a bit larger than the CPU part of the Sandy/Ivy die.
 

army_ant7

Distinguished
May 31, 2009
[citation][nom]SuperVeloce[/nom]I completely agree with blazorthon. CPUs without IGP would not have any larger or smaller CPU die part. It would not be any better for customers. All desktop versions of Sandy have IGP in it (working or not is another story). But Sandy-E is basically a Xeon part. So Sandy-E is not unique to anything else. It sells in to low numbers.

It is more economical to have only one version of the die, rather than two (that's what Intel is saying anyway) and to sell them according to the working parts of the specifical die. CPU part would be the same size, but complete die size would be "wholeSandy minus IGP". Now, for the heat dissipation, here is the catch: If you wanted to have more space to dissipate, they would need to resize a space between two transistors. I think it's about 90nm on Ivy (and 22nm is the size of the transistor itself). So the die would then be quite larger, than cpu part of the Sandy-Ivy die.[/citation]

But couldn't they utilize the space freed up from the IGP to add more cores, maybe more features (not sure about this), or, like you said, use it for more heat dissipation? But yeah, it might be just because of that reason you said Intel gave, which is pretty much what we thought as well.
What you said about the spacing being 90nm in Ivy Bridge, or even just the fact that there is space between transistors, intrigued me; I never thought about the spaces between them. I'm wondering now what can be found in those spaces: air, a vacuum, a thermally (but not electrically) conductive material?
Also, in what way, aside from having server/workstation-oriented functions, are Sandy Bridge-E and Xeon "basically the same"? I know the benchmarks showed SB-E behaving like SB in terms of performance (single-core at the same clock), so that makes me think they pretty much share the same architecture. I have a feeling the current Xeons are somewhat based on SB as well? Sorry, I'm not familiar with the Xeons aside from what they're usually used for.
Sorry for asking so many questions. I'm really intrigued by this stuff, and I bet most if not all of you are. I want to become a computer engineer in the future. Hehehe...
 
[citation][nom]army_ant7[/nom]But couldn't they utilize the space freed up from the IGP to add more cores, add maybe more features (not sure about this), or, like you said, use it for more heat dissipation? But yeah, it might be just 'coz of that reason you said Intel said, which is pretty much what we thought as well.What you said about the space being 90nm in Ivy Bridge or even just the fact that there's a space in between transistors intrigued me. It's just that I never thought about the spaces between them. I'm wondering now what can be found in the spaces between those transistors, air, a vacuum, a heat (but not electrical, conductive material?Also, in what way, aside from having server/workstation-oriented functions, are Sandy Bridge-E and Xeon "basically the same"? I know that the benchmarks showed how SB-E behaved like SB in terms of performance (single-core at the same clock) so that makes me think they pretty much have the same architecture. I have a feeling that the current Xeons are somewhat based on SB as well? Sorry, I'm not familiar with the Xeons aside from what they're usually used for.Sorry for asking so many questions. I'm really intrigued by these stuff, and I bet most if not all of you are. I want to become a Computer Engineer in the future. Hehehe...[/citation]

Sandy Bridge-E and Sandy Bridge use the same architecture for the CPU cores. The differences are in the L3 cache, the number of cores, and features (PCIe is on-die, double the on-die memory controllers, etc.). That is why they have nearly identical performance per core at the same frequency (Sandy Bridge-E's increased cache and memory bandwidth raise performance per core per Hz slightly, but not much). Sandy Bridge-E is basically a cut-down Sandy Bridge-EP Xeon CPU.

[citation][nom]SuperVeloce[/nom]It is more economical to have only one version of the die, rather than two (that's what Intel is saying anyway) and to sell them according to the working parts of the specifical die. CPU part would be the same size, but complete die size would be "wholeSandy minus IGP". Now, for the heat dissipation, here is the catch: If you wanted to have more space to dissipate, they would need to resize a space between two transistors. I think it's about 90nm on Ivy (and 22nm is the size of the transistor itself). So the die would then be quite larger, than cpu part of the Sandy-Ivy die.[/citation]

22nm is the space between the transistors. Different types of transistors come in different sizes, unlike the spacing between them, which is a specific figure, and a transistor's size is not always related to that spacing. Large transistors benefit far less from a smaller process node than small transistors do because of this.
 

army_ant7

Distinguished
May 31, 2009
[citation][nom]blazorthon[/nom]Sandy Bridge E and Sandy Bridge use the same architecture for the CPU cores. The difference is in the L3 cache, amount of cores, and features (PCIe is on-die, double the on-die memory controllers, etc). That is why they have nearly identical performance per core at the same frequency (Sandy Bridge E's increased cache and memory bandwidth increases performance per core per Hz slightly, but not much). Sandy Bridge E is basically a cut down Sandy Bridge EP Xeon CPU.

22nm is the space between the transistors. Different types of transistors have different sizes, not specific sizes like the space between them and their size is not always related to the space between them. Large transistors have far less benefit from a smaller process node than smaller transistors because of this.[/citation]

Oh! I think I have read about SB-EP before. So on SB the PCIe is on the PCH, but on SB-E it's on the CPU die? I think I've read otherwise, but I may be wrong. I knew those other facts, but thanks anyway. Just a clarification: L3, or at least its size, doesn't always provide benefits (or at least not benefits noticeable in benchmarks), depending on the kind of workload, according to what I've read, but I guess it generally helps increase performance per Hz per core.

About it being 22nm: so what you're saying is that the space between the transistors, and not the transistors themselves, shrunk to 22nm? That sounds somewhat reasonable if there are different kinds of transistors with different sizes, since they can't all be 22nm, but I feel skeptical, since I've always thought/learned that it is the actual transistor size.
 
[citation][nom]army_ant7[/nom]Oh! I think I have read about SB-EP before. So on SB the PCI-E is on the PCH but on SB-E it's on the CPU die? I think I've read otherwise, but I may be wrong. I knew those other facts but thanks still. Jus a clarification, L3, or at least its size, doesn't always provide benefits (or maybe just not that noticeable in benchmarks) depending on the kind of workload according to what I've read, but I guess it helps increase performance per Hz per core generally.About it being 22nm, so what you're saying is that the space between the transistors and not the transistors themselves shrunk to 22nm? It sounds somewhat reasonable to me if there are different kinds of transistors with different sizes since they can't be all 22nm, but I feel skeptical since I've always thought/learned that it is the actual transistor size.[/citation]

Well, I could be wrong about the PCIe lanes being on the CPU chip, but I'm pretty sure about that. As for whether 22nm is the transistor size or the space between transistors, that I know for a fact to be the space between the transistors.

The increased L3 can help greatly in some tasks and make next to no difference in others. It all depends on the workload. High amounts of L3 generally help server and workstation workloads the most. That's why it's generally the Xeons and Opterons that have the most cache.
 