Intel's Ivy Bridge CPU Die Layout Estimated

Status
Not open for further replies.

ojas

Distinguished
Feb 25, 2011
I wonder if they'll keep the die size the same as Sandy Bridge. Wouldn't that let them stuff more things like cache and IGP shaders (or whatever IGPs have) into the same space?
 

hardcore_gamer

Distinguished
Mar 27, 2010
A third of the die area is taken up by the crappy GPU cores. They should have used it for additional logic or cache to speed up the CPU, at least for the K-series CPUs, since the people who buy those use a discrete GPU anyway.
 
Guest
@hardcore_gamer
Ivy Bridge and the rest of LGA1155 are meant for the low-to-mid-range market segments, where people want the built-in graphics. Sure, most people who buy the K CPUs probably won't ever use the GPU, but it would probably be too expensive to change the layout just for the K models, not to mention that it would be a completely different design, which kinda goes against the tick-tock model.

If you want to use discrete GPUs, you should go with the SNB-E / Ivy Bridge-E LGA2011 series, at least following Intel's logic :)
 

killabanks

Distinguished
Oct 27, 2011
[citation][nom]memadmax[/nom]The tick tock model is why intel will always be the innovator, and AMD the follower...[/citation]
More like billions of $$ means Intel will always be ahead of AMD.
 

fuzznarf

Distinguished
Sep 22, 2011
[citation][nom]hardcore_gamer[/nom]1/3 rd of the die area is taken by the crappy GPU cores. They should have used it for additional logic or cache to speed up the CPU, atleast for the K series CPUs since the people who buy them use a discrete GPU anyway.[/citation]
Yeah, you're right. Intel probably has no idea what they are doing....
 

bartholomew

Distinguished
Oct 22, 2011
[citation][nom]killabanks[/nom]more like billions of $$ means intel will always be ahead of amd[/citation]
:D & Only the ones with lots of $$ can afford their best CPUs :p
 

builder4

Distinguished
May 7, 2011
[citation][nom]hardcore_gamer[/nom]1/3 rd of the die area is taken by the crappy GPU cores. They should have used it for additional logic or cache to speed up the CPU, atleast for the K series CPUs since the people who buy them use a discrete GPU anyway.[/citation]

They don't want to completely smash AMD by speeding up the CPU. Intel relies on AMD's continued existence to keep the government from splitting them up on monopoly grounds. As long as AMD is non-competitive, Intel won't increase the CPU performance of its mainstream chips.

They do, however, offer a larger die with more CPU cores replacing the GPU on the 2011 platform, at 3x the price.
 

mindless728

Splendid
Jul 15, 2008


Not everybody buys the K series for gaming. I bought one for a Minecraft server just to OC it to a mild 4 GHz (don't need extreme), as it's a lot cheaper than buying server-grade hardware and also lets me OC further if need be.
 

fuzznarf

Distinguished
Sep 22, 2011
For the mentally inept who downvoted my previous post: the 'crappy' GPU is on the chip because it isn't just used for gaming performance.
Intel has integrated the two for a common purpose... but I guess the internet CPU experts on this forum know better than Intel.

"Zhou's solution was to have the CPU do the leg work by determining what data the GPU needs and then going and retrieving it from off-chip main memory. This in turn leaves the GPU free to focus on executing the functions in question. The result of this collaboration is that the process takes less time and simulations have found that the new approach yields an average improved fused processor performance of 21.4 percent."

Get a clue. http://www.tomshardware.com/news/CPU-GPU-APU-fused-processor-performance-boost,14653.html
 

tomfreak

Distinguished
May 18, 2011
[citation][nom]ojas[/nom]I wonder if they'd keep the die size the same as sandy bridge, wouldn't that let them stuff more things like cache and IGP shaders (or whatever IGPs have) in the same space?[/citation]If you take a look at the measurements, they certainly have enough room for two more cores.

Ivy Bridge would have been hexa-core if Bulldozer had SB-E performance. I want a hexa-core, but not the expensive Socket 2011 platform. If only AMD's CPU engineers were as good as their GPU ones.
 

EDVINASM

Distinguished
Aug 23, 2011
Nothing fancy in Ivy Bridge's design, IMO. I was hoping for a huge increase in rendering and encoding performance, but if the rumours are true, a mere 20% isn't worth it. Better to get myself a 2600K and overclock it to bits :) RIP Ivy. I'll have to wait for the next architecture to show me a 2x increase.
 

fuzznarf

Distinguished
Sep 22, 2011
[citation][nom]edvinasm[/nom]Nothing fancy in Ivy Bridge design IMO. I was hoping for huge increase in rendering and encoding performance but if the rumors are true mere 20% is not worth it. Better get myself 2600k and overclock it to bits RIP Ivy. Will have to wait for next architecture to show me some 2x increase.[/citation]
This will only be the first production cycle of Ivy. 3D gates are a phenomenal advancement, plus much lower power consumption and cooler running. And this is a ~20% increase over the 3900-series Sandys, too...
 

kartu

Distinguished
Mar 3, 2009
[citation][nom]memadmax[/nom]The tick tock model is why intel will always be the innovator, and AMD the follower...[/citation]
Right. And they also know how to make offers no partner can refuse. The P4 Prescott, which was:

1) slower
2) more expensive
3) AND consumed much more power

outsold Athlons like 3 to 1, or 4 to 1?
 

vittau

Distinguished
Oct 11, 2010
[citation][nom]hardcore_gamer[/nom]1/3 rd of the die area is taken by the crappy GPU cores. They should have used it for additional logic or cache to speed up the CPU, atleast for the K series CPUs since the people who buy them use a discrete GPU anyway.[/citation]
More cache doesn't necessarily mean better performance; in fact, it can even cause WORSE performance. You realize the processor has to search the cache when it needs data, right? There are many different lookup algorithms, but to sum it up: the larger the cache, the longer the lookup takes.
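To put a toy number on that trade-off, a common back-of-the-envelope model is average memory access time (AMAT): hit time plus miss rate times miss penalty. This is a minimal sketch in Python; all the cycle counts and miss rates below are made-up illustrative values, not Ivy Bridge specs.

```python
def amat(hit_cycles, miss_rate, miss_penalty_cycles):
    """Average memory access time in cycles: the hit time plus the
    expected cost of going to the next level on a miss."""
    return hit_cycles + miss_rate * miss_penalty_cycles

# Hypothetical small, fast cache: 3-cycle hit, 10% miss rate,
# 100-cycle penalty to fetch from the next level.
small = amat(3, 0.10, 100)   # 3 + 10 = 13 cycles on average

# Hypothetical larger, slower cache: better hit rate (6% misses),
# but the bigger lookup costs 8 cycles on every hit.
large = amat(8, 0.06, 100)   # 8 + 6 = 14 cycles on average

print(small, large)
```

With these assumed numbers the larger cache is slower on average, which is exactly the point: extra capacity only wins if the additional hits outweigh the longer lookup.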
 

belardo

Splendid
Nov 23, 2008
@hardcore_gamer: read what others have posted, especially fuzznarf. The GPU can be used by software for encoding video as well as decoding it. So what might take 4 cores a minute to do could, with the GPU helping out, take only 30 seconds (a guess example).
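That back-of-the-envelope guess is really just Amdahl's law: only the fraction of the encode the GPU can take over gets sped up. A minimal sketch in Python; the offload fraction (0.9) and the GPU speedup factor (2.25x) are made-up numbers chosen to reproduce the 60 s to 30 s guess, not measurements of any real encoder.

```python
def offload_time(total_s, offload_frac, gpu_speedup):
    """Wall-clock time when a fraction of the work is offloaded to a
    faster unit (Amdahl's law): the CPU-only part runs as before,
    and the offloaded part is divided by the speedup factor."""
    return total_s * (1 - offload_frac) + total_s * offload_frac / gpu_speedup

# Hypothetical: a 60 s encode, 90% of the work is GPU-friendly,
# and the GPU does that part 2.25x faster than the CPU cores.
print(offload_time(60, 0.9, 2.25))  # 6 + 24 = 30 seconds
```

The non-offloadable 10% puts a floor under the total time no matter how fast the GPU part gets, which is why such speedups saturate.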

@memadmax: "The tick tock model is why intel will always be the innovator, and AMD the follower..."
AMD was the leader during the P4 vs. Athlon XP/64/X2 era in every technical way. Intel's huge market share kept AMD out of many markets (Dell), for which Intel was sued and lost.

When Intel did this, it kept AMD from making the profits that fund R&D... Intel illegally kept AMD out of the market. By the time Core 2 came out, AMD was approaching 30% market share; go into office-supply stores and 4 out of 5 desktops were AMD.

Core 2 knocked the hell out of AMD, and with its lower price it kicked AMD in the balls... over and over.

AMD's choice to do P4-type tech in its FX chips was stupid... and is why many AMDers have gone to Intel. AMD still makes good products... just forget the high end.

@Kartu: Intel outsold AMD 4 to 1. Currently AMD still retains a 20+% share of the market.
That's actually PRETTY damn good considering:
A) AMD doesn't have a top performer
B) AMD doesn't do ANY advertising... when was the last AMD TV commercial? I think never.
C) Intel is very active in advertising. It's easy to catch 1-2 Intel ads a day...
 

hardcore_gamer

Distinguished
Mar 27, 2010
[citation][nom]belardo[/nom]@hardcore_gamer: read what others have posted, especially fuzznarf. The GPU can be used by software for encoding video as well as decode.[/citation]

QuickSync is separate fixed-function hardware for video encoding/decoding; it doesn't use the GPU cores in the processor.
 

EDVINASM

Distinguished
Aug 23, 2011
[citation][nom]fuzznarf[/nom]This will only be the first production cycle of ivy. 3D gates are a phenomenal advancement. Plus much lower power consumpton and cooler running. and this is a ~20% increase over the 3900 series Sandys also...[/citation]

Strongly disagree. According to benchmarks, in pure CPU performance Ivy Bridge should lead the 2600K by around 15-20%. Obviously there might be much stronger CPUs, but considering the price point (and Ivy's price isn't going to undercut Sandy Bridge, given all the architecture issues and delays), Sandy Bridge is the clear winner, to me personally anyway. Until 2nd-gen Ivy Bridge or something way faster comes out, I'm sticking with LGA1155 and the 2600K.
 
Guest
So much misinformation on this site, as always. The GPU is hardly 'crappy'; what an insane amount of stupid went into that statement. In fact, the GPU is quite capable: it performs well in DX11 and is 3x faster than Sandy Bridge's in 3dMax. For the rest of the world that doesn't do hardcore gaming (this world does exist, and in abundance), it is a completely capable GPU that will handle all of their needs.

Secondly, they will make a processor without the GPU in the 4th quarter, like they always do. You can wait for that one if that's what you want, but by then you will thirst for Haswell and skip it (while subsequently calling its graphics crappy when it comes out).

Thirdly, Ivy Bridge is not delayed due to production issues with the "22 nm" process. Do you have ANY reliable source for that? Intel said it is in fact shipping Ivy Bridge in April, but will not ship volume 2-core CPUs until this summer. Apparently you can still get the 4+1 and 4+2 models in April; it's the cheaper models that won't be available. The likely culprit is the overstock of Sandy Bridge CPUs. They would be killing their OEMs by not letting them sell off their cheaper stocked CPUs; the OEMs would have to take a nasty loss just to clear inventory. That is not good for the bottom line of Intel's best customers. It is not AMD's fault for not competing, or Intel's fault for fab issues. It is the economy that is hurting.

Lastly, QuickSync exists in the GPU. It is separate logic, but without the GPU, you do not have QuickSync.

</thread>
 
Guest
Edvinasm: 20% faster is A LOT. Also, the GPU is up to 3x faster, and it uses less power (a lot less). It is a big upgrade over the 2600K, man. If you can get a 2600K really cheap, then yes, it is a winner. I don't see how you could go wrong with either. I'm personally thinking Sandy Bridge for the desktop and Ivy Bridge for the laptop, for increased battery life and graphics performance.
 

EDVINASM

Distinguished
Aug 23, 2011
[citation][nom]iAmRobot[/nom]Edvinasm. 20% faster is A LOT. Also, the GPU is up to 3x faster. It also utilizes less power ( a lot less ). It is a big upgrade to the 2600k man. If you can get 2600k really cheap, then yes, it is a winner. I don't see how you could go wrong with either. I personally am thinking Sandy Bridge for desktop and IvyBridge for laptop for increased battery life and graphics performance.[/citation]

It's a lot if you are a gamer on a budget or looking for a decent SOC laptop. However, I need a CPU for rendering at a reasonable price. If my render takes 60 min on a stock Sandy Bridge and 42 min on a stock Ivy Bridge, tbh I don't care; the wait isn't worth it. Plus, with an OC on my side I already have the raw power I need to encode my videos in
 