Intel's next-gen Sandy Bridge summarised


ares1214

Splendid
I have a feeling it does help with stock performance, consolidating everything. I can't see why, but I honestly can't think of another reason to do it. I'm sure Intel knew it would kill overclocking for all intents and purposes, so that's the only reason left.
 

4745454b

Titan
Moderator
Why did Intel do it? Stated reason or real reason? I'm not sure they gave us a stated reason, and the real reason remains unknown. They probably did it to prevent OCing, but I get the feeling there was a different reason behind it.
 


I STRONGLY disagree. CPUs, as Intel found out with Larrabee, are not good architectures for rendering. Rasterization, like most GPU workloads, favors an architecture with lots of weak cores over a few powerful ones.

Further, ray tracing is the future, which favors lots of weaker cores even more (albeit far more powerful ones than we have now...). In short: the GPU isn't going anywhere.
 


I don't think it makes much of a difference - most oc'ers (which make up a very tiny part of the market compared to Joe Blow buying some PC at Walmart or Best Buy) will go for the unlocked versions, assuming they are just a few bucks more than the locked ones.

IIRC most clock generators are pretty simple dividers or multipliers connected to a master PLL-type clock generator, and they don't take up much die space. If you have to run multiple clock lines to a part of the chip, however, that can take up space (either on-plane or in another connection layer). My bet is that the various generators are there, just disabled by Intel on the non-oc'able versions. Maybe some OEM will discover a way to re-enable them, sorta like unlocking extra cores on an X2 or X3 :D.
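As a toy illustration of the divider/multiplier scheme described above, here is a minimal sketch in Python. All domain names and ratios are illustrative guesses, not Intel's actual clock tree:

```python
# Toy model of clocks derived from one master PLL via integer
# multipliers/dividers. Names and ratios are illustrative only.

BASE_CLOCK_MHZ = 100.0  # master reference ("BCLK" on Sandy Bridge)

# domain: (multiplier, divider) applied to the base clock
DOMAIN_RATIOS = {
    "cpu_core": (34, 1),  # 34x -> 3400 MHz
    "dmi":      (1, 1),   # 100 MHz
    "pcie":     (1, 1),   # 100 MHz
    "sata":     (3, 2),   # 150 MHz (made-up ratio for illustration)
}

def derived_clock(base_mhz, domain):
    """Frequency a domain sees when fed from the single master PLL."""
    mul, div = DOMAIN_RATIOS[domain]
    return base_mhz * mul / div

for name in DOMAIN_RATIOS:
    print(f"{name}: {derived_clock(BASE_CLOCK_MHZ, name):.1f} MHz")
```

The point being: a divider network like this is cheap in die area, which is why the interesting question is whether separate generators exist but are fused off.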
 

4745454b

Titan
Moderator
Perhaps, but I doubt it. The OCing chips don't allow OCing by changing the bus, only by raising the CPU multiplier. The bus has to stay at 100 for everything else to work. I honestly get the feeling that Intel is telling us the truth on this: there is only one PLL on the chip, and it runs everything. This is almost a bit like the early days of the P4, where you couldn't OC the bus much past 220MHz or so (starting from 200); 217MHz was a common stopping point. Some manufacturers found a way past that problem, but it was a different problem.
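The "bus has to stay at 100" point can be sketched numerically. The tolerance figure and the clock ratios below are assumptions for illustration, not Intel's published margins:

```python
# Why multiplier OC works but bus OC doesn't when one PLL feeds
# everything: raising the base clock drags every derived clock
# out of spec along with the CPU. Numbers are illustrative.

def clocks(bclk_mhz, cpu_mult):
    return {
        "cpu_mhz":  bclk_mhz * cpu_mult,
        "pcie_mhz": bclk_mhz,        # spec'd at 100 MHz
        "sata_mhz": bclk_mhz * 1.5,  # made-up derived peripheral clock
    }

PCIE_TOLERANCE = 0.05  # assume ~5% margin before I/O misbehaves

def pcie_in_spec(c):
    return abs(c["pcie_mhz"] - 100.0) / 100.0 <= PCIE_TOLERANCE

# Multiplier OC: CPU reaches 4.5 GHz, peripherals untouched.
mult_oc = clocks(100.0, 45)
# Bus OC: CPU only gains ~12% before PCIe/SATA fall over.
bclk_oc = clocks(112.0, 34)

print(mult_oc, pcie_in_spec(mult_oc))  # peripherals still in spec
print(bclk_oc, pcie_in_spec(bclk_oc))  # peripherals out of spec
```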

Has anyone seen any Intel papers on this? I'd love to know more. Pure guess on my part: is there an issue with running that many PLLs so close together, or with transistors this small?
 

archibael

Distinguished
Jun 21, 2006
334
0
18,790
While I can't speak to Sandy Bridge in particular, I will say that having multiple different clock domains coming into a chip throws a significant monkey wrench into simulation (and, hence, pretty much all pre-silicon debug efforts), and the inter-clock skews tend to make post-Si validation a bitch as well.

It's better with more clocks being generated internally, but it's still no picnic.
 


Yup, that's what I remember from a VLSI design course I took in college many moons ago :p. Sometimes it seemed like it would be simpler to just go with independent clocks and asynchronous interface logic at the inputs to each domain, but then you get into metastable state problems and possible race conditions.
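The "asynchronous interface logic" mentioned above is usually a two-flip-flop synchronizer. Here is a toy cycle-by-cycle model of one; the timing behavior (a setup violation leaving the first flop metastable for one cycle) is a deliberate simplification:

```python
import random

# Toy model of a two-flip-flop synchronizer. An async input is
# sampled into ff1, which may go "metastable" if the input changes
# too close to the clock edge; ff2 re-samples ff1 a cycle later,
# so the receiving domain only ever sees a settled 0 or 1.

METASTABLE = None  # marker for an unresolved flop value

class TwoFlopSync:
    def __init__(self):
        self.ff1 = 0
        self.ff2 = 0

    def clock(self, async_in, setup_violated=False):
        # ff2 takes ff1's value; a metastable ff1 is assumed to have
        # resolved randomly to 0 or 1 by the next edge
        self.ff2 = self.ff1 if self.ff1 is not METASTABLE else random.choice([0, 1])
        # ff1 samples the async input; a setup/hold violation leaves
        # it metastable until the following edge
        self.ff1 = METASTABLE if setup_violated else async_in
        return self.ff2

sync = TwoFlopSync()
# Drive a constant 1, with a setup violation on the first edge.
out = [sync.clock(1, setup_violated=(i == 0)) for i in range(4)]
print(out)  # always clean 0/1 values, settling to 1 after two cycles
```

The trade-off the post describes is visible here: the synchronizer avoids passing metastable garbage downstream, at the cost of two cycles of latency on every crossing.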
 

amtgreen

Distinguished
Aug 31, 2010
3
0
18,510
I understand that the SB integrated graphics beat some dedicated GPUs. How will this affect notebook configurations? Is SB going to be paired up with high-end dedicated cards or not?

I'm currently waiting for this year's Intel and nVidia releases to buy a notebook. I want to upgrade from a desktop with an Intel Celeron 2GHz, 1GB RAM, and an ATI 9600 series card. So is it worth waiting for SB in 2011, performance-wise, versus, say, a laptop with a Core i5-450M and a GT 435M? I'm not a hardcore gamer, but I want to play from time to time, and I've found that new games list a 2.4GHz processor in their system requirements. I need the notebook for 3D animation - animation and mid-quality rendering.
 
I understand that the SB integrated graphics beat some dedicated GPUs. How will this affect notebook configurations? Is SB going to be paired up with high-end dedicated cards or not?
For true gaming laptops, SB will not be a replacement for a top-end dedicated GPU when gaming. However, if switchable graphics tech is implemented, even a gaming laptop should be able to get quite good battery life. SB is aimed at low-end dedicated GPUs such as the GT 210/220, HD 5450, etc.
 

dragon5677

Distinguished
Apr 28, 2010
158
0
18,680
It seems like Core i7 was just to buy time for Sandy.
It's like Core i7 is an incomplete version of Sandy, and also a major preview of Sandy.
I believe AMD never made the Phenoms to compete with i7, but with the Core 2 Quads.
In my opinion, Intel wanted money and time to develop the full version of Sandy, released i7 as a diversion from the real thing, and then released the 980X, which is closer to Sandy.
 

ares1214

Splendid


Nope, actually Intel made the i7s two years ago. It's just the natural course of tech progression.
 

whitey_rolls

Distinguished
Oct 29, 2010
135
0
18,690


Actually, the next processor after the Sandy Bridge line will be code-named "Haswell".

It's expected to be released around 2013. The standard configuration for the line will be 8 cores, with, I believe, the option to go up to 16 cores with Hyper-Threading. The processor itself will be built on a 22nm die like Ivy Bridge, with the second generation, "Rockwell", coming out on 16nm dies.

And the wheels of technology keep on spinning.....