Intel's next-gen Sandy Bridge summarised

ksampanna

Distinguished
Apr 11, 2010
Just found out some details & benchmarks about Sandy Bridge. Here's what should be of interest:

1) Clock for clock, Sandy Bridge seems to offer a 10% increase in performance compared to similarly priced Nehalem processors (forget the Thubans)

2) The 32 nm process lets them post even lower overall power consumption than the current Lynnfields (& obviously much lower than AMD's)

3) The integrated graphics are good: fast enough to put all previous attempts at integrated graphics to shame and to compete with entry-level discrete GPUs. If you were planning on spending $50 on a GPU, you may not need to with Sandy Bridge. Of course, there's no comparison to higher-end GPUs (not yet)

4) The main hitch: overclocking
Whatever rumours you have been hearing are partly true. Without going into the details, this is what should bother us: Intel gives us less headroom at the budget-mainstream end of the lineup. Meaning you won't be able to overclock a budget Sandy Bridge as well as, say, a budget Clarkdale like the i3 530/540. Enthusiasts should have no problem: whatever numbers they achieve with their current Nehalems and Gulftowns should be achievable with the equivalent Sandy Bridges, courtesy of unlocked multipliers.

Will keep posting should something more come up.

 

4745454b

Titan
Moderator
True for many of us, not all.

I was really surprised by the GPU part of the review. I'm not sure if they finally learned how to do this from the Larrabee project, or if it's from moving it on-die, or the 32nm process, or ??? But it's nice to see their onboard graphics not be the worst thing out there. Give them some time, higher-end parts in the future perhaps?
 

ksampanna

Distinguished
Apr 11, 2010
There is no need to overclock. The base frequency of most Sandy Bridges is above 3.2 GHz, which is sufficient for any game.
Well, since when did anybody start overclocking out of sheer need? We all do it because we can :sol:

Personally, I feel the restricted overclocking is the only grey area in an otherwise bright picture. OCing also pushes the board manufacturers to pack newer, smarter tech into their products, resulting in better builds, higher-quality components & other bells & whistles.

But it's nice to see their onboard graphics not be the worst thing out there. Give them some time, higher-end parts in the future perhaps?
I guess that would be the worst nightmare for Nvidia (not ATI, since AMD has Fusion to counter it)


 

4745454b

Titan
Moderator
Taking Fusion to its extreme, Nvidia is all but done for. The GPU will be folded into the die of the CPU and used for all FP calculations. Bulldozer is perfect for this, as AMD is doubling up on the integer cores. Intel is currently ahead on doing this, but both are far from where they need to be.

The thing is, once this is done, what's left for Nvidia? No chipset business, no need for lower-end cards, and even CUDA might not be enough once people start using their GPU/CPU combos to convert things. The only market they would have left is mid- to high-end cards. A market that gets horribly cramped if Intel enters the race.
 
Actually there's already another thread here on the Anandtech SB preview.

Keep in mind the "10%" IPC improvements over Nehalem come from an engineering sample with an early BIOS and drivers, not a shipping product. Also, more importantly, Anand said that Turbo was not working on the sample, and that if it had been, he expected between 13% and 17% IPC improvement over Nehalem. I'd guess a 20% improvement once the chip ships. And the on-die GPU was the 6 execution unit version, not the 12 EU version that will also be available.

Finally, there are unlocked multiplier versions available as well, for those who want to OC.

Overall, this is much more impressive than the now-suspect SB previews that were floating around on Xtreme Systems a month or two ago.
 
^Wait a second.....

That SB only had half the EUs of the top-end SB?

That could potentially put it almost all the way over the top of the HD5450 and make it start nipping at the heels of an HD5500+....

That's not bad at all.
 
There is no need to overclock. The base frequency of most Sandy Bridges is above 3.2 GHz, which is sufficient for any game.
<sarcasm>Oh yeah that's right. Because everyone plays games and gamers only play games. Yep. That's all computers are good for.</sarcasm>

Do you guys really want Nvidia to go out of business?
I was about to say that would make ATI (AMD) a monopoly in the dedicated graphics card market, and I believe there are laws against that, but then I remembered this:
http://www.s3graphics.com/en/products/class2.aspx?seriesId=4
 

MarkG

Distinguished
Oct 13, 2004


You're not going to be sticking a 300W GPU into a 100W CPU any time soon. Nor does it make sense, since gamers generally replace GPUs far more often than CPUs, so why would they want to be forced to replace both at the same time?

But I agree, if Intel do manage to destroy the market for low-end GPUs then it's likely to hit Nvidia hard.
 
This has been known for quite a while now, and I actually thought it'd come sooner; at least this level would already have been here.
Nvidia knows this as well, has known it for some time, and is heading in the only direction left to them.
Or, notice the lack of urgency on the low-end Fermis.
 

Ryuzaki

Distinguished
Aug 14, 2010
You mentioned that the integrated GPU on Sandy Bridge will be the equivalent of an entry-level video card and can't compare with high-performance discrete GPUs yet.

Will there be options to have a Sandy Bridge with its internal GPU and also a high-end discrete GPU? And how much would performance increase on a notebook from having both the discrete and integrated graphics components?
 


I'm sure that will be an option. There are already some notebooks (most notably those with Nvidia Optimus) that use the Intel integrated graphics for 2D and low-power 3D applications to save battery, and then turn on the high-performance discrete GPU for games. I doubt they will do any sort of GPU combination (like SLI or CF) pairing the Intel IGP with the discrete card, though.
 

+1. Or notice how Nvidia seems to be pushing the Tesla cards for HPC and the like. I'm pretty sure the profit margins in HPC are good enough that those who specialize in it won't go out of business; for example, look at SGI, etc.
 


They even have them in some netbooks, but unless the discrete GPU they put in is better than entry level, Intel's new IGP might just kill that off.

I could see it in an Alienware laptop that has a GTX480M or HD5870M: use Intel's IGP for low-end gaming to save power on battery, and the discrete GPU for high-end gaming and when plugged in.



Hah. It's probably very fast for what it does. But 5.2GHz? Maybe the fastest-clocked stock chip. AMD had a 7GHz quad and Intel had an 8GHz single.
 

That's what mine does (I have a brand new M11x with an i7 UM): it uses the GT335M for gaming, and the Intel on-chip (MCM) IGP for Windows usage. It does a darn good job too; I get 5-7 hours of battery life on a computer that is small, light, and gaming-capable (although I can only game for 1.5-2 hours on battery).
 

4745454b

Titan
Moderator
You're not going to be sticking a 300W GPU into a 100W CPU any time soon. Nor does it make sense, since gamers generally replace GPUs far more often than CPUs, so why would they want to be forced to replace both at the same time?

Not what I said, or what I meant. By bringing the GPU onto the die of the CPU, you don't have to have an FP processor in the chip any more; you'll just use the shaders on the GPU to run that type of math (like CUDA). And if you can get "good enough" graphics performance with this, then Nvidia is in serious trouble due to the lack of an x86 license. CUDA is all that's left for them, and seriously high-end GPUs. At that point they become an afterthought, like S3.
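
To make the "use the GPU shaders for FP math" idea concrete, here's a minimal sketch of how CUDA exposes it today (hypothetical example code, not from any review; the kernel and names are mine). The CPU only sets up the data; all the floating-point work runs on the GPU's shader ALUs:

#include <cuda_runtime.h>
#include <stdio.h>

// Classic SAXPY, y = a*x + y; each GPU thread handles one element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];  // the FP multiply-add runs on shader hardware
}

int main(void) {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory keeps the sketch short
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // a million FP ops, zero CPU math
    cudaDeviceSynchronize();
    printf("y[0] = %.1f\n", y[0]);  // prints 4.0
    cudaFree(x); cudaFree(y);
    return 0;
}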

I don't think AMD will have legal problems if this happens. First, they had nothing to do with Nvidia's exit from the market. Second, Intel would still be the biggest provider of graphics chips, just like they are now. No legal problems there. We are still a ways away from this happening, but there is starting to be enough writing on the walls that I'm not sure it can be avoided for Nvidia, at least not with their current relationship with Intel. I don't want one less player in the market; I'll be sorry if this actually does happen.

I'll have to go back and look at that review. If that was the lower-end GPU from Intel then HOLY $HI7! There would be no reason to buy a low-end card anymore. (As long as it can do HTPC duty, that is. Does it support bitstreaming?)
 


Not only that, but Anand also thought maybe Turbo was disabled on the on-die GPU as well as on the CPU.

Still, I'd be more interested in a 6 or 8 core (or 10 core, since Intel mentioned that possibility :p) SB without any GPU, for desktop anyway, as long as Intel didn't charge arm+leg+torso for it :D.

However if the on-die GPU plays nicely with a discrete card or cards (such as the latter being turned off when not needed, but able to power on instantaneously when needed so that it is completely transparent to the user), then that would be a nice touch...
 


IIRC Lucid had a "Hydra" chip that could get an AMD and Nvidia GPU to work together in a sort of hybrid SLI/Xfire combination. But it sat on the mobo between the PCIe lanes and the GPU, and thus wouldn't work with an on-die GPU. Not unless you had really teensy-weensy hands and an itsy-bitsy soldering iron :p.
 
I'll call the guy from Burger King and see if he owns a soldering iron. The guy on the left:
bkguy.jpg
 

ares1214

Splendid
After looking at the AnandTech review, I'm more impressed than I thought I'd be. The on-board graphics are better than I thought they would be, though I don't really care much about that; the architectural improvements are far greater than I had anticipated. Intel still needs a better form of core multiplication / Hyper-Threading. This is even more impressive considering that the top-end part, likely priced where the i7 is now, is clocked 300 MHz higher than the one tested, has functioning Turbo Boost, and likely a few more improvements, for probably another 10-20% overall performance increase in single-threaded applications.

I do hope they release some without an IGP, maybe with a faster clock rate or lower energy consumption. Or one with maybe 5770+ IGP performance. This is looking very promising if it lands in the $100-300 range, which it very well might and should.

Only two things concern me: the chipset, and overclocking. Whoever said that there is no need to overclock should be shot! :lol: Might be a bit harsh, but OCing extends the life of a CPU as far as decent performance goes, and it speeds up things besides gaming, although it's not like ANYBODY does anything else besides gaming :pfff: And if neither of those, sheer bragging rights.

This might be all well and good, but if Intel doesn't price the K series right, I might have to skip this and do BD. While a 3.1 GHz SB beats a 2.8 GHz Lynnfield, and might beat a 3 GHz Bulldozer, what if that Bulldozer OCs to 4.3 GHz? I doubt it will fare as well against that. That's disappointing to me. It really is a very smart move by Intel; I just hope AMD doesn't do the same.

The P55 chipset was pretty weak. While Tom's proved that x8, and even x4, didn't cost much performance, in the future it will: if the 6870 is 15% faster than the 5870, then x8 will be costing 20% in performance. Also, the lack of native support for USB 3.0 is questionable, though the motherboards will just add it later.

All in all, very impressive. But if anything brings it down, it will definitely be the lack of overclocking on the normally priced non-K series, and possibly weak chipsets/mobos. Maybe even the sheer strength of Bulldozer, though AMD definitely has their work cut out for them, especially since Intel is fully invading their sub-$200 market. Although I do wonder: how long will Intel keep 1155? By Christmas are we going to have 1154? Knowing Intel, I wouldn't be surprised :lol:
 
^For the discrete GPUs, I would hope that the P6-series chipsets will have PCIe 3.0. If so, a x8 link would be about as fast as a PCIe 2.0 x16, which means it would probably take an HD7k/GTX600-series card to show any performance drain from the bus.
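
For what it's worth, the raw numbers back that up. A rough back-of-the-envelope calculation (host-side code, using the well-known per-lane spec figures; whether the P6x chipsets actually get PCIe 3.0 is speculation):

#include <stdio.h>

int main(void) {
    // Per-lane, per-direction payload bandwidth from the PCIe specs:
    // Gen2: 5 GT/s with 8b/10b encoding; Gen3: 8 GT/s with 128b/130b encoding.
    double gen2_lane = 5.0 * (8.0 / 10.0) / 8.0;     // = 0.500 GB/s per lane
    double gen3_lane = 8.0 * (128.0 / 130.0) / 8.0;  // ~ 0.985 GB/s per lane
    printf("PCIe 2.0 x16: %.2f GB/s\n", 16 * gen2_lane);  // 8.00 GB/s
    printf("PCIe 3.0 x8 : %.2f GB/s\n",  8 * gen3_lane);  // 7.88 GB/s
    return 0;
}

So a Gen3 x8 link would essentially match a Gen2 x16 link for payload bandwidth.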

I for one don't think Intel is worried about Bulldozer, since GF has been having trouble with 32nm, and by the time it's out in 2011, Intel will be ramping Ivy Bridge on the 22nm process, which will possibly also utilize gen 3 HK/MG.

As for Bulldozer, I am waiting to see it in the wild. Given the major differences from a K10.5-based CPU, I want to see what they can do in terms of overclocking, or whether the new features it has might prevent it.

The biggest reason why SB would have problems OCing like normal is that most of the northbridge is integrated onto the CPU itself. If Bulldozer follows suit and has the clock generator integrated onto the CPU, it might run into the same OCing limitation.
 

4745454b

Titan
Moderator
The biggest reason why SB would have problems OCing like normal is that most of the northbridge is integrated onto the CPU itself. If Bulldozer follows suit and has the clock generator integrated onto the CPU, it might run into the same OCing limitation.

The problem isn't that the clock generator is on the CPU; it's been that way for a long time. The problem is Intel decided to use ONE clock generator for everything that needs a clock. This means you change the clock for one thing (the CPU) and ALL the clocks change: RAM, PCIe, PCI, all of them. And because there is only the one clock, there is no way to lock the others down. Had they put in separate clocks for the other devices, they wouldn't have this problem now.
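
A quick illustration of why that bites (the numbers are illustrative, assuming the widely reported ~100 MHz Sandy Bridge base clock and the usual few percent of PCIe/DMI tolerance):

#include <stdio.h>

int main(void) {
    // One base clock (BCLK) feeds everything on Sandy Bridge, so raising it
    // to overclock the CPU drags every other bus out of spec along with it.
    double bclk = 105.0;  // a mere 5% BCLK bump
    int mult = 34;        // CPU multiplier, e.g. a nominal 3.4 GHz part
    printf("CPU core: %.0f MHz\n", bclk * mult);    // 3570 MHz -- the part we wanted
    printf("DMI/PCIe: %.0f MHz\n", bclk);           // 105 MHz  -- already flaky territory
    printf("DDR3    : %.0f MT/s\n", bclk * 13.33);  // ~1400 MT/s -- RAM dragged along too
    return 0;
}

With locked multipliers, that handful of BCLK megahertz is all the headroom there is, hence the unlocked K-series parts.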
 

Yup. That's what I thought, but exactly WHY did Intel do that, and does it really offer advantages at stock settings? Also, is it even possible to use a different clock (as in an external clock on the motherboard) to drive USB, PCIe, etc.?