Official Intel Ivy Bridge Discussion

I squashed that on a few GPU threads as well:
http://www.hardocp.com/article/2012/07/18/pci_express_20_vs_30_gpu_gaming_performance_review/14

The Bottom Line

We have put forth a great effort to get to the bottom of the PCIe 2.0 versus PCIe 3.0 debate. We put a lot of time into testing performance and verifying that our data is accurate. Except for a couple of specific scenarios, most of the performance advantage seen under PCIe 3.0 was well under 10%. This actually falls in line with the kind of performance advantage one might expect from an Ivy Bridge CPU clock-for-clock compared to a Sandy Bridge CPU. IPC alone can easily swing performance by 4-7% in favor of Ivy Bridge. As you noticed, most of our data, where we saw an improvement on the Ivy Bridge system, fell in this range. There were a few specific exceptions: 11% in The Witcher 2 in one test, 19% in Batman (for part of the game only), and 14% when we cranked the settings up to unplayable levels in Max Payne 3. For the most part, at the highest settings we found playable, all performance advantages were under 10%.

With real-world gameplay performance advantages under 10%, the actual gameplay experience doesn't change. It in no way allows us to improve in-game quality settings, nor does it give us any advantage over the PCIe 2.0 system. As we've stated previously in this evaluation, the technical performance advantages are "benchmarkable" but not relevant to the gameplay experience.

It is also very clear from our testing that the NVIDIA GeForce GTX 680 receives a higher overall percentage of improvement with Ivy Bridge than the Radeon HD 7970 does. It is possible that, as in our past CPU frequency testing, NVIDIA GeForce GTX 680 GPUs are simply more sensitive to CPU clock speed and IPC, especially as those scale upwards. Our past testing also shows that NVIDIA GPUs are more sensitive to CPU clock speed than AMD GPUs as you scale up to dual and triple GPUs. Therefore, we are not shocked to find that one brand might benefit from a technology more than another. It is an interesting result that we didn't expect when we started testing.

So do not fret if you are on a Sandy Bridge PCI Express 2.0 system; you aren't missing out on much performance compared to an Ivy Bridge PCI Express 3.0 system. Most of our readers will benefit more from higher CPU overclocks on Sandy Bridge anyway, and if you are truly pushing the CPU clock, that alone will likely negate any "advantages" from PCIe 3.0 or Ivy Bridge IPC in real-world gaming scenarios. PCIe 3.0 is a great evolution, and one day it may actually support a better gameplay experience than PCIe 2.0, but that day is not today.

How did you "squash" anything? Every performance increase can be traced back to the IPC gains between SB and IB CPUs. Only Arkham City with PhysX on high showed results indicating a performance benefit from PCIe 3.0, which makes sense given the increased load on the GPU.

Maybe dual-GPU cards would be more sensitive to the extra bandwidth, but a single GPU shows minimal, if any, performance benefit at present. The real advantage of PCIe 3.0 at this stage is the ability to drive a GPU in a PCIe 3.0 x8 slot, where PCIe 2.0 x8 shows signs of bandwidth starvation in some setups. [I'm curious just how stressed PCIe 2.0 x8 really is...]
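For rough context on that x8 point (my own back-of-the-envelope figures from the PCIe specs, not from the article): PCIe 2.0 signals at 5 GT/s with 8b/10b encoding, while PCIe 3.0 signals at 8 GT/s with the leaner 128b/130b encoding, so per lane the newer standard moves nearly twice the payload. A quick sketch:

```python
# Back-of-the-envelope PCIe bandwidth per direction (standard spec figures).
# PCIe 2.0: 5 GT/s with 8b/10b encoding   -> 80% of the raw rate is payload.
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~98.5% of the raw rate is payload.

def bandwidth_gb_s(gt_per_s, encoding_efficiency, lanes):
    """Usable bandwidth in GB/s for one direction of a PCIe link."""
    payload_bits_per_s = gt_per_s * 1e9 * encoding_efficiency
    return payload_bits_per_s / 8 / 1e9 * lanes

for lanes in (8, 16):
    v2 = bandwidth_gb_s(5, 8 / 10, lanes)
    v3 = bandwidth_gb_s(8, 128 / 130, lanes)
    print(f"x{lanes}: PCIe 2.0 ~{v2:.1f} GB/s, PCIe 3.0 ~{v3:.1f} GB/s")
```

So a 3.0 x8 link (~7.9 GB/s per direction) is nearly a match for a 2.0 x16 link (~8.0 GB/s), which is why x8 slots are exactly where 3.0 should matter first.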
 
I don't think it makes sense to just look at current performance when comparing PCIe 2.0 to PCIe 3.0. If I buy a CPU today, I'll be using it for at least 3 years. So in 2½ years' time, will there still be no difference? Even with my shiny new GTX 770 or Radeon HD 8950?

I suppose the counter-argument is that a Core i3 would bottleneck those cards anyway, but... CPU bottlenecking just doesn't seem to be as big a problem as it used to be. People are still happily using Core 2s to this day.
 


And we return to my argument about low-resolution testing to determine how much headroom CPUs have relative to each other in non-GPU-bottlenecked situations.

But for a single GPU, I really can't see a modern Intel CPU bottlenecking anytime soon.

I suppose the counter-argument is that a Core i3 would bottleneck those cards anyway, but... CPU bottlenecking just doesn't seem to be as big a problem as it used to be. People are still happily using Core 2s to this day.

8000/9000 series C2Qs are roughly equivalent to a Phenom II at the same clock speed. So they're getting on in years, and likely a bottleneck when paired with the top GPUs on the market.

Core i3s do bottleneck in MP gaming, where the CPU matters more than the GPU. You don't notice it in SP simply because the GPU matters so much more, but I'd be hesitant to build a gaming rig around an i3 at this point.
 
Seems a bit weird that their enthusiast CPUs will be more than a generation behind their mainstream consumer CPUs.


I was actually thinking more about PCIe bottlenecking than actual CPU bottlenecking.
 
I think the C2Q 9xxx chips actually are better than a Phenom II at the same clock speed. Not by much, but they are.

Kinda proves LGA775 still has some life in it yet. I still view 8000/9000 series C2Qs as cheap upgrade options for users still on LGA775.

I'd honestly love a few benchies to see how a Q9550 holds up these days...
 


Seems to beat the Ph2 920 (same clock speed) by around 10% on average. http://www.anandtech.com/bench/Product/50?vs=81

Loses to a BD 8150 in most benchies, however. http://www.anandtech.com/bench/Product/50?vs=434

Gets schooled & whipped like a dead mule in all the benches compared to my 3770K, however 😛. http://www.anandtech.com/bench/Product/50?vs=551
 



oh no, you didn't.
:lol:

(edit: for the record, I am with you - people overlook that, setting aside the overclocks a K-series processor can achieve, the i3-2120 is the best bang-for-the-buck CPU on the market)
 


Unless you wanna rip & burn a couple of BDs or DVDs while playing Borderlands at max settings/eye candy 😛... 8 threads come in handy *sometimes* 😀.

No stutter, no flutter, no mutter(ing under my breath). :sol:
 
Hi guys, I'm new here and hoping for your help. I want to know if the i7-3770 "non-K" has temperature problems. I want that CPU but I'm not sure about the temps; I plan to game frequently at very high settings and use Sony Vegas and VMware Workstation. Also, can I get by with the stock Intel fan? No OC, just playing hard :)... please answer me.
Thanks, friends.
 

There will be no temp problems for your use at all.
 


# Threads != # Cores Used
 
^ Of course. Dunno how my post implied that 😛

However, benchmarks show that the more hardware resources available (i.e., SMT or CMT thread capability), the better a CPU can multitask. Check the difference in the multithreaded benches: http://www.anandtech.com/bench/Product/288?vs=287

Even allowing for the extra cache and 100 MHz higher base clock, the 4-core/8-thread 2600K often beats the 4-core/4-thread 2500K by a significant amount - over 40% in the case of POV-Ray.
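To make the threads-vs-cores point concrete, here's a minimal, generic sketch (not tied to any of the benchmarks above): run a fixed amount of CPU-bound work at increasing levels of parallelism and watch where the speedup flattens, which is roughly where you run out of hardware threads. Python's GIL means processes, not threads, are the right tool for CPU-bound work here:

```python
# Minimal sketch: time a fixed CPU-bound workload at different parallelism
# levels. Where the speedup curve flattens hints at how many hardware
# threads (cores x SMT) the machine actually has to offer.
import os
import time
from multiprocessing import Pool

def busy_work(n):
    """A deliberately CPU-bound task."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [2_000_000] * 16  # fixed total work, split into 16 chunks
    print(f"logical CPUs reported by the OS: {os.cpu_count()}")
    for workers in (1, 2, 4, 8):
        start = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(busy_work, jobs)
        print(f"{workers} worker(s): {time.perf_counter() - start:.2f}s")
```

On a 4-core/8-thread chip you'd expect near-linear gains up to 4 workers and a smaller, SMT-sized bump beyond that, which matches the 2600K-vs-2500K pattern.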
 
http://www.ebay.com/itm/Intel-Ivy-Bridge-Core-i3-3240-3-4GHz-3MB-L3-LGA-1155-22nm-Dual-Core-55W-TDP-/261079396136?pt=CPUs&hash=item3cc98b6f28#ht_1210wt_906
Earlier rumors said $138 for that model. I hope this means they decided not to have such a big price gap between the 3240 and 3220 (~$20 for 100 MHz just isn't a good deal).
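Quick math on why that gap is a bad deal, using the rumored prices from this thread (illustrative only) and the chips' 3.4 GHz vs 3.3 GHz base clocks:

```python
# Rough value check on the rumored i3-3240 vs i3-3220 pricing.
# Prices are the rumored figures from this thread, not official.
price_3240, clock_3240 = 138, 3400  # USD, MHz
price_gap, clock_gap = 20, 100      # rumored step-up over the 3220

avg_rate = price_3240 / clock_3240  # $/MHz for the whole chip
step_rate = price_gap / clock_gap   # $/MHz for just the extra 100 MHz

print(f"average: ${avg_rate:.3f}/MHz, step-up: ${step_rate:.3f}/MHz")
print(f"the extra 100 MHz costs ~{step_rate / avg_rate:.0f}x the average rate")
```

At roughly five times the average $/MHz, the 3240's premium buys very little, so a smaller gap would make a lot more sense.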
 
Item Specification:
Dominate your gaming competition with the unlocked, unleashed and uncompromised Intel Core i3. Thanks to the latest ultra-threaded, killer combination of smart features, including Intel Turbo Boost Technology and Intel Hyper-Threading Technology, the Intel Core i3 is Intel’s most capable and downright devastating PC processor ever.

😱


nice accurate description
:lol:
 


That's an engineering sample with an unlocked multiplier. Get your OC on.