Ivy Bridge i7 vs Broadwell i7

Icaraeus

Honorable
I was thinking of investing in an i7-3770K CPU early next year, but Broadwell seems interesting, and it also seems to have good gains over Ivy Bridge. Which one should I go with, or should I wait a little longer for Skylake? This is my first build and I've had it for around 5 months.
 
Solution
Over what you have now, you would be wasting your money; the gains are slight at best. Switching out the i5-3570 for an i7-3770K is pointless: you're looking at 10-15% gains in CPU performance in productivity, depending on the overclock, with essentially 0% gain if you leave it at stock, and essentially zero performance gain in gaming.
The only gain you would possibly see from Broadwell is about 10% over what you currently have, but that's because Broadwell is clocked about 10% faster, so for the money it's again pointless.
 
Solution
Older thread but I just saw it and thought I'd COPY THIS OVER in case it benefits anyone:

1) First, we always need to distinguish between GAMES and VIDEO EDITING (or Video Conversion).

2) Very few games benefit much beyond a modern four-core Intel CPU like the i5-3570, let alone an i5-3570K at 4.5GHz.

3) Video conversion, if using 100% of your CPU (Handbrake with some, but not all, settings), can benefit up to 30% from Hyper-Threading at times, but over the entire process the gain is lower, perhaps about 15% as mentioned above.

(Best-quality encoding is done completely in software, thus on the CPU, though hardware encoding for "good enough" conversions is getting quick.)

4) *Graphics cards will increasingly handle processing tasks normally done on the CPU. VIDEO conversion/editing is currently up to 100% done on the CPU; programs that use the graphics card via OpenCL, for example, will be far, far faster with a good graphics card. I'm not sure how long that will take; Adobe has been slow and has only added some benefits so far.

Simpler programs like Handbrake might in the near future see 20x or higher benefits, such as 3 minutes versus an hour for the same quality. At that point the video card would do almost all the processing, with the CPU relegated to just getting the data to the video card (though as you can see with APUs, the CPU and GPU as separate elements will eventually disappear).

5) Haswell (i5-4xxx) is about 10% better than Ivy Bridge (i5-3xxx) at the same frequency for most applications assuming no bottlenecks elsewhere (which is likely for gaming, thus there may be no benefit).

6) The Haswell DC refresh (Devil's Canyon) CPUs were a solution to a localized overheating problem, allowing higher stable overclocks (again, only a benefit in scenarios that were already CPU-bound). In particular, an i7-4790K at 4.7GHz vs an i5-3570K at 4.5GHz is about 50% faster in some non-gaming scenarios (1.3 x 1.1 x 4.7/4.5); the factors account for HT vs non-HT, the Haswell architectural benefit, and the slightly higher overclock the DC version allows.

*There was much confusion about this performance difference, to the point that many got Ivy over Haswell. Yes, Ivy could overclock higher (before the DC refresh), but on average not enough to compensate for the 10% architectural lead. So from a performance viewpoint, Haswell was still the better choice if buying new (and if your program could benefit). Those wanting "the best" without completely breaking the bank today should get the i7-4790K.

For NON-OVERCLOCKERS, the 10% difference basically means that a 3.0GHz Haswell is about the same as a 3.3GHz Ivy Bridge in raw performance under a 100%-usage scenario across all threads.
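The rough estimate in point 6 can be spelled out; note all the factors below are the post's own ballpark numbers (up to ~30% from HT, ~10% per clock for Haswell, the two overclock figures), not benchmark results:

```python
# Rough speedup sketch: i7-4790K @ 4.7GHz vs i5-3570K @ 4.5GHz.
# Every factor is a ballpark figure from the text, not a measurement.

ht_factor = 1.3           # Hyper-Threading benefit in well-threaded workloads
arch_factor = 1.1         # Haswell vs Ivy Bridge per-clock gain (~10%)
clock_factor = 4.7 / 4.5  # slightly higher stable overclock on Devil's Canyon

speedup = ht_factor * arch_factor * clock_factor
print(f"Estimated speedup: {speedup:.2f}x")  # ~1.49x, i.e. about 50% faster

# The same 10% per-clock figure gives the non-overclocker equivalence:
print(f"3.0GHz Haswell ~= {3.0 * arch_factor:.1f}GHz Ivy Bridge")
```
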

7) BROADWELL is mainly a die shrink of Haswell, and compared to the Devil's Canyon refresh (i.e. the i7-4790K) there may be zero benefit. A die shrink doesn't always improve overclocking; I suspect it will mostly bring improved idle efficiency, given the focus on MOBILE POWER design.

8) DDR4 vs DDR3:
Faster memory is only needed if the CPU is fast enough to benefit. We are getting it soonish, but I'm not sure how fast a CPU would need to be to actually benefit (versus the fastest DDR3). We will see power savings, but if investing in a DDR4 system adds much to the price, I don't recommend it (i.e. i7-4790K + DDR3 versus i7-5770K + DDR4).

Again, I don't think there are any CPUs that require more than what fast DDR3 offers (gains tend to drop off quickly beyond 1600MHz dual-channel for demanding tasks, and usually sooner for gaming).

9) Multi-threading:
Most of my games use less than 40% of my i7-3770K's total processing power, though a few are demanding (usually online games with some inefficient coding).

On a Hyper-Threaded modern Intel CPU with eight threads (four cores), every second thread you see represents the hyperthread, which feeds extra work into a core but can at best add about 30% more processing power. Windows incorrectly averages all threads together. Thus, if I were using 100% of my four cores with absolutely NONE of the Hyper-Threading utilized, Windows would report 50% usage.

The above gets even worse if a game can only use TWO CORES and no hyperthreads. Ignoring Windows' own processing tasks, if an i7 CPU used only two physical cores, that would be reported as 25% usage when in fact it's 100% of what the game's coding actually allows it to use. So you can frequently be CPU-bottlenecked even when your CPU doesn't look well utilized (people know this; I'm just trying to explain it a bit better). A game not being well threaded (limited in the number of cores/threads it can use) also explains why modern AMD CPUs don't perform nearly as well as modern Intel CPUs.
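The reporting quirk above is just averaging over logical threads, which a tiny sketch makes concrete (numbers taken straight from the examples in the text):

```python
# Why Windows' averaged CPU-usage figure understates load on a
# Hyper-Threaded CPU with 4 cores / 8 logical threads.

LOGICAL_THREADS = 8

def reported_usage(busy_logical_threads):
    """Windows averages across all logical threads, so N busy
    threads out of 8 show up as N/8 of total usage (in percent)."""
    return busy_logical_threads / LOGICAL_THREADS * 100

# All four physical cores fully busy, no hyperthreads utilized:
print(reported_usage(4))  # 50.0 -- yet the physical cores are saturated

# A game limited to two cores, no hyperthreads:
print(reported_usage(2))  # 25.0 -- still a CPU bottleneck for that game
```
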

10) FUTURE:
The pièce de résistance is that a modern Intel 4-core CPU might be great for many years to come, for the following reasons:

a) DX12/Mantle games will be better multi-threaded, and some will use all or most of your threads (i.e. all four cores, or all eight threads on HT CPUs).

b) DX12/Mantle uses less CPU processing power to accomplish the same task, something discussed a lot with Mantle.

c) Some tasks normally done on a CPU will shift to the GPU (i.e. Havok physics, AI etc.).

d) Gaming console architecture has a strong effect on PC game design, especially now that consoles use an x86 CPU architecture, so games should "port" efficiently or even be designed for PC first.

11) PS4 vs Intel Haswell Gaming PC:
Ignoring optimizations on console for now...

Evidence suggests a minimum 30% per-core gaming deficit for modern AMD versus Intel Ivy Bridge. I've heard different numbers even for the PS4's clock, up to 2GHz (all cores at the same time?), so I'll just use 1.8GHz.

Thus, comparing the PS4 (six cores usable per game) to an i5-3570K at 4.5GHz, it goes like this:
AMD-> (6/4)*(1.8/4.5)*(0.7) = 0.42

By my very, very rough calculations, the above i5-3570K at 4.5GHz has about 2.5X the PS4's processing power (1/0.42 is roughly 2.4). I'm also ignoring Windows usage for non-gaming tasks (though in this case it could be 5% or less).
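The back-of-the-envelope comparison above, written out; again, every factor is one of the post's stated assumptions (core counts, assumed clocks, ~30% per-core deficit), not a measurement:

```python
# Rough PS4 vs overclocked i5-3570K processing-power estimate.
# All factors are assumptions from the text, not benchmarks.

cores_ratio = 6 / 4      # six usable PS4 cores vs four Intel cores
clock_ratio = 1.8 / 4.5  # assumed PS4 clock vs the i5's 4.5GHz overclock
per_core = 0.7           # ~30% per-core deficit of the AMD cores

ps4_relative = cores_ratio * clock_ratio * per_core
print(f"PS4 ~= {ps4_relative:.2f}x the i5-3570K")      # 0.42
print(f"i5-3570K ~= {1 / ps4_relative:.1f}x the PS4")  # ~2.4x
```
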

*Yes, gaming consoles and PCs aren't completely comparable, with the console always likely to be more efficient (not necessarily better), but the huge difference in processing power, along with the other points I've made, indicates that upgrading the CPU may not be that important for many years. We'll see, but there's a fairly good chance a solid Intel build will only need a graphics card upgrade in a few years.

(More important to current AMD USERS is that their CPUs' situation will continue to improve, so if you built with an FX-6300 or better, congratulations, your future is looking brighter. This will NOT affect any currently finished game, nor do I currently recommend building with AMD CPUs; but again, if what you currently have is a 6- or 8-core FX CPU, things are looking up soonish, and in a handful of games the average performance already matches the Intel equivalents (not necessarily the MINIMUM frame rates, though good coding should solve that as well).)

Graphics cards will definitely benefit gamers much more, though even those will be affected by console architecture to a degree.

One final note: I do think we'll see a big increase in DDR3 memory requirements now that the new consoles are here. Again, don't compare directly, as consoles have a SHARED SYSTEM memory setup and tend to be more efficient with their existing specs (vanilla Skyrim on PC used about 6X the system+video memory of the Xbox 360 version; sure, the PC version at max vanilla settings was better, but it wasn't remotely possible to get the same quality settings out of 500MB combined). I think 8GB will quickly become the standard, with 16GB finally becoming beneficial, though I wouldn't want to hazard timelines; likely not until DX12, and perhaps not right away. It's too hard to predict, so I wouldn't run out and invest in that, but it's something to consider if building new.

Update:
I'd really, really love to see G-Sync in HDTVs. Ironically, the largest benefit would be to the Microsoft Xbox One (it's only DRIVER-dependent on the graphics side, so it's theoretically feasible). Sony really should approach NVidia, though at this point I doubt NVidia would bite considering the AMD APU in the PS4. The lower the frame rate, the larger the benefit from G-Sync.

Sony G-SYNC HDTV + PS4, or better yet a STEAMBOX'd super rig with minimal noise connected to a 3D, 4K, G-Sync OLED Sony HDTV with those awesome new speakers some models have. Not to mention VR headsets in a few more years.

Either way, the future's looking up for gaming...