Core i7-4770K: Haswell's Performance, Previewed

Status
Not open for further replies.
[citation][nom]JonnyDough[/nom]Ever hear of Lucid? You can disable the graphics.[/citation]
What does Lucid have to do with disabling the IGP? That's usually a BIOS setting, assuming that you're using a board that doesn't automatically disable it.
 
[citation][nom]Onus[/nom]I just looked over the CPU hierarchy chart again. I would suggest everyone do so, just for a little perspective. Over the years, a given CPU will "find itself" in lower and lower tiers. Game review articles all show yesteryear's CPUs beginning to struggle in those titles that are CPU-bottlenecked. Many games are not though, and even where they are, it's now usually due to lack of cores as we're finally seeing games (and applications) better optimized for more cores / more threads. I understand those who always want to be on the leading edge, even if only for bragging rights, or personal pride; you're going to upgrade whether you "need" it or not. If your goal is to play games though, there is nothing about Haswell that is the least bit compelling if you already have a CPU in the top three or even four tiers.[/citation]

That can depend on the CPU and situation. Some of them are moving up instead of down as games become better threaded. For example, Core 2 Quad now sits much higher in the hierarchy relative to Core 2 Duo than it used to. The same has been starting to come true for AMD's six- and eight-core CPUs (and for Intel's i7s versus the i5s, and the i5s versus the i3s) compared to their lower-core-count counterparts, ever since we started getting some properly threaded DX11 titles such as BF3 multiplayer and, more recently, Far Cry 3 and Crysis 3.
 

animeman59

Distinguished
Dec 30, 2009
If these initial benchmarks hold true when the CPUs are released, then there's really no reason for me to upgrade from my i5-2500K, unless I want to spend the extra money on a new chipset along with a new processor (which I don't). Same thing with the new AMD Steamroller processors. I'll need to see significant gains in performance from either company in order for me to upgrade to a new architecture.

Looks like my next big upgrade will be a new GPU from Nvidia or AMD.
 

jesot

Distinguished
Dec 19, 2008
I read a while back that Haswell supports PCIe 2.0 instead of 3.0. Is that still true? If so, what are the ramifications for gaming?
 


That wouldn't make any sense to me, since it'd be a step back in PCIe compatibility, and the article states PCIe 3.0 compatibility anyway.
 

InvalidError

Titan
Moderator
[citation][nom]ojas[/nom]Excellent points by InvalidError. Though i must say, 10 to 15% improvements each year over the last 5 have resulted in a huge increase over the Core 2 generation, and arguably over the first Core ix generation as well.[/citation]
The ~10% we have today is nowhere near the 40-60%/year we had before hitting the 3.xGHz brick wall, and the difference between a Core 2 Duo E8xxx and an i3-3xxx (dual-core vs. dual-core with HT) is only ~50% clock-for-clock on average (and less than 20% in many games), so not all that much IMO.

Personally, I do not bother considering upgrades smaller than 100%. So even if Intel/AMD could manage to "floor" it at 15%/year (though Intel's average between SB and IB is closer to 10%/year), it would still take them 6+ years to reach my minimum upgrade threshold, so unless something major comes along (or my i5-3470 dies), I probably won't be upgrading my PC until ~2020.
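As a quick sanity check on that upgrade threshold, the compounding arithmetic can be sketched in a few lines of Python (the 10% and 15% yearly figures come from the post above; the rest is plain math, showing doubling lands somewhere in the 5-7 year range depending on the rate):

```python
# Years needed for compounded yearly gains to reach a 2x (100%)
# cumulative performance improvement over the baseline.
import math

def years_to_double(yearly_gain: float) -> float:
    """Years of compounding at `yearly_gain` (e.g. 0.10 = 10%/year)
    before cumulative performance reaches 2x."""
    return math.log(2) / math.log(1 + yearly_gain)

print(f"at 15%/year: {years_to_double(0.15):.1f} years")  # ~5.0 years
print(f"at 10%/year: {years_to_double(0.10):.1f} years")  # ~7.3 years
```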
 


With many modern games, the clock-for-clock difference between current i3s and the old Core 2 Duo CPUs can approach 100% in the more CPU-limited games that also scale well across four threads, thanks to Intel's Hyper-Threading. The difference only drops to something as minor as 20% or less in situations that are very light on the CPU. I'd say that there's usually a very noticeable difference between a Core 2 Duo around 3GHz and an i3 at a fairly similar frequency.

A better argument for ojas's point would be that Nehalem, and to a somewhat lesser extent Sandy Bridge, made almost all of that performance jump, not Ivy Bridge, nor the upcoming Haswell if we include it and it turns out along the lines of this preview, IMO.

Also, it's only fair to mention that there is no "brick wall" around 3.xGHz. Intel simply chooses not to go beyond it despite being more than capable. If Intel wanted to, they could easily launch in the mid-4GHz range, and they could have done this with Sandy Bridge and Ivy Bridge (IDK about Haswell yet). That would have hurt the overclocking headroom, since they'd basically start out near the limits of what the CPUs were capable of, but that doesn't mean it couldn't have been done. I'd even bet that Ivy could have been in the mid-5GHz range with a proper thermal interface such as good flux-less solder between the IHS and the CPU die.

Sure, power consumption would not have gone down much, if at all, had Intel done this, and that would have undercut the point behind much of what they're doing lately, but Intel was capable of it nonetheless.
 

blackkstar

Honorable
Sep 30, 2012
Hey mayankleoboy1, I run Gentoo on my FX 8350 and I generally see at least a 10% speed-up on everything. Blender is the exception, as it is more than twice as fast on Gentoo as the official build from Blender.org.

I actually beat a 3930k @ 4ghz with my FX 8350 at 5ghz by about 30% when we both rendered a demo file in Cycles, with me on FX 8350 in Gentoo optimized and him on Windows.

So yes, I expect Intel to downplay the importance of AVX, FMA, etc.
 

InvalidError

Titan
Moderator
[citation][nom]blazorthon[/nom]Also, it's only fair to mention that there is no "brick wall" around 3.xGHz. Intel simply chooses not to go beyond despite being more than capable. If Intel wanted to, they could easily launch in the mid 4GHz range and they could have done this with Sandy Bridge and Ivy Bridge (IDK about Haswell yet).[/citation]
That is what Intel thought with Prescott vs. Northwood, but Prescott ended up maxing out at pretty much the same point Northwood did, despite all the effort spent stretching the pipeline to reach higher clock speeds, which crushed Intel's ambitions to push NetBurst to 8-10GHz.

A lot of a chip's maximum clocking capability is dictated by the longest combinational logic and propagation path between two DFFs within each clock domain (the 'critical path'), while minimizing clock power requires squeezing as much logic as possible between DFFs to minimize the number of DFFs and the amount of duplicated logic. In essence, Intel's work on optimizing power efficiency plays directly against hopes of higher maximum overclocks.

I would not be surprised if Haswell turned out to be a worse overclocker than Ivy Bridge, since it has much more complex out-of-order scheduling and more issue ports, which come with more complex reservation, issue, and retirement arbitration logic. Haswell has almost the same overall pipeline as IB/SB aside from fattened structures, so all the extra logic got stuffed between a very similar number of DFF stages, and some paths have likely become longer. More logic between DFFs = longer critical path = lower maximum clock. This could eat a good chunk of the overclocking potential people are expecting from Haswell.
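The timing relationship InvalidError describes can be sketched numerically. All delay numbers below are invented for illustration, not Intel's actual figures:

```python
# Illustrative sketch: the maximum clock of a synchronous design is
# bounded by its critical path, i.e. the slowest register-to-register
# path in each clock domain.
def max_clock_ghz(logic_delay_ps: float, setup_ps: float, clk_to_q_ps: float) -> float:
    """Upper bound on clock frequency given the worst-case path delay."""
    period_ps = logic_delay_ps + setup_ps + clk_to_q_ps
    return 1e3 / period_ps  # 1000 ps per ns, so 1/period(ns) = GHz

# Hypothetical pipeline stage vs. the same stage with extra logic
# squeezed between the same flip-flops (as described above):
print(max_clock_ghz(200.0, 30.0, 40.0))  # ~3.7 GHz
print(max_clock_ghz(240.0, 30.0, 40.0))  # ~3.2 GHz: more logic, lower f_max
```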

There is also the question of how tightly locked and how ticklish Intel's on-package VRM might be.
 


We already know that what I said is right, because it has been proven by overclockers. I've never even heard of, say, an i5-2500K that couldn't break 4GHz, and once you get rid of Ivy Bridge's crap paste, it's an even better overclocker than Sandy is.

Like I said, it's too early to know for sure about Haswell (maybe they break this trend, possibly for reasons that you mentioned), but we know for a fact that Sandy and Ivy Bridge have no brick wall at 3.xGHz.
 

tomfreak

Distinguished
May 18, 2011
[citation][nom]blazorthon[/nom]We already know that what I said is right because it is proven by overclockers. I've never even heard of say an i5-2500K that couldn't break 4GHz and when we get rid of Ivy Bridge's crap paste, it's even better at overclocking than Sandy is. Like I said, it's too early to know for sure about Haswell (maybe they break this trend, possibly for reasons that you mentioned), but we know for a fact that Sandy and Ivy Bridge have no brick wall at 3.xGHz.[/citation]
You have to take the TDP into account. A 4GHz+ Sandy Bridge is not going to fit inside a 95W TDP. The problem here is not hitting high clock rates; the problem is the power wall we are hitting. Prescott could hit much higher clock rates than Northwood, but at a huge expense in power efficiency. Haswell is going to overclock pretty well, but at a high power-consumption cost, same as Sandy/Ivy Bridge.
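The power wall tomfreak describes follows from the textbook CMOS dynamic-power relation, P ≈ C·V²·f: pushing the clock higher usually also demands more voltage, so power climbs much faster than frequency. The frequency and voltage figures below are hypothetical, purely to show the shape of the curve:

```python
# Rough model of CMOS dynamic power relative to a baseline operating
# point: P_dynamic ~ C * V^2 * f, with capacitance C held constant.
def relative_power(f_ratio: float, v_ratio: float) -> float:
    """Dynamic power relative to baseline, given frequency and voltage ratios."""
    return f_ratio * v_ratio ** 2

# Hypothetical example: a +29% clock bump (3.5 -> 4.5 GHz) that needs
# +15% voltage to stay stable.
print(relative_power(4.5 / 3.5, 1.15))  # ~1.7x the baseline power
```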
 

bassbeast

Distinguished
Dec 14, 2010
Frankly, this shows what I have been saying for a couple of years now: CPUs passed "good enough" on BOTH sides of the aisle and have gotten to "crazy overpowered" for the vast majority. My boys and I have been gaming on AMD hexacores for a couple of years now, and even on them we have so many cycles left over it isn't funny, so guys on 1155 are like Dark Helmet hitting the ludicrous-speed button. Unless you are one of the handful who need every cycle they can get, there's no need to buy a new PC if you are already on a triple-core or better.

I mean, have you seen the specs for the new consoles? Even a new generation of consoles frankly isn't gonna force more than a few dual-core users to upgrade; everybody else can just sit this one out.
 

InvalidError

Titan
Moderator
[citation][nom]blazorthon[/nom]but we know for a fact that Sandy and Ivy Bridge have no brick wall at 3.xGHz.[/citation]
For practical yields at practical TDPs, which is what something like 99% of the market is all about, they do... Intel's design and process are tuned to churn out a steady supply of chips that can reliably clock in the 3.5-3.9GHz range.

As far as overclocking records go, there is no point looking at those when you need to disable just about everything that makes the CPU worthwhile to achieve the overclock, at which point your record-setting chip may perform worse than stock with everything enabled. Without disabling cores and features to cut power while overclocked, IB does not fare any better than SB does, even with the lid taken off (it runs cooler but not much faster), and its power draw scales up much more steeply than SB's does past ~4.2GHz.

Sure, some, many, possibly even most of Intel's chips might be able to run at 4GHz initially, but few would manage to do so under worst-case Intel warranty conditions for the full three-year warranty, which is what Intel needs to bin for... it does not matter that a chip can run at 4.5GHz off the wafer if it can only do so reliably for six months out of the packaging plant using the stock HSF on a three-year warranty.
 


I addressed that in my earlier post. My point was that Intel is capable of it.

Also, running around 4GHz doesn't impact power consumption much at all for Sandy and Ivy Bridge. It's not until a little past that point that power consumption starts to shoot upward, and even then it's not unmanageable until quite a bit higher.

AMD sells CPUs that use much more power with no issues arising from the higher power consumption, and Intel sells, and has sold, CPUs that draw even more. I don't think any consumer CPU was more power-hungry than some of Intel's Core 2 Quad/Extreme models, at least at stock.

Even around 5GHz, Ivy still doesn't use too much power compared to some stock AMD CPUs and some old stock Intel CPUs, which in turn really aren't that bad about power consumption compared to upper-mid-range and high-end graphics cards, so no big deal there.

Tom's and others have shown that, at least for short bursts of intense work, having a high frequency ceiling can improve power efficiency by getting a job done faster and reaching idle (or near-idle) power consumption sooner than it would otherwise.
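That "race to idle" effect is easy to illustrate with made-up numbers (the wattages and timings below are hypothetical, not measured results):

```python
# Energy over a fixed time window: the chip draws `active_w` while
# working, then `idle_w` for the rest of the window. A faster chip
# burns more power while active but spends more time idling.
def task_energy_j(active_w: float, idle_w: float,
                  work_s: float, window_s: float) -> float:
    """Total energy in joules over the window (active phase + idle phase)."""
    return active_w * work_s + idle_w * (window_s - work_s)

# Slow chip: 60 W for 10 s; fast chip: 90 W but done in 6 s; both idle at 5 W.
print(task_energy_j(60.0, 5.0, 10.0, 12.0))  # 610 J
print(task_energy_j(90.0, 5.0, 6.0, 12.0))   # 570 J: the faster chip wins here
```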

Besides, it's not like Intel would have huge frequencies on all of their SKUs, just some in this example.
 

mayankleoboy1

Distinguished
Aug 11, 2010
[citation][nom]agnickolov[/nom]The Visual Studio benchmark settles it. My next workstation will use Haswell.[/citation]

Only if you are compiling humongous and well-threaded projects like Chromium every day.
Tom's benchmark of compiling Chromium is a lone case where the compile process is very well threaded. Most big projects aren't so good at using more than one core.
You would get more of a speed-up by adding another 8GB of RAM and a 256GB SSD.
 

mayankleoboy1

Distinguished
Aug 11, 2010
[citation][nom]blackkstar[/nom]Hey mayankleoboy1, I run Gentoo on my FX 8350 and I generally see a 10% speed up at least on everything. Blender is the exception as it is more than twice as fast in Gentoo than the official version from Blender.org. I actually beat a 3930k @ 4ghz with my FX 8350 at 5ghz by about 30% when we both rendered a demo file in Cycles, with me on FX 8350 in Gentoo optimized and him on Windows. [/citation]

Are you using the -march=native -O3 switches? Does Blender natively use AVX-optimised code?
If you selectively compare optimisations, enabling only AVX, using only FMA3, or disabling both AVX and FMA3 (and using only SSE4), you will find that the software is still 9.9% faster.



"So yes, I expect Intel to downplay the importance of AVX, FMA, etc."
Not sure how you came to this conclusion.
If you are basically talking about compiler optimisations for pre-existing code, then it's only a matter of providing more intelligent code generation in GCC. Since Intel has a much larger FOSS team than AMD, GCC 4.8 will potentially work much better for Haswell processors.
If you are talking about native use of these extensions, then, thinking from a developer's perspective, they would be happier to add a feature that the major CPU vendor supports in some of its processors (as compared to when only AMD had support for BMI and FMA3).
 

devBunny

Distinguished
Jan 22, 2012
@Chris: Thanks for the review. I, like many here, have been waiting keenly for the first glimpse. :)

When it comes to reviewing the production chip, could you use a 3930 instead of a 3970, please?

The 3930 is the six-core workhorse and the best bang for the buck, so people like myself (number crunching, little or no gaming) are interested in answering the question of whether to buy another 3930 for the stable or a Haswell (or to wait for Ivy Bridge-E, but that's a different question, and still one for Intel). The 3970 is irrelevant to the budget-conscious data miner due to its price, so comparing against one doesn't help answer the question.

The other part of the question is whether, even if Haswell is slightly slower than a 3930, its power consumption would still make it the worthier contender.

Thanks. :)
 


Many games still occasionally find old dual-, triple-, and quad-core CPUs to be significant bottlenecks. Playable, sure, but in many situations only barely, or simply not great. There is plenty of good reason to upgrade if you don't already have at least an LGA 1156 dual-core with Hyper-Threading, and even then, many games are bottlenecked by such CPUs and need more like an LGA 1155 quad-core to be truly smooth. Even then, it might take overclocking to smooth out the last few kinks where performance is CPU-bottlenecked.

So, there can be good reason to upgrade now if you don't have at least a Sandy Bridge quad-core. It'd probably be a very different situation if CPU performance improvement hadn't slowed down so much; upgrades would then not only be potentially significant, but also practical, letting you not only get more performance but actually use it.

The biggest hold-back for you with a six-core CPU is that most games still don't make use of that many cores. Per core, an AMD six-core CPU, although able to play every modern game out there, and probably every game for the next several years, can still be a bottleneck in many titles simply because they run into limits using only two to four cores effectively. Even if they could use all six effectively, an overclock, as with the Sandy Bridge i5s, may be necessary to achieve perfect performance in the most intensive gaming situations.
 