Core i7-4770K: Haswell's Performance, Previewed

Status
Not open for further replies.

mayankleoboy1

Distinguished
Aug 11, 2010
2,497
0
19,810
^ Which brings up the argument: is it OK to offload graphics work to the CPU so that it frees up the GPU a bit?
But GPU performance has been increasing at a rate of 50-60% every node shrink. So code that's bottlenecked by the GPU today is bottlenecked by the CPU tomorrow. And since the CPU is only 10% faster, the bottleneck is bigger than ever.
We have a lovely discussion going on here: http://www.tomshardware.com/forum/352312-28-steamroller-speculation-expert-conjecture/page-42
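A quick back-of-envelope sketch of that bottleneck argument, using the post's own figures (GPU throughput +55% per node shrink as the midpoint of 50-60%, CPU +10% per generation; purely illustrative numbers, not benchmark data):

```python
# Project how the GPU/CPU performance gap widens over a few generations,
# assuming the growth rates quoted above. Illustrative only.
gpu, cpu = 1.0, 1.0
for gen in range(1, 4):
    gpu *= 1.55  # ~55% per node shrink
    cpu *= 1.10  # ~10% per generation
    print(f"gen {gen}: GPU {gpu:.2f}x, CPU {cpu:.2f}x, gap {gpu / cpu:.2f}x")
```

After three generations the gap is nearly 2.8x, which is the point being made: a workload balanced for today's GPU ends up waiting on the CPU.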
 

zanny

Distinguished
Jul 18, 2008
214
0
18,680
Looks like it is finally about time I upgrade from my 920. We'll see how prices are at launch, though, and how the 700 and 8000 series gpus benchmark. Maybe I'll just hold off till Broadwell, since I'm barely taxing my current chip...
 


We're not expecting the next generation of desktop graphics (at least the high-end models) until the end of 2013 at the earliest and early-to-mid 2014 at the latest, so it will probably be a long wait.

As for your CPU, it should get better as games become better-threaded, so I'd recommend holding off on upgrading unless Haswell is priced much better than Sandy and Ivy (I doubt that will happen unless Steamroller improves even more than I suspect and AMD continues aggressive pricing).
 

thebeastie

Distinguished
Oct 14, 2006
17
0
18,510
I was under the impression Haswell has some lower-TDP chips? These still look destined to make higher-performance x86 tablets super chunky compared to their ARM counterparts.
 


The higher TDPs are because the VRM is integrated into the CPU die, meaning that components have been moved from the motherboard into the CPU. Total system power consumption has probably still gone down as a result, so your assumption is wrong. This can give Intel an advantage, not a disadvantage, in power consumption and efficiency.

Furthermore, TDP does not equal power consumption anyway. They are two very different things with only a loose relation.
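A toy arithmetic sketch of the point about the integrated VRM (all wattage numbers here are made up for illustration, not Intel specs): moving regulation losses on-die raises the CPU's rated TDP even while total platform power falls.

```python
# Hypothetical numbers only: show how an on-die VRM can raise the CPU's
# power rating while lowering platform power overall.
board_vrm_loss_w = 10.0   # assumed loss in a motherboard VRM (old layout)
cpu_only_w       = 70.0   # assumed CPU power excluding regulation losses

old_platform = cpu_only_w + board_vrm_loss_w        # loss counted off-die
new_cpu      = cpu_only_w + 0.8 * board_vrm_loss_w  # on-die, assumed slightly more efficient

print(f"rated CPU power: {cpu_only_w:.0f}W -> {new_cpu:.0f}W (up)")
print(f"platform power:  {old_platform:.0f}W -> {new_cpu:.0f}W (down)")
```

The rating goes up because the same losses now count against the CPU package, not because the platform burns more power.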
 

rrbronstein

Honorable
Jan 2, 2013
68
0
10,640


It wasn't a major issue; it's something that's good to catch now so it can be fixed by release. It was just something to do with a sleep state, and it only affected Adobe software. You act like it's some major performance-crushing factor; it's a very negligible one.

 
As has already been said quite a few times by now, no one with a SB/IB K chip should even bat an eye at Haswell. It's just not worth it, when you're OC'd to 4.5+ currently.

I myself am still waiting a while. My 2500K @4.5 is still humming along quite nicely, thank you.
 

cangelini

Contributing Editor
Editor
Jul 4, 2008
1,878
9
19,795
[citation][nom]devBunny[/nom]@Chris: Thanks for the review. I, like many here, have been waiting keenly for the first glimpse. :) When it comes to reviewing the production chip, could you use a 3930 instead of a 3970, please? The 3930 is the 6-core workhorse, best bang for the buck, so people like myself (number crunching, little or no gaming) are interested in answering the question of whether to buy another 3930 for the stable or a Haswell (or to wait for Ivy Bridge-E, but that's a different question, and still one for Intel). The 3970 is irrelevant to the budget-conscious data miner due to its price, and so comparing with one doesn't help answer the question. The other part of the question is whether, even if the Haswell is slightly slower than a 3930, its power consumption would still make it the most worthy contender. Thanks. :)[/citation]
Certainly--I'll be adding more competing processors and not worrying as much about matching clock rates and stuff. I do appreciate the feedback!
 

cangelini

Contributing Editor
Editor
Jul 4, 2008
1,878
9
19,795
[citation][nom]mayankleoboy1[/nom]Any updates here, Chris ?[/citation]
I have the automation team in Germany working on it for us. I gave them the heads-up as soon as I saw your comment last night :) Not going to matter until the official launch, though, since I won't have access to Haswell again until then, more than likely.
 

gamebrigada

Distinguished
Jan 20, 2010
126
0
18,680
[citation][nom]twelve25[/nom]Obviously with AMD struggling, Intel has no need to really stretch here. This is another simple incremental upgrade. Good jump from socket 1156, but I doubt many 1155 owners will feel the need to buy a new motherboard for this.[/citation]

That's only semi-true. With Xbox Infinity rumored to be AMD-based, the PS4 confirmed AMD-based, and the Wii U already AMD-based, I truly think this is an honestly big win for AMD, and it will give them some backup cash to get their ideas on the right path. I honestly think Intel is gonna start having its ass kicked and we're finally gonna have some real competition.
 


AMD getting money in doesn't mean that they're not struggling and it most certainly doesn't stop their CPUs from falling behind in per core performance and load power efficiency. I'd wait to pass judgement until after we can see more of the big picture for what's going on.
 

Sudhakar2k

Distinguished
Dec 13, 2008
5
0
18,510
I've held off on building my new PC for a long time, and I have to say the wait has been a huge disappointment. While I think a 7-13% increase at the same clock speed is acceptable, everything else is not.

First, there are no clock increases. Two and a half years and Intel is stuck at 3.5 GHz. I was hoping that high-end Haswell would be clocked at 4 GHz. Second, all of the hype about the GPU was just that: hype. After all of the waiting, the improvement is a paltry 25%? At the very least you'd hope one or two desktop chips would have GT3 graphics, so users would have a power-efficient way to get both CPU and GPU performance.

The problem here is that Intel is playing it too conservative (because they are a monopoly), and the consumer is suffering because of it. This is coming from an Intel shareholder.
 

tului

Distinguished
Aug 20, 2010
193
0
18,680
[citation][nom]cangelini[/nom]Thanks--and yeah, VT-d is being excluded from these K-series parts, too. Funny thing is that it'll be enabled on the -4770, but not the -4770K.[/citation]
Typical corporate greed. Governments do lots, but won't step in to protect consumers from such practices. What a waste.
 

mayankleoboy1

Distinguished
Aug 11, 2010
2,497
0
19,810
[citation][nom]cangelini[/nom]I have the automation team in Germany working on it for us. I gave them the heads-up as soon as I saw your comment last night :) Not going to matter until the official launch, though, since I won't have access to Haswell again until then, more than likely.[/citation]

And this is why i love Toms :)
 


Once the National Processor of China is hashed out, Intel will have its competition.
 

hixbot

Distinguished
Oct 29, 2007
818
0
18,990
So the clock-for-clock bench shows only a 3% improvement over IB in a single-threaded app. Is that within the margin of error?

Is it safe to assume that the more substantial improvements seen in the other benches are IPC-related, or could they be due to a more aggressive Turbo?
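One way to untangle the two is to normalize each score by the effective clock during the run; whatever gain survives the normalization is per-clock (IPC), and the rest is Turbo. A minimal sketch with made-up placeholder numbers, not figures from the review:

```python
# Separate IPC gains from clock (Turbo) gains by normalizing score per GHz.
# All inputs below are hypothetical placeholders.
def ipc_gain(score_new, clk_new_ghz, score_old, clk_old_ghz):
    """Relative per-clock improvement after factoring out frequency."""
    return (score_new / clk_new_ghz) / (score_old / clk_old_ghz) - 1.0

# e.g. a 10% higher score, but Turbo also ran 5% faster:
gain = ipc_gain(110, 3.9 * 1.05, 100, 3.9)
print(f"clock-normalized gain: {gain:.1%}")
```

In that example roughly half the headline 10% gain evaporates once the higher effective clock is accounted for, which is exactly the distinction being asked about.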
 

those have been there for a while. sandy bridge had i5 2390T, i forgot the exact model but ivb had a core i5 T too. nehalem/westmere had dual core i5 (e.g. 655k), but no T cpus iirc.
edit: ivy bridge had core i5 3470T. all T cpus have 35w tdp. T is for [strike]totally gonna price-gouge customer for low tdp desktop cpu[/strike] 'power optimized lifestyle'.
 


Ouch, they should just call it an i3 + HT... Though I suppose those would sell less compared to calling it an i5 to the masses.
 

according to intel, desktop core i3 [strike]shouldn't[/strike] doesn't have turbo boost, so cpus with turbo boost get i5 moniker.
 

InvalidError

Titan
Moderator
[citation][nom]icemunk[/nom]5-10% improvements are meh. I remember the days of 50% increases between generations.[/citation]
And most of that improvement came from ~40% increases in both clock speeds and power draw, until CPUs started hitting the 125-150W range, at which point governments and environmental agencies all over the world started demanding more power-efficient computers, servers, and datacenters.

Since most power-efficient and area-efficient architectural improvement tricks have been tapped out and silicon appears to have reached its practical mass-production yield limits at 3.5-4GHz, small improvements (aside from adding more cores/threads) are pretty much the only thing left to look forward to.

Since today's Pentiums are already overkill for most people's everyday needs (which wasn't the case yet back in the 125W Netburst's days), there isn't much of a point in sacrificing power-efficiency for higher clocks... even more so today, with almost everything going mobile or embedded.
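The efficiency argument above follows from the usual CMOS dynamic-power relation, P ≈ C·V²·f: pushing frequency typically also requires more voltage, so power grows much faster than performance. A sketch with illustrative numbers (the "+20% clock needs ~+10% voltage" figure is an assumption for the example, not a measured rule):

```python
# Why chasing clocks costs efficiency: dynamic power scales as C * V^2 * f,
# and higher frequency usually demands higher voltage. Illustrative only.
def dynamic_power(cap, volts, freq_ghz):
    return cap * volts**2 * freq_ghz

base   = dynamic_power(1.0, 1.00, 3.5)
pushed = dynamic_power(1.0, 1.10, 3.5 * 1.20)  # +20% clock, assumed +10% V

print(f"perf +20%, power +{pushed / base - 1:.0%}")
```

A 20% performance bump costing roughly 45% more power is exactly the trade-off that stops making sense once chips are already power-limited.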
 