Core i7-4770K: Haswell's Performance, Previewed


bigj1985

Distinguished
Mar 12, 2010
331
0
18,810
I just see no point in upgrading from my 2600K for this. I've been sitting pretty at 4.5 GHz for over a year now without a single hiccup in anything CPU intensive. This 2600K just eats everything I run alive and I'm barely pushing its limits.

If I were to upgrade based on these results, I would shave a few measly seconds in heavily threaded workloads at best, since I care nothing for the IGP.

I was saving up to go Haswell, but from the looks of things I might as well blow my wad on a 4GB GTX 680.
 


To be fair, the same logic that tells you not to go for an i7-4770K when you already have a decently overclocked i7-2600K also tells you not to go for a GTX 680 4GB over a GTX 670 2GB.
 


LOLs... I wanted to say that too... BUT anyway...
 

pepe2907

Distinguished
Aug 24, 2010
643
0
19,010
"@twelve25 But who does Intel really need to convince here? Trying to chase after people who upgrade every year is a fools errand..."

And then there are the reports of a 14% yearly decline in the PC market...
Intel may think it's playing smart, but nobody is buying - literally. And Intel isn't the only one suffering.
 

bigj1985

Distinguished
Mar 12, 2010
331
0
18,810



Ya lol, I actually totally meant the 4GB 670, NOT the 680.
 
Overall it looks like the HD 4600 is 5-15% faster than the HD 4000, depending on usage and application. 3DMark 11, being purely synthetic, showed the HD 4600 as only nominally better than the HD 4000, which is extremely disappointing. The x86 and memory subsystems look improved but not spectacular. I wonder what Intel will do to market these. If you have a Sandy, or better an Ivy, then you really only have a sidegrade, as x86 is slightly better, while nobody is going to use an iGPU on a $250 or $380 part.
 


Good... :)
 

MakisPswmiadhs

Honorable
May 3, 2012
1
0
10,510
I am waiting to see when it's going to be worth it for me to upgrade my Core 2 QX9650. The performance gain over IB is still too minimal to make an upgrade worth it. Seeing as there is a seemingly inexhaustible supply of Socket 775 mobos on eBay, it looks like I am going to be sitting on my QX9650 for a loooooong time still.

The only reason for me to upgrade would be if my CPU were unable to play the newest MMORPGs on the market at 1920x1080 ULTRA, but seeing as I can still play WoW and SWTOR at said settings and still get decent performance/FPS with my GTX 580 SOC, there is no point in upgrading yet.
 

JonnyDough

Distinguished
Feb 24, 2007
2,235
3
19,865
The funny thing about CPU previews/reviews today is that they don't really matter for most people anymore. If you were to chart gaming, internet, and video/audio encoding/decoding performance across all the processors of the last ten years, you'd realize that if you already own a chip built within the last four or five years, there's almost no point in upgrading. My old AM2+ chips that I bought for $35 recently don't do anything differently for me than my Core i5.

Systems really have gotten "fast enough," and we're waiting on software and drivers that are optimized together to really show us something different from the applications we already have. E.g., when it comes to switching between windows on my Android phone, I actually prefer it to slide rather than carousel. Sometimes simplicity is a great thing. Kind of like Windows 8 and touchscreens: the mouse and keyboard are bad for my hand/wrist, but they are pretty intuitive.

When something works well for the consumer (which I am), I see little need to replace it. My car works fine, gets decent gas mileage, and it's safe enough. I don't need a shiny new high-end car. I also don't need Haswell. In a few years it WOULD make sense for me to upgrade, if only to save a few more seconds, run cooler, and lower my electric bill. For now, though, it's irrelevant to me.
 

JonnyDough

Distinguished
Feb 24, 2007
2,235
3
19,865
Am I the only one with the feeling that Intel is holding back some of their technology? Sure, they suck at graphics, but why show all of their cards if there's another round to play? They will maximize profits, and that means that until they see what AMD comes up with, they won't be putting out chips that eclipse their current lineup by much. Why would they? There is money to be made!
 

anubis44

Distinguished
Jul 22, 2008
71
0
18,640
[citation][nom]de5_Roy[/nom]nice preview!i was expecting something richland related. this came outta nowhere.[/citation]
[citation][nom]mayankleoboy1[/nom]Because their performance sucks in comparison to the latest Xeons, as tested by Anandtech a few days back.[/citation]
Is the 8350 10 times worse than the Xeon? If not, you're being obtuse, because it's 10 times cheaper.
 

Where in my post did I say the FX-8350 is 10 times worse than a Xeon?
 

ojas

Distinguished
Feb 25, 2011
2,924
0
20,810

I do feel they're holding back on technology... but I think they're waiting for the right time.

I mean, how many programs use more than 8 threads? So they make the Xeons 8C/16T parts, and those that don't make the cut are sold as Extreme Edition CPUs.

Selling 6 or 8 cores for the mainstream doesn't make a whole lot of sense for them at the moment, I suppose. Those chips are pretty big and complicated too, so I guess they're too expensive to mass-market.

I remember a slide showing a 15-core Haswell-E part, so maybe Haswell-E is when i7s cross 6 cores, and with Skylake they'll start offering 4/6/8 cores for the mainstream... software would have caught up by then, and with DDR4 it would give them a much better story to market to enthusiasts complaining about "only" 15% year-over-year improvements.

2015/2016 is also around the time current-gen consoles should hit their peak in terms of the performance they can output, and AMD will move fully to HSA too... Intel will HAVE to make 6/8 cores mainstream, if not go the HSA route itself.

I do disagree with you that a five-year-old processor is still viable today for anything other than basic productivity; my Q8400 is a bottleneck now in a lot of things I do. Looking to move to Haswell, probably an i7.
 
They aren't competing with AMD anymore. They're competing with themselves and ARM.

You won't see more cores until they decide they can afford lower energy efficiency and larger chips. Right now their goal is clearly to scale Core down to a size and power figure that competes with ARM, not to scale up desktop performance.
 

InvalidError

Titan
Moderator

Multi-cored or multi-threaded CPUs have been around for almost a decade now and there is still very little mainstream software that makes meaningful use of more than one thread.

Since a decade has proved insufficient for most PC developers to adopt meaningful, pervasive multi-threading, I doubt another decade will have much effect on that beyond high-end games, scientific applications, and other generally non-mainstream compute-intensive tasks.

Sure, runtime environments and support libraries are spewing out dozens if not hundreds of threads but the bulk of processing time still ends up in one, maybe two threads.

Using Process Explorer to spy on my programs' threads, I often see that while many have 20-50 threads, most of their CPU usage is concentrated in only 1-2 threads with the rest being under 0.01%. The game I am currently playing has ~24% CPU load on one thread (practically full-load on one core) and 4% in a second thread with the remainder of threads being either sleeping or sub-0.01%... technically multi-threaded but clearly not getting much benefit from it.
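For anyone who wants to reproduce that kind of per-thread breakdown programmatically, here's a minimal Python sketch using the psutil library (my substitution; InvalidError used Process Explorer). The PID is a hypothetical placeholder.

```python
# Sample a process's cumulative per-thread CPU times twice and diff them
# to see how its load is spread across threads (roughly what Process
# Explorer shows). Requires: pip install psutil
import time
import psutil

def thread_load(pid, interval=5.0):
    proc = psutil.Process(pid)
    before = {t.id: t.user_time + t.system_time for t in proc.threads()}
    time.sleep(interval)
    after = {t.id: t.user_time + t.system_time for t in proc.threads()}
    # CPU seconds each thread burned during the interval, busiest first
    for busy, tid in sorted(((after[i] - before.get(i, 0.0), i)
                             for i in after), reverse=True):
        print(f"thread {tid}: {100.0 * busy / interval:5.2f}% of one core")

thread_load(1234)  # 1234 is a placeholder PID for the game/program to watch
```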
 

flong777

Honorable
Mar 7, 2013
185
0
10,690


That is exactly the problem with choosing the i7-3960X ($1,000) over the i7-3770K ($320). In most applications the 3770K matches the 3960X, and in some benchmarks it BEATS it. The two extra cores on the 3960X just aren't getting used in most applications.

 

ojas

Distinguished
Feb 25, 2011
2,924
0
20,810


Not sure you can compare the two time periods. Remember, mobile is already at the quad-core stage, consoles will be sitting on 8 integer cores, and the minimum core count of any PC CPU today is 2.

This decade is THE decade of multi-threaded software. Did you know, for example, that simultaneously opening 50+ tabs in Chrome pushes at least 4 threads to the max? Not something I do every day (in fact, rarely), but I find it interesting that it's there.

With x86 reaching almost all computing segments where parallelism can be used, I think we'll start seeing more of it.

Also consider this white paper from Intel that was presented in 2005:
http://epic.hpi.uni-potsdam.de/pub/Home/TrendsAndConceptsII2010/HW_Trends_borkar_2015.pdf
 

InvalidError

Titan
Moderator

The "minimum core count" in PCs has been two for several years already if you count SMT and the main reason mobile/low-power CPUs have 4-8 core is to try making up for their ~1.5GHz clock rates, lower IPC, smaller caches, etc. without that much success since many dual-core models get better scores and user reviews than quads.


The problem with that statement is that individual cores on modern conventional CPUs are already fast enough for a huge number of applications that developers have very little motivation to bother with the extra complexity of making their code explicitly threaded in any significant (performance-wise) way.

If you read the Android developers' guide, most recommendations about threading aren't about getting more performance out of the CPU but about deferring/delegating trivial stuff that would stall the main thread, like (down)loading images. That sort of threading doesn't provide much extra performance, but it makes applications feel more responsive, and it would still deliver most of its benefits even on a single-core CPU, since the main objective is to delegate the task of waiting for storage or the network to the runtime/OS.
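To make that concrete, here's a small Python sketch of the same pattern (not Android code; the URL is illustrative): the main thread hands a blocking download to a worker and stays free, which helps responsiveness even on a single core, because the worker spends its time waiting, not computing.

```python
# Threading for responsiveness rather than throughput: delegate the wait.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

pool = ThreadPoolExecutor(max_workers=2)

def fetch(url):
    with urlopen(url) as resp:  # the worker blocks here, the main thread doesn't
        return resp.read()

future = pool.submit(fetch, "https://example.com/")  # illustrative URL
# ... the main thread keeps handling input and drawing here ...
data = future.result()  # collect the bytes once they've arrived
```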


Tabbed browsing is a rather poor example of multi-threading, since each tab is an almost entirely independent process, which means it poses no particular threading challenge (you are basically solving 50 isolated problems). And at the end of the day, only the single visible tab matters, so the real-world benefit of that architectural quirk is slim to none.

If you want a threading challenge, try parallelizing individual instances of gcc or similarly intrinsically serial code, where everything that happens later depends on what happened earlier. Most code that relies on user input or contextual interpretation of input streams falls into this category. This is the sort of code that gives nightmares to people who need more performance out of it.
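To illustrate the contrast with a toy Python sketch (the stand-in functions are my own): 50 isolated jobs parallelize trivially with a process pool, while a chain where each step needs the previous result runs on one core no matter how many are available.

```python
# Easy case: 50 independent problems (tabs, separate compile jobs).
# Hard case: an inherently serial dependency chain.
from multiprocessing import Pool

def isolated_job(n):             # stand-in for one tab / one gcc invocation
    return sum(i * i for i in range(100_000)) + n

def serial_step(state):          # stand-in for one stage of serial code
    return (state * 31 + 7) % 1_000_003

if __name__ == "__main__":
    with Pool() as pool:         # scales with core count
        results = pool.map(isolated_job, range(50))
    state = 1
    for _ in range(50):          # cannot be parallelized: each step
        state = serial_step(state)  # depends on the one before it
```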
 

Matt Shapiro

Honorable
May 2, 2013
1
0
10,510
Personally, I'd only switch my SB for an IB because new graphics cards are PCIe 3.0. This new tech is pretty much identical, minus the very low power consumption. Someone said most people will wait until they're at 4.2-4.5 before thinking of upgrading. With the socket switch on top of that, I can easily say I'll wait.
 


Yes... but they're backward compatible with PCIe 2.0, and currently not a single GPU out there (even a Titan, 7990, or 690) can saturate a PCIe 2.0 x16 connection (and those three will just barely saturate a PCIe 2.0 x8 connection).

PCIe 3.0 is a marketing gimmick. It will be 2-3 more generations of GPUs before it shows even a slight performance boost over PCIe 2.0 x16.
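For reference, the arithmetic behind that bandwidth claim, as a quick sketch from the link rates and encodings of the two standards (per-direction figures):

```python
# Per-direction PCIe bandwidth: PCIe 2.0 runs 5 GT/s per lane with 8b/10b
# encoding, PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding.
def pcie_gb_per_s(gt_per_s, encoding, lanes):
    return gt_per_s * encoding * lanes / 8  # GB/s in each direction

print(f"PCIe 2.0 x16: {pcie_gb_per_s(5, 8/10, 16):.2f} GB/s")    # 8.00
print(f"PCIe 2.0 x8:  {pcie_gb_per_s(5, 8/10, 8):.2f} GB/s")     # 4.00
print(f"PCIe 3.0 x16: {pcie_gb_per_s(8, 128/130, 16):.2f} GB/s") # 15.75
```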

 