CPUs: GHz or Cores?

xxcrankzgamesxx

Oct 19, 2017
So the 8th generation of Intel CPUs came out for the i3, i5, and i7. I've seen that the i3 has 4 cores at 4.0 GHz. The i5, however, has 6 cores at around 3.6 GHz. So which is better?
 
I'd say the i5; more threads = better spread workload.
If you're a multitasker, the extra cores come in handy.
If you don't do a lot at once, or just very simple stuff, you'd be better off with the i3, which is also a tad cheaper.

But to be real, 4 cores is a fine amount, so unless you get into something like Photoshop or really CPU-heavy games, the difference will be minimal.

It depends on your budget and your use case.
If you want to play games and do casual photo and video editing, the i3 will be your main choice.
But if you want to edit 1080p video with shorter render times, and also want good fps in most AAA titles, then you can go for the i5. You can compare both CPUs at game-debate.com; you'll find most of the differences and details about them there.
 

No desktop i5s, Coffee Lake or otherwise, have Hyper-Threading.

@OP the i3 will be slightly better in lightly threaded loads, but the i5 will be significantly faster in more heavily threaded loads. I would definitely choose the i5 in almost any situation. There's a reason it costs more.
 


It depends on what you are planning to do with the CPU. The i3-8350K is locked at 4.0 GHz on all cores all the time, whereas the i5-8600K will boost up to 4.3 GHz under light workloads. The i5 is "better" due to more cores and a higher boost clock.
 


 
Solution
Modern titles are utilizing more and more of your CPU, and today's programs and games increasingly benefit from more cores. If you think about it mathematically, there is a bit of a connection: picture 4 cores at 4.0 GHz; that is 16 GHz of total processing power. Now picture 6 cores at 3.8 GHz: nearly 23 GHz. Not exactly how it works, but you get the picture. If you are looking to play games at medium/high settings and do light to moderate work on your PC, the i3 will be fine. If you are looking for the best performance and future-proofing, the i5 will be your best bet.
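To put a rough number on that "not exactly how it works" caveat: real workloads scale more like Amdahl's law predicts, because some fraction of the work stays serial. Here's a quick Python sketch comparing the naive cores × clock figure with an Amdahl-adjusted one; the 90% parallel fraction and the clock figures are made-up illustrations, not measured numbers.

# Naive "total GHz" vs. an Amdahl's-law-adjusted estimate.
# The 90% parallel fraction below is a made-up illustration.

def naive_total_ghz(cores, ghz):
    return cores * ghz

def amdahl_speedup(cores, parallel_fraction):
    # The serial part doesn't scale; only the parallel part splits across cores.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

chips = [("4c @ 4.0 GHz (i3-ish)", 4, 4.0), ("6c @ 3.8 GHz (i5-ish)", 6, 3.8)]
for name, cores, ghz in chips:
    eff = ghz * amdahl_speedup(cores, 0.9)  # per-core speed x effective scaling
    print(f"{name}: naive {naive_total_ghz(cores, ghz):.1f} GHz, "
          f"adjusted ~{eff:.1f} GHz-equivalent")

With those assumed numbers, the adjusted gap between the two chips comes out around 24%, noticeably smaller than the ~43% the naive GHz math suggests, which is exactly why cores × clock overstates the difference.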
 
Don't look at GHz or cores alone. In fact, clock speeds haven't really changed much in the past 20 years; they're still hovering around 3 GHz. Cores just divide up the transistor count, so having two cores instead of one doesn't equal 2x the power; in fact, it actually makes the CPU a bit slower. They're not adding extra cores for raw power (unlike what the guy above said about having 16 GHz, lol). Look for real benchmarks from well-known platforms, such as https://www.anandtech.com/bench/CPU/1603

 
@lauris3722 in well-threaded, CPU-bound applications, doubling the core count can very nearly double the performance. In the link you posted, if you compare an i7-7820X to an i7-7700 (same architecture, nearly the same clock speed, double the core count), the Cinebench score is almost exactly doubled.
 
@lauris3722 I meant to say same micro-architecture, as in the 7820X should have about the same instructions per clock as the 7700. Both CPUs are based on the Skylake µarch (well, the 7700 is Kaby Lake, but Kaby Lake is just Skylake on a more refined process).

But you're right, the 7820X has different per-core cache sizes and quad-channel memory. So if you don't like that comparison, look at the i3-7300 vs. the i7-7700 instead: same micro-architecture, same amount of cache per core, same frequency (looking at the 4-core turbo speed of the 7700), but double the core count. Cinebench scores of 429 vs. 864.
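If anyone wants to reproduce that kind of near-linear scaling at home, here's a small Python sketch using an embarrassingly parallel, CPU-bound workload; the chunk sizes are arbitrary, and this is just an illustration of the effect, not Cinebench itself.

# Time the same fixed amount of CPU-bound work with 1, 2, and 4 workers.
# On a CPU with enough physical cores, doubling the workers should come
# close to halving the wall-clock time.
import time
from multiprocessing import Pool

def burn(n):
    # CPU-bound busywork: sum of squares up to n
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    work = [2_000_000] * 8  # 8 equal chunks of work
    for workers in (1, 2, 4):
        start = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(burn, work)
        print(f"{workers} worker(s): {time.perf_counter() - start:.2f}s")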
 
I don't want to get into specifications right now, but if you're right, how is that possible? Correct me if I'm wrong, but having two cores versus one basically enables your processor to do two calculations at the same time, but at half the speed, since you're not using the CPU as a whole but rather a split part of it (a core). This actually caused problems for old games like C&C Generals: Zero Hour, where multi-threaded CPUs lagged like hell (since the game wasn't designed for them), while a single-core CPU ran perfectly, because the old CPU was actually more powerful than one of the four cores on a significantly newer CPU. The whole idea behind multi-threading is to reduce latency (and thereby improve caching), power usage, and temperatures, but that could not possibly double your CPU power.
 
Yes, having two cores allows your CPU to perform two calculations at once. But they're done at full speed, not half. In a dual core CPU, all the execution resources are duplicated in each core*, so you're not splitting any resources by using both cores (the cores can't combine the resources to do things twice as fast when it only needs to do one calculation at a time).

If older games performed worse on a dual core, I can only assume it was because either:
- The dual core was clocked slower than the comparable single core (so if the game only took advantage of one core, which was probably the case for old games, it would be slower);
- The CPU was hyperthreaded (older implementations of Hyper-Threading weren't that great and could lower performance in some cases); or
- There was some weird interaction between the older software and the dual-core architecture that hurt performance (maybe it would bounce between cores or something? the pinning sketch below is one way to test that).

*In some cases not all resources are duplicated. For example, in AMD's old Steamroller architecture, one floating-point unit was shared between the two cores that formed a module.
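For what it's worth, if you ever wanted to test that core-bouncing theory, here's a minimal sketch that pins a process to a single core so you can compare timings against an unpinned run. This assumes Linux: os.sched_setaffinity doesn't exist on Windows, where you'd need something like psutil's cpu_affinity instead.

# Pin the current process (pid 0 = self) to core 0, then run the
# single-threaded workload and compare against an unpinned run.
import os

os.sched_setaffinity(0, {0})
print("running on cores:", os.sched_getaffinity(0))
# ...start the workload here and time it...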
 
all the execution resources are duplicated in each core*, so you're not splitting any resources by using both cores
This got me confused. Could you elaborate? What exactly do you mean by resources? Surely not processing power? And surely not duplicated, but shared (if you're talking about cache)?

The dual core was clocked slower than the comparable single core (so if the game only took advantage of one core, which was probably the case for old games, it would be slower);
The single-core CPU I had was a Sempron 3400 running at 1.8 GHz. The one I compared it to was a Phenom 965, which I had OC'd to 4.2 GHz. It performed much worse in a game that could only use one core.

This is how CPU usage looks in a game from 2009 (popular title called S.T.A.L.K.E.R.)
[screenshot of per-core CPU usage]
 

No, I mean duplicated: instruction decoder, arithmetic logic unit, floating-point unit, branch predictor, etc. L1 (and I believe L2) cache is also dedicated to each core, with the L3 cache being shared among cores. Basically, all the building blocks used to execute an instruction are duplicated in each core.

I have no idea why that game performed worse with the Phenom 965, although the CPU usage looks like what I'd expect from an older, single-threaded game, i.e. 100% usage of one core and next to nothing on the others.
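If you're curious, on Linux you can actually see that per-core vs. shared split for yourself. A minimal sketch, assuming the standard sysfs layout (paths can vary by kernel and CPU), that prints which CPUs share each cache:

# Print the sharing list for each cache level of cpu0. On a typical
# quad-core you'd see L1/L2 shared only by one core (plus its sibling
# hardware thread, if hyperthreaded) and L3 shared by all cores.
from pathlib import Path

cpu0 = Path("/sys/devices/system/cpu/cpu0/cache")
for idx in sorted(cpu0.glob("index*")):
    level = (idx / "level").read_text().strip()
    ctype = (idx / "type").read_text().strip()
    shared = (idx / "shared_cpu_list").read_text().strip()
    print(f"L{level} {ctype}: shared by CPUs {shared}")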
 
Thank you for shedding light on this. As it turns out, I didn't actually know how cores worked; I was positive they actually slowed CPUs down, but that type of architecture makes more sense and indeed would improve performance in most cases.
 
The i5-8400 is better than the i3-8350K in most cases, but in some cases the i3 wins due to its overclocking capability.

Skylake (server) isn't the same architecture as Skylake (client). Get your facts straight.

Again, no, they're not the same micro-architecture. Skylake (server)'s cache structure (and performance) changed drastically, and the execution units and data paths were widened to 512 bits.
 
