Why 8 or 10 cores?

Amberr

Sep 14, 2014
More cores slow down your clock speed. On a consumer level... what is the point? I mean, I could see them being useful if you are encoding ALL the time or something. But outside of that do they have any placation?
 
Where did you get the idea that more CPU cores slow down processing, on any level?

Each core is set to run at the same clock speed as the others.
The problem is not that today's CPUs have multiple cores, but how the software is coded to make use of them.

It's far harder to write a program that uses a set number of CPU cores than one that uses a single core.
Many programs still being written today are programmed to use a single CPU core.
If you have ever heard the term "threading": a program does not just magically use all of the CPU cores available on a given CPU; it has to be written that way (see the sketch at the end of this post).

Encoding and decoding (of video streams, for example) and compression require a lot of CPU time.
Rather than one core working through such a task alone, breaking the workload down and sharing it across cores means more data processed in less time.
Multi-core CPUs have plenty of applications, as long as the software is written to use multiple cores for processing.
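
To make that concrete, here is a minimal C++ sketch (the data and the 50/50 split are purely illustrative). The second core only gets used because the code explicitly creates a second thread:

    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<int> data(10'000'000, 1);
        const auto mid = data.begin() + data.size() / 2;
        long long a = 0, b = 0;

        // A single std::accumulate call would use one core no matter how many
        // the CPU has; splitting the range below is a deliberate coding choice.
        std::thread t1([&] { a = std::accumulate(data.begin(), mid, 0LL); });
        std::thread t2([&] { b = std::accumulate(mid, data.end(), 0LL); });
        t1.join();
        t2.join();

        std::cout << "total: " << (a + b) << '\n';  // prints 10000000
    }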
 
I remember seeing people with certain AMD 8-cores, for example, that managed to reach higher clock rates than some 2/4-core Intel CPUs. Granted, they might just have been overclocking geniuses or have had some nice silicon lottery luck,
but still, it doesn't seem to always be true that "more cores = lower speed." Lower factory stock speed, maybe,
but when you get a bad Intel quad core that basically fries at its stock boost speed, it doesn't really matter.
 
Heavy duty CPUs like those aren't intended for the consumer. Maybe the enthusiast.

AMD's FX line-up has 8 'core' CPUs, but they are really four dual-core modules with shared resources: effectively 4 full-speed cores plus 4 additional cores that cost extra clock cycles when scheduled. Similar to Intel's Hyperthreading in effect.

Intel has put a lot of effort into efficiency. This really shows up in the mobile sector (which uses the same architecture as the 'consumer' grade desktop chips).

Most consumer software and games are built around single-thread performance, so a single fast core is generally better. They don't make single-core CPUs any longer, but a dual core is enough for most people: one core runs the main app at full speed while the other deals with background tasks.

Quad cores are a nice compromise between compute capability and single-threaded performance, so the FX and i5 chips are your best choice in terms of cost/performance.

A high core count doesn't necessarily force a lower clock speed, but it generally goes that way. They do make high-clock, multi-core chips, but those usually run several thousand dollars, in the vein of the $1000 'X' enthusiast parts, and they are high-wattage (135W and up) compared to the more run-of-the-mill 70-85W consumer-class chips.

Basically, server processors are designed to handle many threads, because the machine gets loaded from many different sources at once (a web server, for example); there, power efficiency across multiple workloads matters more.

It comes down to us as consumers to set the market: they make the chips we, as consumers, want.

So you won't likely be seeing high-core-count consumer-grade chips in the near future.
 
On a consumer level, there aren't any 8 or 10 core CPUs, so I'm a bit confused here. Consumer CPUs are mainly 2 or 4 core parts. AMD markets some FX chips as 8 cores, but they are 4-module parts and are closer to 4-core CPUs than 8-core CPUs: they have 4 complete cores (ALU + FPU) and 4 incomplete cores (ALU only, no FPU). Intel makes some 6-core CPUs for its LGA2011-3 lineup, but these are not mainstream chips - they're high-end/enthusiast parts. And notice that people do get the i7-5820K up to 4GHz just like the quad-core parts, so one could flip the question around: what's the downside to more cores if the clock speed is the same? None, just price.

Simply put, more cores let you do more. A point that gets confused a lot here is that PCs do more than play games. So we'll discuss this in two ways:

1) Gaming. Most games can't use 4 cores, so the ideal gaming machine would be a very high clock rate 2-core CPU. However, many games can, and it's a continuing trend that more will, so a very high clock rate 4-core CPU is the better choice. If you are on a budget, it may make more sense to choose a very fast 2-core part over a slower 4-core part if the games you want to play are not optimized to use a 3rd or 4th core. If they are, then the extra cores will help and overcome the clock rate difference. You also get into CPU/GPU scaling, which is why at lower budgets a Core i3 is generally a good idea: a budget that does not permit a Core i5 also does not permit the kinds of GPUs that would be limited by a dual-core CPU.

2) Non-gaming. This includes general multi-tasking environments, rendering, programming, servers, and virtual machines. All of these benefit from more cores.

Multi-tasking with 2 intense applications can only happen if you have 2 cores, preferably 3 so that the OS can still respond. If each of those programs can use 2 cores, then you have a use for 4 cores.

With rendering, it's generally the case that sub-objects or frames can be done independently, so 4 cores means working on twice as many components as 2 cores, and 8 cores twice as many again. Hence, an 8-core 2.4GHz part performs somewhat like a 4-core 4.8GHz or a 2-core 9.6GHz part (the scaling isn't exactly linear due to threading overhead and whatnot, but these tasks scale very well). A rough sketch of this per-frame parallelism follows below.

With programming, compilation time is reduced roughly in proportion to the number of cores you have. Just check out reviews of Intel's 22-core CPUs - they clearly compile the fastest despite their ~2.2GHz clock speed.

And virtual machines are obvious: if you want to run two VMs on a 4-core CPU, you can give each VM one core and keep 2 for the host. That gives you two single-CPU VMs with all the shortcomings of 1-core systems, and a host with only 2 cores free. With an 8-core CPU, you can give each VM two cores and keep 4 for the host, so your system is still usable like a quad core while each VM is a dual core, free from the problems of single-core CPUs.
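
As a rough C++ sketch of that per-frame idea (encode_frame is a made-up stand-in for real per-frame work, not any actual encoder API): because the frame jobs are independent, the OS can spread them across however many cores exist.

    #include <cstdio>
    #include <future>
    #include <vector>

    // Made-up stand-in for real per-frame work (encoding, rendering a tile, ...).
    int encode_frame(int frame) {
        volatile long sink = 0;
        for (long i = 0; i < 50'000'000; ++i) sink += i;  // burn CPU time
        return frame;
    }

    int main() {
        std::vector<std::future<int>> jobs;
        for (int f = 0; f < 16; ++f)  // the 16 frames don't depend on each other
            jobs.push_back(std::async(std::launch::async, encode_frame, f));
        for (auto& j : jobs)
            std::printf("frame %d done\n", j.get());
    }

Total wall time falls roughly in proportion to core count; a real thread pool would cap in-flight jobs at the core count rather than spawning one thread per frame.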

The basic point is that clock speed vs core count is a curve, and each person's intended workload puts their ideal point on that curve somewhere different. If you want to play games, then a high clock rate 2 or 4 core CPU is best and a 5th or 6th core isn't useful, so extra cores would only make sense if they (a) provided the same clock rate and (b) cost the same - which clearly won't happen. If you want to run VMs, do rendering/encoding, or lots of compilation (i.e., a workstation), then you probably favor core count. One particularly interesting trick here is a dual-CPU system where each CPU is a high clock rate 4-core part, giving you 8 cores at a higher clock rate than you can get from any single 8-core CPU. You see Intel's Xeon-W chips tend to be monsters, and rightly so at $2500 or so.

Also I don't think you meant "placation" - perhaps "purpose"?
 


Application I assume, autocorrected.

Though the CPUs must be placated through the sacrifice of electricity. Otherwise the little men inside won't deliver your cat videos.

 


There are 8-core consumer CPUs; the 5960X and K come to mind. And they plan to release a 10-core soon. And for servers, if you are running multiple servers on the same CPU. And I meant "application" there, lol. Sorry! But with all of this, other than a lot of encoding, I do not see why the high-end CPUs would be useful outside of industrial computers. Maybe there is something I do not understand.
 
You're absolutely right - most consumer applications don't make use of more than 1-4 cores, and CPUs with more cores tend to have lower clock speeds; otherwise it would be difficult to dissipate all of the heat from a single chip, and you'd end up with either a massive cooler or a very noisy one. This is why Intel has been adding more and more iGPU power each generation rather than more cores; a modern Intel CPU only uses about 25% of its die space for the CPU cores.

There's little point for most people to have more than 2 cores with hyperthreading, so we see 8+ core chips generally only making their way into servers.
 


Seems legit. And I take it they are not even used in consumer servers either? Such as Minecraft servers or TeamSpeak or the like, since those can be hosted on one core. I mean, maybe the chips I am talking about are not really consumer grade. To me, something you can get at a store is consumer grade, and I have seen 5960 CPUs at a shop, so I just assumed they are consumer grade.
 
The number of cores your chip has is not a factor in its clock speed.
A. There are processors that have a base clock speed plus what is called turbo boost.
B. You can overclock some processors to run faster than normal all the time.

The number of cores your processor has determines how many calculations can be performed at one time.

So let's say you have a one-core processor with no hyper-threading. All software (the OS, programs) has to go through that one core, which in some cases causes unacceptable performance. Higher clock speeds can sometimes help, but the results may still be unacceptable.

Going with a higher number of cores allows the workload to be spread out, increasing performance to an acceptable level.
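
For what it's worth, software can ask how many hardware threads the machine exposes and size its workload accordingly; a tiny C++ example:

    #include <iostream>
    #include <thread>

    int main() {
        // Cores x SMT as exposed by the OS; the standard says 0 means "unknown".
        unsigned n = std::thread::hardware_concurrency();
        std::cout << "hardware threads: " << n << '\n';
    }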

When you ask "On a consumer level... what is the point?", you have to qualify that by describing the workload. There are software programs of varying complexity for which the number of cores greatly affects the end result.
 
Virtual machines. I play with virtual machines at home all the time, and if you are running multiple VMs at the same time, you will want as many cores as VMs, if you can, for good performance. Also video encoding, compression, running a game server, running an FTP server. These CPUs are for advanced users, not just enthusiasts. There are plenty of people who use professional software on the same home machines they game on, or who simply need it there.
 
Although you're right in the strictest sense that more cores do not determine clock speed, they're part of the equation. Consider why we see:

-The 5960X, Intel's most expensive (consumer) CPU, with 8 cores: 3.0GHz base, 3.5GHz turbo
-The 6-core 5930K: 3.5GHz base, 3.7GHz turbo
-The 4790K, the fastest 4-core Haswell CPU (though admittedly in a different socket): 4.0GHz base, 4.4GHz max turbo

 
People keep saying that the number of cores does not affect clock speed. Can someone break this down for me? And is it not true that performance will suffer if a process does not use all the cores? I am sorry, I am just trying to understand.
 


Look at it this way:

Highest stock clocked Pentium 4 CPU [P4 570]: 3.8 GHz
A stock 8-core CPU [FX-8350]: 4.0 GHz

The number of cores has little to do with clock speed. Maximum clock is a function of the CPU architecture. Simple example: the Pentium 4 was deeply pipelined, which allowed it to reach very high clock speeds. By contrast, the Core 2 series was not as deeply pipelined, which resulted in lower overall clocks. Another example is AMD's Phenom II versus Bulldozer: Bulldozer was more deeply pipelined, which allowed higher clocks despite having more cores.

It's true that if you don't use all the cores, you don't get the CPU's maximum performance. The performance of a CPU is determined by three main factors:

-The Clock of the CPU
-How many average instructions the CPU can execute per clock cycle
-How many computation resources (Cores) the CPU has

Take Intel versus AMD CPUs today. An FX-8350 has more cores and a higher clock than an i7-4770K. However, the 4770K does a lot more work per clock cycle, and as a result the Intel CPU is often significantly faster despite its lower clock and lower core count. The only time this does not hold is when all the cores of the FX-8350 are utilized, in which case the AMD chip tends to be faster.
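
As a back-of-the-envelope illustration (these IPC figures are invented for the example, not measured): single-thread throughput ≈ IPC × clock. If the 4770K averaged 2.0 instructions per clock at 3.5GHz, that's ~7 billion instructions/s on one core; an FX-8350 averaging 1.2 IPC at 4.0GHz manages ~4.8 billion, so one thread on Intel wins easily. With every core loaded, 8 × 4.8 = 38.4 versus 4 × 7 = 28, and the AMD chip pulls ahead - exactly the pattern described above (ignoring Hyperthreading, the FX's shared FPUs, and memory bottlenecks, all of which matter in practice).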

It's worth noting that most software can't utilize more than two or three CPU cores, and as a result both AMD and Intel are moving toward CPU cores that do more work per clock to maximize performance. AMD's Zen architecture, due out in early 2017, is promising 40% gains in work done per clock, for example.

In any case, getting back to the point: more cores don't hurt performance or affect clock speed, though obviously you don't get extra performance from cores that aren't used.
 
It is a factor of heat and energy. The higher the clock speed, the more heat and energy; the more cores, the more things generating heat on the chip. The more high-clocked parts they put on a chip, the more heat they have to deal with, and better heat sinks/coolers cost more money. You can have an 8-core chip at a really high clock speed, but then you need a way to cool it when all the cores are at full load at that clock speed. Sometimes there is simply too much heat generated.

The higher the clock speed, the more power you need. If the CPU doesn't get enough power, it becomes unstable and malfunctions. There is also a limit to how far a chip can go regardless of power; eventually it becomes less and less efficient to pump in more power to gain a few more MHz.

Neither is dependent on the other, but they do affect each other in terms of heat and power usage.
There are also, I believe, effects from leakage and interference, but those matter more for how many cores can fit on a chip and for power efficiency when chasing clock speed; that's more of an architectural concern on the design side and for the stability of the chip.
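
For a rough sense of the trade-off: the standard first-order formula for dynamic power is P ≈ C × V² × f (switched capacitance × supply voltage squared × frequency). Higher clocks usually require higher voltage to stay stable, so pushing f up drags V up with it and power grows much faster than linearly, while adding a second identical core at the same clock only roughly doubles C. Worked out: +30% frequency with +10% voltage costs about 1.3 × 1.1² ≈ 1.57x the power for a 30% theoretical gain, whereas the second core costs ~2x the power for a 2x theoretical gain.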

Now onto programs. Programs run in threads. A thread is basically a list of instructions to execute. One core processes one thread at a time; be aware that modern processors switch between threads very quickly, but each core is essentially working on only one thing at a time. If you have 2 cores, you can work through the instructions of 2 separate threads at the same time. How fast you get through the instructions is roughly (work that can be done each cycle) x (how many cycles can be done in a second). Hz is the measurement we use for cycles per second; IPC, instructions per clock, is how much can be done each cycle.

A program can have many threads, or just one. But programming an application to do more than one thing at a time is complex; people naturally think about doing things procedurally, not all at the same time. So most programs have a single thread, and no matter how many cores you have, you will see no benefit (that is, if only one thing is running; if you have a dozen programs/services running, that is at least a dozen threads). But if you split a workload evenly between 2 threads, you get roughly 2x the performance, provided you have 2 cores to process them at the same speed. A toy demonstration of that split follows below.

How fast the cores themselves are is, again, limited by money, architecture, electricity, and how much heat you can disperse. They make many flavors of CPUs because more cores = more cost, and a higher clock = more cooling.

Also, not every wafer is equal. There are defects in every batch of CPUs, and those defects determine whether a given chip can run stably at high clock speeds; sometimes cores even have to be disabled due to defects. This is called binning, if I remember the wording right, and it's why you sometimes see the same CPU sold at different price points with different clock speeds as different models, or the same model with a core disabled. Some chips are binned down simply to meet demand at a price point, some because they are slightly "defective." Not to say they won't run fine at the advertised speed, because I'm certain they will, but they may not be capable of more due to detected defects.
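
Here is a toy C++ timing demonstration of that 2-thread split (the workload is a meaningless cycle-burner, chosen only so the timing difference is visible):

    #include <chrono>
    #include <cmath>
    #include <iostream>
    #include <thread>

    // Meaningless cycle-burner so the timing difference is visible.
    void work(long iters) {
        volatile double x = 0;
        for (long i = 0; i < iters; ++i) x += std::sqrt(static_cast<double>(i));
    }

    template <typename F>
    double seconds(F f) {
        auto t0 = std::chrono::steady_clock::now();
        f();
        return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
    }

    int main() {
        const long n = 200'000'000;

        // Same total work: all on one thread, then split across two threads.
        double one = seconds([&] { work(n); });
        double two = seconds([&] {
            std::thread t1(work, n / 2);
            std::thread t2(work, n / 2);
            t1.join();
            t2.join();
        });

        std::cout << "1 thread: " << one << "s, 2 threads: " << two << "s\n";
    }

On a machine with 2 or more free cores, the second timing comes out close to half the first; on a single core it comes out about the same, because the two threads just take turns.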

This is how I understand it, but I am less proficient in the hardware aspect of computers than the software side.
 


A good explanation, but remember that it's often not trivial to break up workloads between threads. Most workloads do not scale in any appreciable way, and you also have to worry about inter-thread communication, memory access issues, and so on. As a result, doubling the number of CPU cores only doubles the THEORETICAL maximum performance of an application.

But yes, we've primarily moved to multi-core systems because of heat/power draw requirements. Take a generic 3.0 GHz CPU as an example: there are two ways to double its maximum theoretical performance:

-) Clock the CPU at 6.0 GHz
-) Double the amount of CPU cores

The first can't be done due to power/heat requirements, so the second is used to increase maximum performance. The downside is that if you are running one heavy workload that does not scale across multiple CPU cores, your application's performance will be lower than it would be if you clocked the CPU at a much higher speed.
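
This is Amdahl's law: if a fraction p of the work can be spread over n cores, the best-case speedup is 1 / ((1 - p) + p/n). For example, with p = 0.9 and n = 8, that's 1 / (0.1 + 0.9/8) ≈ 4.7x rather than 8x, and even with infinite cores the serial 10% caps you at 10x. A higher clock, by contrast, speeds up the serial portion too.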

Some tasks scale really well. Video encoding is a good example: frames (or groups of frames) can be encoded largely independently of one another, so the encoding process can scale up toward the number of frames to be encoded. Games, on the other hand, are much harder to scale beyond a few threads, which is why gaming performance is dominated by stronger individual cores.

Finally, it's worth noting that on Windows there are literally hundreds of threads running in the background that will periodically interrupt your applications to do their own work. I won't go into detail on how the Windows scheduler does this, but needless to say there is a performance impact when one of your application's threads is preempted because the OS needs to run a kernel thread. Obviously, adding more cores reduces (though doesn't eliminate) the chance this occurs.
 


Looks like you aren't aware of BOINC projects, which let consumers donate otherwise unused time on their computers to (mostly nonprofit) research projects; the projects get work done on volunteers' machines that would otherwise take centuries on the few computers they can afford. I've recently bought an 8-core computer to help various BOINC projects, mostly those involved in medical research.
 


This is a noble aspiration, but electricity is relatively expensive where I live, and I'm a young person without much financial security yet. A high-end PC drawing ~500W at 16 cents per kWh works out to around $2 per day, or $57 per month, or $700 per year of electricity donated to these projects, and I really need that $700 for other things at this stage of my life.
 


Looks like you aren't familiar with the BOINC method for donating otherwise unused time on your computer to various research projects. My most recent computer has 8 cores, with hyperthreading that makes them behave like a total of 16 cores, and BOINC uses this to keep 16 programs running at once nearly all the time, mostly for helping various types of non-profit medical research.

For example:

World Community Grid
http://join.worldcommunitygrid.org?recruiterId=480838

I've been giving them computer time since 2008. I have very little interest in gaming or VR.

 