ZTE Apache May Become First Eight-Core Smartphone

[citation][nom]A Bad Day[/nom]The problem with single-cores is that as you scale up the clock rate, you have to also increase the voltage, which really kills energy efficiency. If a software supports multi-core, a quad-core processor clocked at 1 GHz would be more efficient than a single core clocked at 3.5 GHz. The only issue is that a lot of mobile applications support only dual core at best.[/citation]

A dual-core at 2-2.5GHz would be a great middle-ground option, assuming an otherwise identical SoC for all of the options here :)

Even better, quad-core models are excellent for multi-tasking even if they don't do much for individual applications, since most apps don't scale well across more than one or two threads (although on PC/Mac that has actually improved greatly, and most common categories of software now have at least one or two excellent programs that scale very effectively).
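
To put rough numbers on the quoted voltage/clock trade-off, here's a back-of-the-envelope sketch (Python) using the usual dynamic-power relation P ≈ C·V²·f; the capacitance and voltage figures are invented for illustration, not taken from any real SoC:

# Back-of-the-envelope only: dynamic power goes roughly as cores * C * V^2 * f.
# The voltages below are made up purely for illustration.
def dynamic_power(cores, freq_ghz, volts, cap=1.0):
    return cores * cap * volts ** 2 * freq_ghz

single_3_5 = dynamic_power(cores=1, freq_ghz=3.5, volts=1.30)  # high clock forces high voltage
quad_1_0   = dynamic_power(cores=4, freq_ghz=1.0, volts=0.90)  # low clock runs at low voltage
dual_2_2   = dynamic_power(cores=2, freq_ghz=2.2, volts=1.05)  # the middle-ground option

print(round(single_3_5, 1), round(quad_1_0, 1), round(dual_2_2, 1))
# ~5.9 vs ~3.2 vs ~4.9 arbitrary units, for 3.5 / 4.0 / 4.4 "GHz worth" of total throughput,
# if (and only if) the workload actually spreads across all the cores.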
 

livebriand

Distinguished
Apr 18, 2011
1,004
0
19,290
[citation][nom]coder543[/nom]again, this will NOT BE RUNNING 8 CORES AT ONCE.Only 4 will be active at a time.[/citation]
So what's the point? Am I missing something here?
 

agnickolov

Distinguished
Aug 10, 2006
520
0
18,980
[citation][nom]livebriand[/nom]So what's the point? Am I missing something here?[/citation]
What you, and everybody here, are missing is quality journalism. The article should have explained that the so-called 8-core CPU in reality has only 4 simultaneously active cores; the other 4 are always dormant / switched off. So as far as applications are concerned, this is a quad-core CPU.
 

razor512

Distinguished
Jun 16, 2007
2,130
68
19,890
Why can't they use modern CMOS and CCD tech to make a 4-5 megapixel sensor so that larger pixels could be used, which would allow for less image noise at higher ISO (at least the pixel density of a standard APS-C sensor)?

While higher megapixel counts may be a selling point, I have never seen a cellphone camera that delivered anything close to its rated megapixels in terms of detail resolution. Furthermore, I have an old 6 megapixel DSLR that offers significantly more image detail and better color quality than any of the current sample cellphone images.

A high resolution is useless if the camera cannot resolve details down to the 1-pixel range. If the smallest detail takes 2 pixels, then your effective resolution is half, meaning a 12 megapixel camera functions more like a 6 megapixel camera (many low-cost point-and-shoot cameras and cellphone cameras may have their smallest resolvable detail spanning 3-4 pixels, making the effective resolution 1/3rd to 1/4th of the advertised resolution).

If they were to use a lower pixel density on the CMOS sensor, they could make a cellphone camera with significantly better low-light performance, or use better sensor elements that capture color and luminance info at a greater bit depth. That would improve image quality far more than extra resolution does.

For example, have you ever taken a picture of something with a point-and-shoot and then seen a similar subject shot with a DSLR? The differences are night and day in terms of the definition of the details (e.g. pictures of people); the DSLR looks almost 3D in terms of the depth of the objects (not talking about depth of field).

That is due to the higher bit depth and better dynamic range of the sensor. The DSLR allows more individual tones to be present across a single object or someone's face, which makes images look less flat. To get those results you need better sensor elements, which take more space (if DSLRs used the pixel density found in cellphone cameras, we would be in the 400+ megapixel range; it is not done because it would not yield better results than a sensor that can offer 16 bits per channel and 13 stops of dynamic range, compared to a cellphone's 8 bits or less per channel and 0.5 to 1 stop of dynamic range).
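
A quick sanity check on the resolution and bit-depth arithmetic above (a rough sketch that treats "pixels per smallest resolvable detail" as a straight divisor of the rated megapixel count, exactly as the post does, and reuses the bit-depth and dynamic-range figures quoted there):

RATED_MP = 12
for pixels_per_detail in (1, 2, 3, 4):
    print(f"{pixels_per_detail} px per detail -> ~{RATED_MP / pixels_per_detail:.0f} effective MP")
# 2 px -> ~6 MP, 3 px -> ~4 MP, 4 px -> ~3 MP, matching the 1/2 to 1/4 figures above.

print(2 ** 16, "tonal levels per channel at 16 bits vs", 2 ** 8, "at 8 bits")   # 65536 vs 256
print(2 ** 13, ": 1 brightness ratio over 13 stops vs", 2 ** 1, ": 1 over 1 stop")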
 

Pherule

Distinguished
Aug 26, 2010
591
0
19,010
I don't really care about 8 cores on a laptop at this stage, but I certainly do want a 64-core desktop machine! Intel started experimenting with 40+ core CPUs years ago. They're just milking the market for all it's worth. Sandy Bridge i7s should have been either 6 or 8 cores by default already, and Ivy Bridge i7s 8 to 12 cores.
 
[citation][nom]Pherule[/nom]I don't really care about 8 cores on a laptop at this stage, but I certainly do want a 64 core desktop machine! Intel started experimenting with 40+ core CPU's years ago. They're just milking the market for all it's worth. Sandy Bridge's i7's should have been either 6 or 8 core by default already, and Ivy Bridge i7's 8 to 12 core.[/citation]

You're either crazy or using info incorrectly.

40+core CPUs tested by Intel were not conventional desktop CPUs, but something very different. Each core had crap x86 performance. If you want something like them, have a look at Xeon Phi.

There's no good reason for affordable Sandy i7s to have so many cores and in fact there are six-core Sandy i7s anyway. The i7-3930K, 3960X, and 3970X are all six core CPUs using the Sandy Bridge micro-architecture.

Ivy will probably have six-core i7s later on and maybe also eight-core i7s, but expecting eight to twelve cores is beyond ridiculous. WTF are you going to do with them that you wouldn't be better off getting a Xeon for? So many cores would need low clock frequencies (the best twelve core model probably couldn't even break 2GHz by much without at least a 150W TDP, if even then) just to have practical power consumption and Intel would need to not use the crap paste used by the current Ivy CPUs or else they'd need to have ridiculously low frequencies to avoid overheating.

64 Ivy cores would need to be running at like 400-700MHz. They'd have extremely poor per core performance and would be more like a crap GPGPU than a CPU. It'd be stupid for most users. Heck, I don't even know of any consumer software that can effectively scale across 64 cores and considering Hyper-Threading, make that 64 dual-threaded cores.
 

__Miguel_

Distinguished
Jun 4, 2011
121
0
18,710
[citation][nom]agnickolov[/nom]You, and everybody here, are missing is quality journalism. The article should have explained that the so called 8-core CPU in reality has only 4 simultaneously active cores, the other 4 are always dormant / switched off. So as far as applications are concerned this is a quad-core CPU.[/citation]
Thank you for pointing that out for the other people!

People (and journalists) seem to forget that this 8-core CPU is actually more like a Tegra 3 (in terms of how it works) than a true 8-core beast, which doesn't even make sense on a mobile device right now... I mean, there aren't that many pieces of software able to scale that high even on the desktop side (and most of those that can really scale well are clearly not mobile-oriented), let alone on a smartphone...

For those who missed the earlier explanation, this is not an 8-core CPU, it's a 4+4 one. It will have 4 high-performance cores, which will automatically shut down when not needed, giving way to 4 slower, less complex, power-sipping cores, maybe even built on a less power-hungry node (that's the way the Tegra 3 works: 4 high-power cores and 1 low-power one, on a different process).

I'm not exactly sure how or why 4 low-power cores would be needed, though. With most UI being offloaded to the GPU, 2 cores would probably be enough for most mundane tasks, including simple daily apps... It would save some die area, after all...

Unless it's to keep the governor simpler, since the slower cores are actually basically identical to the more powerful ones, minus more complex instructions. With a 4+4 approach, the governor in charge of powering cores on and off could actually reside in the CPU, making the implementation easier and completely transparent to the OS...
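
For what it's worth, a cluster-switching governor of the sort described above can be quite simple. Here is a minimal, purely illustrative sketch (Python); the thresholds, the hysteresis scheme, and the fake load source are all assumptions for the example, not ARM's or ZTE's actual implementation:

import random
import time

BIG_UP_THRESHOLD = 0.75       # move to the fast cluster above 75% utilization
LITTLE_DOWN_THRESHOLD = 0.30  # fall back to the slow cluster below 30% utilization

def read_load():
    # Stand-in for a real utilization sample averaged over the last interval.
    return random.random()

def governor(steps=10):
    cluster = "LITTLE"
    for _ in range(steps):
        load = read_load()
        # Different up/down thresholds (hysteresis) so we don't ping-pong between clusters.
        if cluster == "LITTLE" and load > BIG_UP_THRESHOLD:
            cluster = "big"
        elif cluster == "big" and load < LITTLE_DOWN_THRESHOLD:
            cluster = "LITTLE"
        print(f"load={load:.2f} -> running on the {cluster} cluster")
        time.sleep(0.1)

governor()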
 

ojas

Distinguished
Feb 25, 2011
2,924
0
20,810
[citation][nom]coder543[/nom]apparently people continue to speculate on this "eight" core phone.Do a little reading, eh? http://www.arm.com/products/proces [...] essing.php[/citation]
Hmmm... you know, I think ARM has hit a wall of sorts; they can't keep power low enough anymore with all four cores active, which is why they're switching to this model.

2013: big.LITTLE vs Haswell

Popcorn time!
 

EvilMonk

Distinguished
Mar 10, 2010
51
0
18,630
[citation][nom]ojas[/nom]Hmmm...you know i think ARM has hit a wall of sorts, they can't keep power low enough anymore with all four cores, i think, which is why they're switching to this model.2013: big.little vs HaswellPopcorn time![/citation]
Well, I don't think we can even compare big.LITTLE to Haswell, as both are aimed at totally different markets. Haswell is Intel's evolution of Ivy Bridge, optimised for use in Ultrabooks, not smartphones. The performance of Haswell will still be a lot better than big.LITTLE since they are aimed at different markets. You will never be able to pack a Haswell chip into a smartphone.
 
[citation][nom]EvilMonk[/nom]Well I don't think we can even compare big.LITTLE to Haswell as both are aimed at totally different markets. Haswell is Intel evolution of Ivy Bridge optimised for use in Ultrabook, not smartphone. The performances of Haswell will still be a lot better than big.LITTLE since they are both aimed at different markets. You will never be able to pack an Haswell chip into a smartphone.[/citation]

You most certainly could pack a Haswell chip into a smartphone. A native single or dual-core model with its cache made of, say, T-RAM instead of SRAM and a very low frequency would be able to fit in a smartphone just fine and kick ARM around while doing so. It undoubtedly won't be done, but that doesn't mean that it can't be done.
 

suoeno

Distinguished
Mar 8, 2010
25
0
18,530
How about an octo-core 48hr battery to go with that? Then maybe... just maybe all of this will make sense.
 

Pherule

Distinguished
Aug 26, 2010
591
0
19,010
[citation][nom]blazorthon[/nom]40+core CPUs tested by Intel were not conventional desktop CPUs, but something very different. Each core had crap x86 performance. If you want something like them, have a look at Xeon Phi.[/citation]
They had crap x86 performance back when they were tested. It's been years since then. I think Intel can make a 20+ core CPU now with decent performance, don't you?

[citation][nom]blazorthon[/nom]There's no good reason for affordable Sandy i7s to have so many cores and in fact there are six-core Sandy i7s anyway. The i7-3930K, 3960X, and 3970X are all six core CPUs using the Sandy Bridge micro-architecture.[/citation]
From your point of view there is no reason. From mine, there is plenty.
i7s are not affordable to the average customer.
Six-core i7s are definitely not affordable to the average customer.

[citation][nom]blazorthon[/nom]Ivy will probably have six-core i7s later on and maybe also eight-core i7s, but expecting eight to twelve cores is beyond ridiculous. WTF are you going to do with them that you wouldn't be better off getting a Xeon for?[/citation]
Xeons are even more expensive than i7s. What would I need a 20+ core i7 for, you ask? Fractal generation. Floating-point calculations. Brute-force work. I've done things that would require multiple supercomputers to complete. Obviously I was unable to complete them, but having a 64-core desktop machine would go a decent way further than your standard 4-core CPU.

[citation][nom]blazorthon[/nom]64 Ivy cores would need to be running at like 400-700MHz.[/citation]
Not if the CPU smartly throttles and controls whether individual cores are on or off. Need 4GHz? Simple. Turn off 60 cores, leave 4 cores on. Need massive multiprocessing power? Enable all 64 cores and throttle them down to 1.7GHz.

[citation][nom]blazorthon[/nom]Heck, I don't even know of any consumer software that can effectively scale across 64 cores and considering Hyper-Threading, make that 64 dual-threaded cores.[/citation]
Before dual cores were released, do you recall seeing software that scaled across more than one core? I don't think so. Software expands to utilize the hardware's full power once the hardware has been released. If 64-core CPUs were released, software would soon come out to make use of them.

Hyper-Threading on so many cores is pointless.
 
[citation][nom]Pherule[/nom]They had crap x86 performance back when they were tested. It's been years since then. I think Intel can make a 20+ core CPU now with decent performance, don't you?From your point of view there is no reason. From mine, there is plenty.i7's are not affordable to the average customer6 core i7's are definitely not affordable to the average customerXeon's are even more expensive than i7's. What would I need a 20+ core i7 for you ask? Fractal generation. Floating point calculations. Brute force work. I've done things that would require multiple supercomputers to complete. Obviously I was unable to complete them, but having a 64 core desktop machine would go a decent way further than your standard 4 core CPU.Not if the CPU smartly throttles and controls whether individual cores are on or off. Need 4Ghz? Simple. Turn off 60 cores, leave 4 cores on. Need massive multiprocessing power? Enable all 64 cores and throttle them down to 1.7Ghz.Before dual cores were released, do you recall seeing software that scaled for more than one core? I don't think so. The software expands to utilize hardware's full power once the hardware has been released. If 64 core CPU's were to be released, software would soon be brought out to make use of them.Hyper threading on so many cores is pointless.[/citation]

No, those high-core count CPUs still have crap x86 performance per core even today. They are not like conventional x86 CPUs.

A 20+ core count CPU that is using a standard architecture and such would still need to run at a very low frequency. Even Intel's ten-core Ivy Bridge P models don't break 2GHz IIRC. A twenty core model would be closer to 1GHz (assuming that you don't increase L3 cache capacity, that'd mean even lower frequency to not have ridiculous power consumption) and a 64 core model wouldn't be likely to break 500MHz by too much with all cores enabled.
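
Rough arithmetic behind that frequency ceiling, as a sketch: assume a fixed socket power budget and a made-up, roughly linear watts-per-core-per-GHz figure; real silicon is worse at the top end because voltage has to rise with clock speed, so treat this as illustration only:

TDP_WATTS = 130               # illustrative socket budget
UNCORE_WATTS = 30             # cache, memory controller, I/O (invented figure)
WATTS_PER_CORE_PER_GHZ = 5.0  # invented figure for a big out-of-order x86 core

core_budget = TDP_WATTS - UNCORE_WATTS
for cores in (6, 10, 20, 64):
    print(f"{cores:2d} cores -> roughly {core_budget / (cores * WATTS_PER_CORE_PER_GHZ):.1f} GHz each")
# 6 -> ~3.3 GHz, 10 -> ~2.0 GHz, 20 -> ~1.0 GHz, 64 -> ~0.3 GHz, with all cores active.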

These dies would be incredibly huge and expensive to make. They'd be very impractical for a CPU and would have huge prices to make up for the huge cost of manufacturing. Such huge dies would also mean that few would pass binning well at all, exponentially increasing cost and limiting them in performance even more.

Think about it. The die used in the ten-core Xeons is already huge for a CPU and in the range of large GPU die sizes. One with more than six times more of the same cores might not even be possible to manufacture with current technology, let alone practical, especially if you consider the increased L3 cache needs of that many cores.

Even today, most software can't scale too well across more than four to eight cores, especially consumer software, much of which is still only lightly threaded. About a decade after the first dual-core/dual-thread CPUs, we still have a lot of single threaded or nearly single threaded software, granted this is improving. Much consumer software can't even be that parallelized.

Hyper-Threading on so many cores is not pointless. In fact, it would be all the more important for your reasoning for such CPUs.

Throttling more than 90% of the die down is a huge waste for any time when it is doing such throttling and it's probably not even possible to make such a chip that can run at a decent frequency regardless of how many cores are in use while having that much area. It'd need GPU-like design tools and that limits it to around 1-2GHz for the most part.

Furthermore, the workloads that you mentioned are all much better done on a GPU or a Xeon Phi anyway. They don't make sense to be done on CPUs when there are far superior options for such work available.
 

Guest

Guest
Linux runs the supercomputers, and those have far more than 64 cores, so multi-core support should not be a problem for tablet/phone OSes; the main problem with multi-core CPUs is the interconnect between the CPUs!
 
[citation][nom]LinuxManyCPUsNoProblem[/nom]Linux runs the supercomputers and they have much more than 64 cores, so multicore support should not be a problem for tablet/phone OSs, the main problem with multicore CPUs is the interconnect between the CPUs![/citation]

Smartphones and tablets generally aren't being used for supercomputer work. Most apps are single-threaded, with some dual-threaded and the rare app that goes beyond that. Furthermore, the interconnect between cores is not an issue for Intel or for any of the smartphone and tablet CPUs at all. They don't have multiple CPUs, just multiple CPU cores on the same chip.

Supercomputers can have issues with their interconnects, but they're using a huge amount of discrete hardware, not a single integrated SoC.
 

ojas

Distinguished
Feb 25, 2011
2,924
0
20,810
[citation][nom]EvilMonk[/nom]Well I don't think we can even compare big.LITTLE to Haswell as both are aimed at totally different markets. Haswell is Intel evolution of Ivy Bridge optimised for use in Ultrabook, not smartphone. The performances of Haswell will still be a lot better than big.LITTLE since they are both aimed at different markets. You will never be able to pack an Haswell chip into a smartphone.[/citation]
Haswell's an architecture implementation; you can take it wherever you like. They'll implement it in their Atom SoCs as well; for example, look at the Valleyview-T chips coming up: four cores, 22nm, Intel IGP. Most likely Haswell chips. Anand did a nice article on this about a month or so ago, after IDF.
Here:
http://www.anandtech.com/show/6355/intels-haswell-architecture
 

billyboy999

Distinguished
Apr 13, 2011
34
0
18,530
[citation][nom]Pherule[/nom]Not if the CPU smartly throttles and controls whether individual cores are on or off. Need 4Ghz? Simple. Turn off 60 cores, leave 4 cores on. Need massive multiprocessing power? Enable all 64 cores and throttle them down to 1.7Ghz.[/citation]

Relating this back to the article... you are correct, but big.LITTLE is taking this way further. What they are trying to take advantage of is that even if you scale back the frequency on the more powerful cores, you can't be as efficient as the lower-power cores at their level of performance. Consider this (I am making up numbers here): if I want 1 GFLOPS of performance, I can use the powerful CPU for 5W of power. However, if I only need 100 MFLOPS, I can use the lower-power cores at 1W. The reason is that even if I scaled back the frequency of the powerful CPU until it only cranks out 100 MFLOPS, it might still consume 2W.
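
Turning those made-up numbers into a tiny perf-per-watt comparison (same invented figures as the example above, nothing measured):

# Invented operating points, matching the example above.
points = {
    "big core, full speed": {"mflops": 1000, "watts": 5.0},
    "big core, throttled":  {"mflops": 100,  "watts": 2.0},
    "LITTLE core":          {"mflops": 100,  "watts": 1.0},
}
for name, p in points.items():
    print(f"{name:22s}: {p['mflops'] / p['watts']:.0f} MFLOPS per watt")
# 200 vs 50 vs 100: at light load the LITTLE core is twice as efficient as the throttled
# big core, but only the big core can deliver the 1 GFLOPS peak when it's actually needed.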

This concept isn't new, btw. There was a MacBook at some point that had two GPUs: a powerful one and a less powerful one.
 

Pherule

Distinguished
Aug 26, 2010
591
0
19,010
[citation][nom]blazorthon[/nom]No, those high-core count CPUs still have crap x86 performance per core even today. They are not like conventional x86 CPUs.[/citation]We don't know that. Neither of us knows exactly what is going on inside Intel's deepest labs today.

[citation][nom]blazorthon[/nom]
A 20+ core count CPU that is using a standard architecture and such would still need to run at a very low frequency. Even Intel's ten-core Ivy Bridge P models don't break 2GHz IIRC. A twenty core model would be closer to 1GHz (assuming that you don't increase L3 cache capacity, that'd mean even lower frequency to not have ridiculous power consumption) and a 64 core model wouldn't be likely to break 500MHz by too much with all cores enabled.[/citation]Afaik, Ivy Bridge P still uses 32nm. Scale that down to 22nm or 14nm. Swap back the decent cooling material Sandy Bridge had. Put the maximum power draw at 150W, and I'm sure you could easily reach 2+ GHz on 20+ cores. Don't forget that Xeon processors are designed for longevity, not raw speed like an OC'd gaming machine might have. I'm sure that an unlocked Xeon CPU (if such a thing existed) could be substantially overclocked, even on air cooling.

[citation][nom]blazorthon[/nom]These dies would be incredibly huge and expensive to make.[/citation]Size does not matter, as long as it can fit on the motherboard. Expense is negated by mass production.

[citation][nom]blazorthon[/nom]Such huge dies would also mean that few would pass binning well at all, exponentially increasing cost and limiting them in performance even more.[/citation]How do you figure that? If some cores are inoperable, Intel does what they have always done: shut off the non-working cores and sell the CPU at a lower price, advertised with fewer cores.

[citation][nom]blazorthon[/nom]Think about it. The die used in the ten-core Xeons is already huge for a CPU and in the range of large GPU die sizes. One with more than six times more of the same cores might not even be possible to manufacture with current technology, let alone practical, especially if you consider the increased L3 cache needs of that many cores.[/citation]That's assuming Intel isn't holding back, which I very much think they are.

[citation][nom]blazorthon[/nom]Even today, most software can't scale too well across more than four to eight cores, especially consumer software, much of which is still only lightly threaded. About a decade after the first dual-core/dual-thread CPUs, we still have a lot of single threaded or nearly single threaded software, granted this is improving. Much consumer software can't even be that parallelized.[/citation]Once software goes multi-process, it doesn't matter if you have 2 cores or 20 cores; it should easily scale across all of them. Take an example: Google Chrome uses multi-processing, putting each tab in its own process. Open 20 tabs in Chrome and you can already potentially use all 20 cores.
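
As a generic illustration of that process-per-task idea (a sketch only; this is not Chrome's actual architecture, just independent OS processes that the scheduler spreads across however many cores exist):

import math
from multiprocessing import Process, cpu_count

def tab_work(n):
    # Stand-in for one "tab's" CPU-bound work (parsing, layout, script, etc.).
    sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    tabs = [Process(target=tab_work, args=(2_000_000,)) for _ in range(20)]
    for t in tabs:
        t.start()   # the OS scheduler spreads the 20 processes over all available cores
    for t in tabs:
        t.join()
    print(f"ran 20 independent processes on {cpu_count()} logical CPUs")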

[citation][nom]blazorthon[/nom]Hyper-Threading on so many cores is not pointless. In fact, it would be all the more important for your reasoning for such CPUs.[/citation]Hyper-Threading provides the most benefit on a single-core CPU. I've done multiple tests on an i3 with Hyper-Threading enabled and disabled. The largest difference was perhaps 5%. That's a dual-core. I'd imagine the more cores you bring to the playing field, the greater the diminishing returns Hyper-Threading would have. It is my belief that Hyper-Threading is near useless. That's why I bought an i5 instead of an i7. If I had bought an i7, I would have disabled the Hyper-Threading.

[citation][nom]blazorthon[/nom]Throttling more than 90% of the die down is a huge waste for any time when it is doing such throttling[/citation]Again, mass production negates the theory of waste.

[citation][nom]blazorthon[/nom]Furthermore, the workloads that you mentioned are all much better done on a GPU or a Xeon Phi anyway. They don't make sense to be done on CPUs when there are far superior options for such work available.[/citation]What workloads? Floating-point operations? Current CPUs are not optimized like GPUs are, but they should be!

-

[citation][nom]MultiCoreRISC[/nom]Pherule, do not forget there is a RISC multicore fully funded Kickstarter project for much less cost than Intel's Xeon Phi: http://www.kickstarter.com/project [...] r-everyone[/citation]Interesting read, but can they compete with Intel? I would imagine they'd have a long way to go before they could reach that level.

-

[citation][nom]billyboy999[/nom]...
The reason is because even if I scaled back frequency of the powerful CPU until it only cranks out 100MFlops, it might still consume 2W. [/citation]But the lower-power CPU would ONLY be capable of handling 100 MFLOPS. Even if the higher-power CPU does draw somewhat more power at low load, the fact remains that when you need that 1 GFLOPS, you can have it, and that's what's important. The same can't be said for the lower-power CPU.
 
[citation][nom]Pherule[/nom]We don't know that. Neither of us knows exactly what is going on inside Intel's deepest labs today.Afaik, Ivy Bridge P still uses 32nm. Scale that down to 22nm or 14nm. Swap back the decent cooling material Sandy Bridge had. Put the maximum power draw to 150W, and I'm sure you could easily reach 2+ Ghz on 20+ cores. Don't forget that Xeon processors are designed for longetivity, not raw speed like an OC'd gaming machine might have. I'm sure that an unlocked Xeon CPU (if such a thing existed) could be substantially overclocked, even on air cooling.Size does not matter, as long as it can fit on the motherboard. Expense is negated by mass production.How do you figure that? If some cores are inoperable, Intel does what they have always done. Shut off the non-working cores and sell the CPU at a lower price, advertised with less cores.That's assuming Intel isn't holding back, which I very much think they are.Once you go multi-process software, it doesn't matter if you have 2 cores or 20 cores. The software should easily scale across all of them. Take an example: Google Chrome uses multi-processing, putting each tab in its own process. Open 20 tabs in Chrome and you can already potentially use all 20 cores.Hyper-Threading provides the most benefit on a single core CPU. I've done multiple tests on an i3 with hyper-threading enabled and disabled. The largest difference was perhaps 5%. That's a dual core. I'd imagine the more cores you bring to the playfield, the greater the diminishing returns hyper-threading would have. It is my belief that hyper-threading is near useless. That's why I bought an i5 instead of an i7. If I had've bought an i7, I would have disabled the hyper-threading.Again, mass production negates the theory of waste. What workloads? Floating point operations? Current CPU's are not optimized like GPU's are, but they should be!-Interesting read, but can they compete with Intel? I would imagine they'd have a long way to go before they could reach that level.-But the lower power CPU would ONLY be capable of handling 100MFlops. Even if the higher power CPU may draws slightly more power at lower levels, fact remains, when you need that 1GFlop, you can have it, and that's what's important. The same can't be said for the lower power CPU.[/citation]

It'd be so big that it CAN'T be mass-produced cheaply! There is a limit to how big a die can be before mass production becomes infeasible, and that'd be near your twenty-core model. If Ivy Bridge P is on 32nm, it's because it couldn't be done on 22nm with current technology. On that same note, mass production wouldn't eliminate the waste even if it could be done. When you're wasting more than 90% of all produced die area, you're wasting money to an extreme. AMD wastes over 30% of every die with their FX-41xx CPUs, and that was barely profitable. Shutting off nearly the entire die and selling it as a much lower-core-count part isn't going to be profitable; it'd be selling at an extreme loss. Furthermore, binning wouldn't just be bad in the sense of faults. The entire die would have inconsistent silicon quality due to its great size, and the frequencies would all be limited by the weakest links in the CPU's silicon...

Google Chrome does not put every single tab in its own process. It puts tab groups in single processes. Yeah, I've checked that. Furthermore, going multi-process and such has disadvantages. It means more memory unless you have master processes to manage it all and that then brings scaling in performance down.

You're tests are irrelevant because you don't understand what Hyper-Threading does. It doesn't work perfectly with all workloads. Those that can take advantage of it can have up to about 30% performance improvements on average regardless of the CPU's core count. You didn't see much difference because the i3s are already so fast that Hyper-Threading is not only not given a chance to be used, but some of the workload is also potentially being passed off to the graphics as well.
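
One rough way to actually measure that is to compare the throughput of a CPU-bound process pool sized to the physical cores against one sized to all logical CPUs. A sketch, with the assumptions that you fill in PHYSICAL_CORES by hand for your own chip and that the gain you see depends heavily on the workload:

import math
import os
import time
from multiprocessing import Pool

PHYSICAL_CORES = 2             # set to your CPU's physical core count (e.g. 2 for an i3)
LOGICAL_CPUS = os.cpu_count()  # includes the extra Hyper-Threaded logical CPUs

def crunch(n):
    # Pure CPU-bound busywork with no I/O, so spare hardware threads can be exercised.
    return sum(math.sqrt(i) for i in range(n))

def jobs_per_second(workers, jobs=16, size=3_000_000):
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(crunch, [size] * jobs)
    return jobs / (time.perf_counter() - start)

if __name__ == "__main__":
    base = jobs_per_second(PHYSICAL_CORES)
    with_ht = jobs_per_second(LOGICAL_CPUS)
    print(f"physical cores only: {base:.2f} jobs/s, all logical CPUs: {with_ht:.2f} jobs/s "
          f"({(with_ht / base - 1) * 100:+.0f}%)")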

Your belief on Hyper-Threading is irrelevant because you don't understand it. Again, mass production does not get rid of waste. Re-using damaged dies comes with a huge profitability drop; they are only used as a stop-gap to avoid a total loss on the die and do not make much profit at all. That is part of why Intel and AMD usually use multiple die masks for different performance levels. For example, Ivy Bridge on the consumer side alone has four or five different masks. Going into GPUs, we see that AMD has three GCN masks and two VLIW5 masks that are still in use, in addition to one or two Trinity masks that have VLIW4 GPUs. Nvidia has even more.

Current CPUs aren't optimized like GPUs are because that'd take huge die area and would limit the frequency to less than ~2GHz due to the nature of GPU optimization.

Tell me, what tests have you done on Hyper-Threading? To get mere 5% gains implies that you're doing something wrong. Intel still uses Hyper-Threading because it works, not because it's a gimmick. I know this from my own tests, experience, and because I rely on it in some of my work.

Basic summary: such a large CPU die is not feasible. The only reason that GPUs can get so large is because they are designed optimally for low power consumption, but those same optimizations limit their frequency because they arrange components to save power at the cost of maximum performance. You couldn't just throttle 60 cores and Turbo four of them to 4GHz. They wouldn't even work at such a high frequency, and even if they did, the front end wouldn't be able to feed them and performance would still be bad because of it.

Also, I don't need to be in Intel's labs to say what I've said because THEY'VE TOLD US. The products with many cores use very basic architectures, such as Xeon Phi. Even if they hadn't, I'm not so unfamiliar with the topic that I don't recognize the limiting factors. To have so many cores, they use very simple cores, similar to a GPU, but not quite as simple. This isn't an everything-is-possible world. There are limits to what can be done with current technology and the technology of the next several years. What you ask for is not feasible, and it wouldn't even have much well-supporting software until five to twenty years from now anyway, if even then.

Intel isn't milking us anyway. They're simply trying to not surpass AMD by so much that they get attacked with anti-trust lawsuits from around the world for being a monopoly. That's a great part of why they have been focusing on power consumption instead of performance lately, granted it's not the only reason.
 

Pherule

Distinguished
Aug 26, 2010
591
0
19,010
Mass production can make just about anything cheap. Yes, including a 20 core CPU.

Intel's new extreme range classed under Ivy Bridge is still 32nm

Google Chrome not only puts tabs (or tab groups) in separate processes, but extensions and plugins as well. Yes, it uses more memory, but that's Chrome. Other software won't necessarily have that issue, because it'll likely be number-crunching tools, not content generation (a browser).

[citation][nom]blazorthon[/nom]You're tests are irrelevant[/citation]It's your, not you're - and my tests are relevant. I have proven to myself that hyper-threading on a CPU that isn't single core is near useless; I don't need to prove it to you as well.

[citation][nom]blazorthon[/nom]Intel isn't milking us anyway. They're simply trying to not surpass AMD by so much that they get attacked with anti-trust lawsuits from around the world for being a monopoly. That's a great part of why they have been focusing on power consumption instead of performance lately, granted it's not the only reason.[/citation]You see, this is the problem. They're expanding in one direction instead of in both directions. They should be improving power consumption without slacking off on performance increases.

What good is a 35W CPU in 2014 that is 30% more powerful than a 95W CPU from 2010, assuming both are top of the range at their respective times? Give the 2014 CPU the ability to draw full 125W, increase core count and performance accordingly, and it will be a reasonable upgrade from current processors.

Do you think I want to be sitting on a 10W quad core CPU in the year 2025? Hell no, I'd expect something more like a 30 - 120 core 95W or 125W CPU.
 
[citation][nom]Pherule[/nom]Mass production can make just about anything cheap. Yes, including a 20 core CPU.Intel's new extreme range classed under Ivy Bridge is still 32nmGoogle Chrome not only puts tabs (or tab groups) in separate processes, but extensions and plugins as well. Yes it scales more memory, but that's Chrome. Other software won't neccesarily have that issue, because they'll likely be number crunching tools, not content generation (a browser)It's your, not you're - and my tests are relevant. I have proven to myself that hyper-threading on a CPU that isn't single core is near useless; I don't need to prove it to you as well.You see, this is the problem. They're expanding in one direction instead of in both directions. They should be improving power consumption without slacking off on performance increases.What good is a 35W CPU in 2014 that is 30% more powerful than a 95W CPU from 2010, assuming both are top of the range at their respective times? Give the 2014 CPU the ability to draw full 125W, increase core count and performance accordingly, and it will be a reasonable upgrade from current processors.Do you think I want to be sitting on a 10W quad core CPU in the year 2025? Hell no, I'd expect something more like a 30 - 120 core 95W or 125W CPU.[/citation]

Mass production can't make something cheap when it can't be mass-produced.

What you want doesn't matter to Intel because Intel doesn't want to drive themselves out of business to cater to one customer.

Other software that uses the same multi-process method as Chrome will have the same weaknesses. It takes more memory because necessary instructions and/or data aren't being shared, so each process must replicate a lot of instructions and/or data. The concept of using master processes to manage such data is how you can alleviate this issue, but again, it comes at the cost of performance scaling on current CPUs. This can be optimized for in hardware, but that is unlikely to be done except on platforms that have both the software and the hardware managed by the same company, or at least by companies that are working together very closely.
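
A minimal sketch of the "share one copy of the data instead of replicating it per process" idea, using Python's standard shared-memory facility (the block size and contents are arbitrary; this illustrates the concept, not any particular browser's design):

from multiprocessing import Process, shared_memory

def worker(shm_name, start, stop):
    # Attach to the existing block; the data is mapped into this process, not copied.
    shm = shared_memory.SharedMemory(name=shm_name)
    print(f"bytes [{start}:{stop}] sum to {sum(shm.buf[start:stop])}")
    shm.close()

if __name__ == "__main__":
    size = 1_024_000
    shm = shared_memory.SharedMemory(create=True, size=size)  # one copy, visible to every worker
    shm.buf[:] = bytes(range(256)) * (size // 256)            # fill it with some data
    chunk = size // 4
    workers = [Process(target=worker, args=(shm.name, i * chunk, (i + 1) * chunk))
               for i in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    shm.close()
    shm.unlink()  # release the shared block once all workers are done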

What you want goes against what most of the industry wants and isn't even reasonably possible. I've already explained why many cores like that are both unfeasible and unreasonable, so I won't go into that again. I'll add to it that the general industry wants lower power consumption. This is a huge drive in the computer industries and companies are often given incentives to follow through with it.

By 2025, six- or eight-core CPUs would probably be common in consumer desktops and laptops, assuming that they're still popular devices at the time, not still just quad-core parts.

Power consumption is unlikely to go down to the point of a 35W CPU being a top of the range Intel model in a desktop. That's more likely for a mobile part.

Furthermore, even Sandy Bridge CPUs are generally around 20-40% more powerful than 2010 Westmere counterparts at the 95W TDP and consume less power, so you've managed to exaggerate every one of your points to ridiculous and inaccurate extremes.

If core count is so important, then why is it that Intel's quad-core i7s still generally beat AMD's eight-core FX CPUs? Even better, if Hyper-Threading is so unimportant, then why is it that the quad-core i5s are beaten by all of AMD's eight-core models, except maybe the FX-8100, in highly threaded performance, yet the i7s which are nearly identical to the i5s except for Hyper-Threading are able to beat those same eight-core CPUs from AMD? Hyper-Threading takes i5s that lost by good margins into i7s that win by good margins. Heck, look into Tom's recent FX-8350 review for very thorough proof of it if you don't believe me. Any other review site will say the same thing. There are no exceptions. For work that can use enough threads, Hyper-Threading can provide a 10% to over 30% performance improvement regardless of core count.
 

ojas

Distinguished
Feb 25, 2011
2,924
0
20,810
[citation][nom]blazorthon[/nom]If core count is so important, then why is it that Intel's quad-core i7s still generally beat AMD's eight-core FX CPUs? Even better, if Hyper-Threading is so unimportant, then why is it that the quad-core i5s are beaten by all of AMD's eight-core models, except maybe the FX-8100, in highly threaded performance, yet the i7s which are nearly identical to the i5s except for Hyper-Threading are able to beat those same eight-core CPUs from AMD? Hyper-Threading takes i5s that lost by good margins into i7s that win by good margins. Heck, look into Tom's recent FX-8350 review for very thorough proof of it if you don't believe me. Any other review site will say the same thing. There are no exceptions. For work that can use enough threads, Hyper-Threading can provide a 10% to over 30% performance improvement regardless of core count.[/citation]
Just to add a bit of data here: the i7s do have a larger L3 cache, so I guess that helps their case too.
Come to think of it, I don't know when Intel hasn't changed thread count and LLC size together.

Anyway, @Pherule, blazorthon is generally right here about HT. It's not going to make up for a full core (which is why Xeon > i7E > i7 > i5 > i3 > Pentium given a task that can utilize all threads), but it does help to an extent; 10-30% should be pretty accurate.

P.S. I'd rather take a 10W quad-core that performs as well as a 100W quad-core. It opens up tremendous possibilities, cuts energy use, and is more green. Plus, smaller dies use fewer resources to make (I think; it seems to be the intuitive conclusion), so that's good for the environment too. What we do with all the then-decommissioned 100W quad-cores is, however, another, possibly bigger problem...
 
[citation][nom]ojas[/nom]Just to add a bit of data here, the i7s do have a larger L3 cache, so i guess that does help their case too.Come to think about it, i don't know when intel hasn't changed thread count and LLC size together.Anyway, @Pherule, blazorthon is generally right here about HT, it's not going to make up for a core (which is why a Xeon>i7E>i7>i5>i3>Pentium given a task that can utilize all threads) but it does help to an extent, 10-30% should be pretty accurate.p.s. i'd rather take a 10W quad core that performs as well as a 100W quad core. opens up tremendous possibilities, cuts energy use, more green. Plus smaller dies use less resources to make (i think, seems to be the intuitive conclusion) so that's good for the environment too. What we do with all the then would be decommissioned 100W quad cores is, however, another possibly bigger problem...[/citation]

The cache is shown to make a nearly zero difference by comparing them to the LGA 2011 i7s which have much more cache that still makes almost no difference even in stuff such as gaming which is notorious for loving cache performance and capacity. Going from a 6MiB L3 to an 8MiB L3 simply didn't change much. The cache argument might be much more true for the i3s versus the i5s, but the i5s and i7s simply aren't separated by it. Even the minor frequency difference tends to be a larger performance difference and it's still nearly nothing as proven by i7s only actually beating i5s in work that can use more than four threads effectively.
 

ojas

Distinguished
Feb 25, 2011
2,924
0
20,810
[citation][nom]blazorthon[/nom]The cache is shown to make a nearly zero difference by comparing them to the LGA 2011 i7s which have much more cache that still makes almost no difference even in stuff such as gaming which is notorious for loving cache performance and capacity. Going from a 6MiB L3 to an 8MiB L3 simply didn't change much. The cache argument might be much more true for the i3s versus the i5s, but the i5s and i7s simply aren't separated by it. Even the minor frequency difference tends to be a larger performance difference and it's still nearly nothing as proven by i7s only actually beating i5s in work that can use more than four threads effectively.[/citation]
Hmmm... aren't cache blocks tied to particular cores (is that why they say "cache per core")? If that's true, then if the cores aren't being used, the cache wouldn't be used either, right?
 