Intel’s Comet Lake-S Could Push Even More People Toward AMD

Aaron44126

Reputable
Aug 28, 2019
20
19
4,515
I know it is early, but what do we know about the Comet Lake successor, Rocket Lake? Isn't this also supposedly a 14nm CPU (with 10nm graphics attached)? I wonder how long Intel will drag this on for.

I'm interested in upgrading soon but I deal with workstation-class laptops (i.e. Dell Precision) where AMD doesn't really have a presence. I sort of hate to buy a new system which is fifth-iteration Skylake or whatever, planning to keep it as my daily driver for several years, only to have Intel turn around and finally release something new one generation later.
 

Lex-Corp

Commendable
Jan 11, 2017
81
0
1,640
Hi there

I think Intel's next mainstream processors won't really be aimed at gaming; I'm pretty sure they'll all be business-class CPUs with those core counts. So I think AMD will now be the go-to for gaming processors.
 
Feb 14, 2019
40
12
35
Disappointed again!! They have dragged out the 14nm++ process for way too long. Over three years late and still not providing the updated architecture/process I had been waiting for. They create a new socket for basically nothing over the previous processors. I guess not enough folks got the boot for this.
 

salgado18

Distinguished
Feb 12, 2007
928
373
19,370
They have been working* on a brand new architecture since 2017, when Zen 1 caught them off guard. But it's going to take some time to roll out, so they are just buying time until then. Maybe by 2021 we'll see a revolutionary new Intel processor? How will it fare against Zen 3 and 4 by then?

*my opinion, I have no facts to back that claim. But it does seem logical.
 
Feb 14, 2019
40
12
35
They have been working* on a brand new architecture since 2017, when Zen 1 caught them off guard. But it's going to take some time to roll out, so they are just buying time until then. Maybe by 2021 we'll see a revolutionary new Intel processor? How will it fare against Zen 3 and 4 by then?

*my opinion, I have no facts to back that claim. But it does seem logical.
They have most likely been "working" on a new arch since the last arch update. Once you finish an arch, you start looking into a new/updated one. But it's been much longer than normal, probably due to all the security issues.
 

kinggremlin

Distinguished
Jul 14, 2009
574
41
19,010
They have most likely been "working" on a new arch since the last arch update. Once you finish an arch, you start looking into a new/updated one. But it's been much longer than normal, probably due to all the security issues.
The architecture is delayed because the 10nm process it was designed for has been delayed. Once it was obvious that 10nm was going to be years late, the architecture got stuck in limbo, because it isn't possible to just backport it to 14nm.
 
I'm interested in upgrading soon but I deal with workstation-class laptops (i.e. Dell Precision) where AMD doesn't really have a presence. I sort of hate to buy a new system which is fifth-iteration Skylake or whatever, planning to keep it as my daily driver for several years, only to have Intel turn around and finally release something new one generation later.
That's what Threadripper is about, I think. You can get a nimble 16-core chip today or a quirky 32-core, and then replace it with a Zen 2 part when it's out...
 

bit_user

Polypheme
Ambassador
If Intel is going to add 25% more cores and (presumably) increase clock speeds to some extent while sticking to its well-worn 14nm process node, that reported 125W TDP rating is likely conservative in the extreme. After all, at stock speeds, we saw the “95W” 9900K consume upwards of 145W at stock settings when running Blender.
Nobody has an excuse to keep writing such ignorance. Read this.

https://www.anandtech.com/show/13544/why-intel-processors-draw-more-power-than-expected-tdp-turbo

If the motherboard he used had followed Intel's guidelines for turbo boost parameters, he wouldn't have registered 145 W in that benchmark. It would've settled around (probably just above) the specified 95 W.

Granted, TDP ratings are a measurement of required heat dissipation at base frequencies, not power consumption. But if the 8-core 9900K is any indication, its 10-core replacement running on effectively the same architecture and process node is going to need some serious cooling, even at stock settings.
Again, more ignorance. 10 cores are not inherently hotter than 8 - not if you set the base clocks lower. The higher TDP is just Intel giving the CPU more room to stretch its legs. But they have server CPUs with even more cores and even lower TDPs - they just clock them lower.
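
To make that concrete, here's a toy model of the PL1/PL2/tau scheme described in that article (my own sketch; the 119 W and 28 s figures are roughly Intel's recommended guidance for the 9900K, not guaranteed board defaults, and boards are free to override them):

Code:
// Toy model of Intel's turbo power limits (PL1 / PL2 / tau).
// Illustrative only: real firmware tracks an exponentially weighted
// moving average of package power, and many boards simply override these limits.
#include <cstdio>

int main() {
    const double PL1 = 95.0;   // long-term limit = TDP (watts)
    const double PL2 = 119.0;  // short-term turbo limit (watts), ~1.25 x TDP
    const double tau = 28.0;   // averaging window (seconds)

    double avg = 0.0;          // running average of package power
    for (int t = 1; t <= 60; ++t) {
        // while the average stays under PL1, the CPU may draw up to PL2
        double draw = (avg < PL1) ? PL2 : PL1;
        avg += (draw - avg) / tau;   // EWMA update, one-second steps
        std::printf("t=%2ds  draw=%.0f W  avg=%.1f W\n", t, draw, avg);
    }
    return 0;
}

Run it and the modeled draw starts at PL2 but settles just above the 95 W PL1, which is exactly how a guideline-following board would behave under a sustained Blender-style load.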
 
That's what Threadripper is about, I think. You can get a nimble 16-core chip today or a quirky 32-core, and then replace it with a Zen 2 part when it's out...

Uh, is it even possible to put a 16c/32t CPU into a relatively light and compact laptop and have it run at over 3.5 GHz all-core?
The i7-8850H, with its 6c/12t, has a base clock of only 2.6 GHz, and one could wonder how long it could sustain 4.3 GHz.
It's pretty much the same problem most high-end gaming laptops have: lots of potential power but nowhere to get rid of all the heat, which is why we see desktops outperforming much more expensive laptops.
 

d3bug

Commendable
Aug 29, 2019
9
5
1,515
I know it is early, but what do we know about the Comet Lake successor, Rocket Lake? Isn't this also supposedly a 14nm CPU (with 10nm graphics attached)? I wonder how long Intel will drag this on for.

I'm interested in upgrading soon but I deal with workstation-class laptops (i.e. Dell Precision) where AMD doesn't really have a presence. I sort of hate to buy a new system which is fifth-iteration Skylake or whatever, planning to keep it as my daily driver for several years, only to have Intel turn around and finally release something new one generation later.

What about the Dell Latitude 5495 with Ryzen PRO?
That is an AMD-based Dell workstation-class laptop.

As for desktops (workstation class): the Dell OptiPlex 5055 MT/SFF is available.

Those seem like viable, competitive options to me. Granted, there is nowhere near the same selection on the market at present, because Intel has had a stranglehold on the business market for decades. I think that is likely to change soon thanks to AMD's now-superior performance per price in both the consumer and business markets. Where they are really shining right now is the EPYC line in particular.
 
"Of course, not everything is Zen in the world of Ryzen. Shortages of AMD’s high-end parts persist nearly two months after their launch, our testing has shown that not all Ryzen 3000 cores can hit the top advertised speed of the CPU, and almost all X570 motherboards require active cooling -- something that hasn’t been common since the days of dedicated northbridge and southbridge chips. "

First off, the 3700X was not in shortage... and the 3900X is popular...? WHO WOULD HAVE GUESSED!?

Second, Hardware Unboxed PROVED that your motherboard was the BOTTLENECK! You have some nerve still clinging to this misinformation.

View: https://www.youtube.com/watch?v=o2SzF3IiMaE
 
  • Like
Reactions: bit_user

d3bug

Commendable
Aug 29, 2019
9
5
1,515
If you want to know more about how Intel takes unfair advantage and cripples performance of non-Intel CPUs... have a look HERE

Overriding the Intel CPU detection function
There are two versions of Intel's CPU detection function, one that discriminates between CPU brands, and one that does not. The undocumented Intel library function
__intel_cpu_features_init() sets the 64-bit integer variable __intel_cpu_feature_indicator, where each bit indicates a specific CPU feature on Intel CPUs. Another function, __intel_cpu_features_init_x(), does the same without discriminating between CPU brands, and similarly sets the variable __intel_cpu_feature_indicator_x. You can bypass the check for CPU brand simply by setting both of these variables to zero and then calling __intel_cpu_features_init_x(). Some functions may use an older version, __intel_cpu_indicator_init(), which sets the 32-bit variable __intel_cpu_indicator. This function favors Intel CPUs as well, with no fair alternative.

Code:
// Example 13.3. Override Intel CPU dispatching
#include <stdint.h>

extern "C" {

// link to dispatcher in library libirc.lib or libirc.a

void __intel_cpu_features_init();
void __intel_cpu_features_init_x();
extern uint64_t __intel_cpu_feature_indicator;
extern uint64_t __intel_cpu_feature_indicator_x;
}

// call this function in the beginning of your program:

void dispatch_patch() {

// replace the Intel-only dispatcher with the fair dispatcher

__intel_cpu_feature_indicator = 0;
__intel_cpu_feature_indicator_x = 0;
__intel_cpu_features_init_x(); // call fair dispatcher
__intel_cpu_feature_indicator = __intel_cpu_feature_indicator_x;
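
For what it's worth, here's a minimal usage sketch of my own (not part of the manual's example); the only requirement is that dispatch_patch() runs before any dispatched library code:

Code:
// hypothetical example: call the patch once, at program start
int main() {
    dispatch_patch();   // switch to the brand-agnostic dispatcher
    // ... rest of the program, now using fair CPU dispatch ...
    return 0;
}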
 
  • Like
Reactions: bit_user

BulkZerker

Distinguished
Apr 19, 2010
846
8
18,995
Nobody has an excuse to keep writing such ignorance. Read this.

https://www.anandtech.com/show/13544/why-intel-processors-draw-more-power-than-expected-tdp-turbo

If the motherboard he used had followed Intel's guidelines for turbo boost parameters, he wouldn't have registered 145 W in that benchmark. It would've settled around (probably just above) the specified 95 W.


Again, more ignorance. 10 cores are not inherently hotter than 8 - not if you set the base clocks lower. The higher TDP is just Intel giving the CPU more room to stretch its legs. But they have server CPUs with even more cores and even lower TDPs - they just clock them lower.
Nobody has an excuse to keep writing such ignorance. Read this.

https://www.anandtech.com/show/13544/why-intel-processors-draw-more-power-than-expected-tdp-turbo

If the motherboard he used had followed Intel's guidelines for turbo boost parameters, he wouldn't have registered 145 W in that benchmark. It would've settled around (probably just above) the specified 95 W.


Again, more ignorance. 10 cores are not inherently hotter than 8 - not if you set the base clocks lower. The higher TDP is just Intel giving the CPU more room to stretch its legs. But they have server CPUs with even more cores and even lower TDPs - they just clock them lower.


The damage control is amazing.

  1. Intel has not lifted a finger to dissuade any motherboard maker from the practice of aggressive boost clocks.
  2. Somehow, all of them have very similar performance, almost as if they are following a spec. (Them trying to pull a fast one by adjusting BCLK notwithstanding.)

It's been known for a while that Intel's TDP is specified at base clock.
 

bit_user

Polypheme
Ambassador
Um, why the double-quote?

The damage control is amazing.
  1. Intel has not lifted a finger to dissuade any motherboard maker from the practice of aggressive boost clocks.
  2. Somehow, all of them have very similar performance, almost as if they are following a spec. (Them trying to pull a fast one by adjusting BCLK notwithstanding.)
It's been known for a while that Intel's TDP is specified at base clock.
I'm not trying to defend Intel's mischaracterization of the power/performance of their CPUs. They definitely need to communicate this better.

What I am trying to do is alert people to the underlying reasons behind this discrepancy. Mr. Safford does his readers a disservice by over-simplifying the issue and not providing or directing people to the real explanation of what's going on.

As for the similarity of results across motherboards, that's because many are running with "unlimited turbo". Try actually reading the article, and you'll understand.

However, try putting one of these CPUs in a workstation motherboard and you'll see it respect its TDP under sustained loads. Likewise, I expect systems from large OEMs, like Dell and HP, to behave closer to Intel's recommendations.

By not acknowledging the role of the motherboard in this situation, you're failing to equip users of such systems with the knowledge that the motherboard might be holding them back. And user empowerment is really what I care about. Really, what good can come of these forums if not education?
 
Aug 29, 2019
35
7
35
Intel's new Comet Lake-S chips appear to be yet another disappointing update that will push users to AMD's Ryzen 3000-series chips.

Intel’s Comet Lake-S Could Push Even More People Toward AMD : Read more

One thing I cannot understand is the desire for more cores. I think manufacturers should increase the capabilities of each core instead of just applying a band-aid of more cores in place of new abilities.

More cores just seems to me like a cheap way out of the real problem.
 
  • Like
Reactions: bit_user
Aug 29, 2019
35
7
35
?

The guy says he needs a laptop, and you recommend Threadripper?


It just shows how people blindly assume that adding more cores is the answer - they overlook what users actually need. Not all people need lots of cores, and certainly not high-end GPUs, especially if they're only doing word processing and spreadsheets.
 

d3bug

Commendable
Aug 29, 2019
9
5
1,515
One thing I cannot understand is the desire for more cores. I think manufacturers should increase the capabilities of each core instead of just applying a band-aid of more cores in place of new abilities.

More cores just seems to me like a cheap way out of the real problem.
The "real problem" is CPUs are used nowhere near to capacity from a software standpoint. We just keep adding more cores, more speed, more RAM, more X,Y,Z... instead of coding properly. Additionally, Intel purposely cripples compilers for non-Intel CPUs to use non-optimal CPU Dispatching (non-optimal code) if it discovers it's a non-Intel CPU. So what you end up with is a performance disparity between Intel & non-Intel CPUs that should perform almost identically. THAT is the real problem.
 
  • Like
Reactions: bit_user

GetSmart

Commendable
Jun 17, 2019
173
44
1,610
The "real problem" is CPUs are used nowhere near to capacity from a software standpoint. We just keep adding more cores, more speed, more RAM, more X,Y,Z... instead of coding properly. Additionally, Intel purposely cripples compilers for non-Intel CPUs to use non-optimal CPU Dispatching (non-optimal code) if it discovers it's a non-Intel CPU. So what you end up with is a performance disparity between Intel & non-Intel CPUs that should perform almost identically. THAT is the real problem.
The problem with your argument is that Intel's own compiler is not that widely used in most software development, so that is kind of beating a dead horse. Nowadays Intel's compiler is mostly used in datacenter and HPC environments (for servers and supercomputers). The most widely used compiler is actually Microsoft's (the one behind the Microsoft Visual Studio development platform). Heck, besides Microsoft's own software, many games are also built with Microsoft's compilers. If you want to blame software preferences, then look at Microsoft as well.
 
Aug 29, 2019
35
7
35
The "real problem" is CPUs are used nowhere near to capacity from a software standpoint. We just keep adding more cores, more speed, more RAM, more X,Y,Z... instead of coding properly. Additionally, Intel purposely cripples compilers for non-Intel CPUs to use non-optimal CPU Dispatching (non-optimal code) if it discovers it's a non-Intel CPU. So what you end up with is a performance disparity between Intel & non-Intel CPUs that should perform almost identically. THAT is the real problem.

First of all, just for clarity - we are mostly talking about x86-based CPUs here, or more specifically AMD and Intel.

As a developer, including low-level work, I would say that more RAM and ever more speed have made developers lazy. It used to be that code was hand-optimized and people cared about memory optimization, but compilers have made developers lazy. I don't see it as Intel specifically crippling non-Intel CPUs, though. In my opinion it comes down to a few things:

  1. Applications and compilers, like Microsoft's compiler, share common code paths for both processors.
  2. Any instruction-set extension must be specifically enabled in the compiler and used by the application (see the sketch after this list).
  3. Use of multiple threads, and thus cores, is complicated by system design - storage, video devices, and other hardware are often single-threaded by design.
  4. There are of course other factors that make applications slow, storage for example; yes, solid state has helped.
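
As a sketch of point 2 (my own example, using GCC's target_clones function multi-versioning, nothing from this thread): the compiler can emit several variants of a function and resolve the best one at load time by feature flags rather than by vendor:

Code:
// GCC function multi-versioning: builds a baseline and an AVX2 clone of sum()
// and resolves the best one at load time based on CPU features, not vendor.
#include <cstdio>

__attribute__((target_clones("avx2", "default")))
double sum(const double *a, int n) {
    double s = 0.0;
    for (int i = 0; i < n; ++i)
        s += a[i];                  // auto-vectorized in the AVX2 clone
    return s;
}

int main() {
    double data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    std::printf("%.1f\n", sum(data, 8));
    return 0;
}

Compile with a recent GCC on x86-64 and the AVX2 clone is picked automatically on CPUs that support it, AMD or Intel alike.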


I seriously think it's a fantasy to expect two different architectures to perform identically - for example, Skylake and Sunny Cove perform differently because logic-unit changes in Sunny Cove increase performance at the same frequency. Similarly, Zen and other AMD designs have changed because of optimizations in the Zen processors.

Even though adding more cores is not a bad idea, I don't believe it's the ideal long-term solution. The most significant performance changes, I believe, will not happen until system I/O is improved, especially storage. I would personally like to see more computing power aimed directly at helping storage, similar to what we have with graphics nowadays.