News Intel Rocket Lake Six-Core CPU Shows Off 4.2 GHz Boost Clock

The paradigm is changing in the data center - the disaggregation of monolithic servers into pools of CPUs, pools of GPUs, pools of traditional DRAM and non-volatile memory, pools of AI processors, pools of FPGAs - all connected with CXL over PCIe 5 and 6. This trend started with SANs - storage was no longer housed in a server, but in a standalone system (which was basically a server itself).
What are you talking about? Virtualization did the "disaggregation" a long time ago. Storage is actually starting to go the opposite way, with more and more companies going hyper-converged. When the biggest CPUs you could get were 8 cores, that made it harder to run VMware vSAN or Software Defined Storage like DataCore VSanSymphony. Now the large RAM and CPU resources you can get on a host make running those solutions very nice. You can use off-the-shelf enterprise SSDs for your storage with either iSCSI or Fibre Channel backing and not have to worry about being vendor-locked on your disks.
 
Netburst's problem wasn't so much a "too much power" issue as it was a "too many sacrifices for clocks" one, since Intel sacrificed 40-50% of its IPC relative to Coppermine/Tualatin to get there. Netburst needed to clock almost twice as high out of the gate to beat the P3 under all circumstances - kind of awkward when your 2GHz top-end new part barely beats your 1-1.3GHz previous-gen parts.

As for 10nm and beyond, hitting the physical node dimensions (or at least something close enough to call it as such) is only half the battle; you still have to get the process to yield the expected performance and quantities. As far as we can tell from the available Ice Lake SKUs, 10nm+ does not appear to be there yet. Hopefully things will go more smoothly for 7nm in 2021.

I'm not expecting much out of 10nm. At this point, 10nm is mainly about Intel having to prove to investors that the billions it spent getting 10nm to work as intended were not completely wasted, after telling investors that 10nm was "on track" while delaying 10nm products due to setbacks for years in a row.
It wasn't quite that bad. In certain tasks, yes, an overclocked 1.4GHz Tualatin P3 could beat a 1.4GHz Willamette P4, but in other stuff the P4 was quite a bit faster. And once P4 hit 2+ GHz, it generally wasn't even close. You had to carefully cherry pick tests to get better results on the P3.

P4 Willamette was the 'bad' first gen part, but Northwood was very good overall and rapidly scaled to 3GHz and more. At that point, it was faster than any of AMD's Athlon XP chips -- it was only Athlon 64 that actually took the lead, in part thanks to its integrated memory controller. And really, you needed the much more expensive socket 940 chips, or later socket 939 -- socket 754 was okay but didn't always match or beat P4 Northwood chips.

I had an overclocked Tualatin, incidentally, and later upgraded to a Pentium 4 rig. It was a very noticeable jump in overall performance. And at stock clocks, Tualatin for desktops topped out at 1.13GHz (which was why it was overclocked to 1.4GHz). Some of that was memory as well -- P4 could do quite nicely with the right memory setup. Rambus was technically faster, but the later chipsets with DDR support weren't bad.

The thing is, Willamette was first gen NetBurst, so you can sort of understand some of the mistakes that were made in retrospect. Northwood fixed a lot of those, but then Prescott went off a cliff and just couldn't scale to the frequencies and performance Intel wanted. Even worse, Tejas -- which actually was super close to a public release and had been sampling for many months to testers and other places -- didn't really help and had even worse heat and power characteristics. Which is why Intel pivoted, killed off P4, and came out with Core 2 Duo / Merom / Conroe (after the success of Yonah and the Core Solo/Duo chips).

Anyway, I'm very curious to hear what precisely Intel does with Rocket Lake to try and keep it relevant. PCIe Gen4 is not going to be nearly sufficient. Integrated Xe Graphics won't really matter either, since these are desktop chips that will just use a dedicated GPU. I have trouble imagining the Willow Cove architecture alone will be anywhere close to sufficient to compete against Zen 3 and maybe even Zen 4. It's all just a holding pattern and business as usual while Intel works to get out proper 10nm or 7nm desktop parts.
 
What are you talking about? Virtualization did the "disaggregation" a long time ago. Storage is actually starting to go the opposite way, with more and more companies going hyper-converged. When the biggest CPUs you could get were 8 cores, that made it harder to run VMware vSAN or Software Defined Storage like DataCore VSanSymphony. Now the large RAM and CPU resources you can get on a host make running those solutions very nice. You can use off-the-shelf enterprise SSDs for your storage with either iSCSI or Fibre Channel backing and not have to worry about being vendor-locked on your disks.
Don't feed the trolls. :)

I've lost count of the number of different places Deicidium "worked" over the years. And pointing at SGI as a way of doing things doesn't make sense, considering the company got steamrolled by competitors and never had a proper desktop solution. It sold insanely expensive workstations that could easily cost $50K or more, and then when that market started getting overtaken by $2000 Windows PCs it utterly failed to adapt.
 

InvalidError

Anyway, I'm very curious to hear what precisely Intel does with Rocket Lake to try and keep it relevant. PCIe Gen4 is not going to be nearly sufficient. Integrated Xe Graphics won't really matter either, since these are desktop chips that will just use a dedicated GPU.
I'm not expecting anything beyond that and whatever hardware exploit mitigations got built into Willow Cove. Most of Willow's IPC gains will go into offsetting its lower sustainable clocks and that'll be about it.
 
AMD surpassed Intel on average in IPC with the 3000 series. Due to a significant clock advantage, which has nothing to do with IPC, Intel still maintains the per core performance title.
You guys should start linking the benchmarks you get your opinions from.
See, like that. The 10900K is "only" 3% faster than the 3900X that has 20% more cores.
https://www.phoronix.com/scan.php?page=article&item=intel-10500k-10900k&num=10

Here is the geometric mean of all the CPU benchmarks carried out besides the gaming tests. The Core i9 10900K came out to being about 25% faster than the Core i9 9900KS but only 3% faster than the Ryzen 9 3900X. The Core i5 10600K meanwhile was striking mid-way between the Ryzen 5 3600X and Ryzen 7 3700X.

Due to a significant clock advantage, which has nothing to do with IPC
It's instructions... per clock (or cycle).
If you execute more instructions per clock, you need fewer clocks to do the same amount of work.
If you run at more clocks per second, you need fewer instructions per clock to do the same amount of work in the same time.

Both are equally important and whoever has the most of both is king in everything.
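
To put rough numbers to that, here's a tiny back-of-the-envelope sketch - the IPC and frequency values are purely hypothetical, just to show the relationship:

```python
# Toy model: per-core throughput ~ IPC x clock frequency.
# All figures below are made up for illustration, not real chip data.

def core_throughput(ipc, ghz):
    """Billions of instructions retired per second on one core."""
    return ipc * ghz

chip_a = core_throughput(ipc=5.0, ghz=4.0)  # higher IPC, lower clock
chip_b = core_throughput(ipc=4.0, ghz=5.0)  # lower IPC, higher clock

print(chip_a, chip_b)  # both come out to 20.0 -- the same per-core performance
```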
 
You guys should start linking the benchmarks you get your opinions from.
See, like that. The 10900K is "only" 3% faster than the 3900X that has 20% more cores.
https://www.phoronix.com/scan.php?page=article&item=intel-10500k-10900k&num=10

It's instructions... per clock (or cycle).
If you execute more instructions per clock, you need fewer clocks to do the same amount of work.
If you run at more clocks per second, you need fewer instructions per clock to do the same amount of work in the same time.

Both are equally important and whoever has the most of both is king in everything.
Not to get too bogged down in the weeds, but that geometric mean chart is ... not showing what I'd expect. It's not entirely clear where the numbers come from -- did the author include the performance per dollar results? And how did he adjust the "lower is better" scores to have them be correct for a geometric mean?

I took way too long typing in the numbers, but after stuffing everything into Excel for the 3950X, 3900X, 3700X, 10900K, and 9900K ... well, the numbers don't add up. If you just take the straight geometric mean of all non-gaming tests (the charts on pages 2-5), the numbers are:

3950X: 59.61
3900X: 57.85
3700X: 56.30
10900K: 57.59
9900K: 56.69

Two things to note: one is that the numbers are WAY CLOSER than shown in the full geomean chart on page 10. Second is that this is an incorrect method of taking the geomean as the "lower is better" scores need to be inverted (for the geomean to make sense, all numbers have to be "higher is better" or "lower is better" -- not a mix). So I did that, multiplying the result by 10 just to make the 3950X result close to what was shown. That gives the following:

3950X: 38.23
3900X: 37.10
3700X: 36.11
10900K: 36.93
9900K: 36.57

So, that still doesn't explain what data is being used for the "Geometric Mean Of All Test Results" -- because by my numbers, using the full set of results the 3950X is only 3% faster than the 3900X, 5.9% faster than 3700X, 3.5% faster than the 10900K, and 5.2% faster than the 9900K. Yet the author's chart shows the 3950X leads by: 15.5%, 43.6%, 11.5%, and 42.3%. His results would make more sense if you're only looking at tests where multi-threaded scaling is good, but I'm not sure that's what's going on.

My spreadsheet is here if you'd like to check what I've come up with: https://drive.google.com/file/d/1c1MJLHRJ5UgXl-f86KRxl8Wn6Ujiql3i/view?usp=sharing
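
For anyone who wants to reproduce it outside of Excel, the spreadsheet is essentially doing this (a quick Python sketch - the scores here are made-up placeholders, not the actual Phoronix numbers):

```python
# Geometric mean across mixed benchmarks: invert the "lower is better"
# scores first so every value points in the "higher is better" direction.
# Placeholder scores only -- not the real Phoronix data.
from math import prod

results = [
    {"score": 125.0, "lower_is_better": False},  # e.g. a throughput test
    {"score": 42.5,  "lower_is_better": True},   # e.g. a render time in seconds
    {"score": 3.1,   "lower_is_better": True},   # e.g. a compile time in minutes
]

normalized = [1.0 / r["score"] if r["lower_is_better"] else r["score"]
              for r in results]

geomean = prod(normalized) ** (1.0 / len(normalized))
print(round(geomean * 10, 2))  # x10 scaling, same as in the spreadsheet
```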

Anyway, I'm not saying the testing was wrong, but something in the "Geomean" chart seems off. I'm going to ping the author for comment, hopefully he can explain the charts (or correct them for future testing).
 
His results would make more sense if you're only looking at tests where multi-threaded scaling is good, but I'm not sure that's what's going on.
Well, that would be one way to interpret the geomean: get rid of outliers, in this case anything not running as well as wanted.

So how do you see the individual apps they used - are they representative of what a normal person might run?! It does have rendering, FLAC, and Inkscape, but what about the rest? Is it just mumbo jumbo or something that actually gets used on a day-to-day basis?
 
Well, that would be one way to interpret the geomean: get rid of outliers, in this case anything not running as well as wanted.

So how do you see the individual apps they used - are they representative of what a normal person might run?! It does have rendering, FLAC, and Inkscape, but what about the rest? Is it just mumbo jumbo or something that actually gets used on a day-to-day basis?
No, throwing out outliers is absolutely not "one way to interpret geomean." The chart says "Geometric Mean of All Test Results" and the text says "Here is the geometric mean of all the CPU benchmarks carried out besides the gaming tests." That's pretty freaking clear.

Geometric mean is a math formula that gives "unbiased" weight to each result. It's useful when trying to find an "average" of a bunch of numbers that aren't in the same unit. So if you average (arithmetic mean) 1, 100, and 1000000 you end up with 333367 -- the large number totally outweighs the small number. Geometric mean gives each equal weight, and you get 464.16.
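
If you want to check those two numbers yourself, Python's statistics module will reproduce them:

```python
# Arithmetic vs. geometric mean of wildly different magnitudes.
from statistics import mean, geometric_mean

values = [1, 100, 1_000_000]
print(round(mean(values)))               # 333367 -- dominated by the big number
print(round(geometric_mean(values), 2))  # 464.16 -- each value weighted equally
```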

Considering the units of the benchmarks being used at Phoronix, you'd absolutely have to use a Geomean calculation -- some test results are in the 0-20 range, one test is in the 900K-2M range, and another is in the 21M-47M range. Using the geomean isn't the issue; using it incorrectly or not explaining what your chart actually says is a problem.

As to the benchmarks themselves, it's a Linux-based OS so for home users it represents maybe 1-2% of all users. It's more representative of server and virtualization workloads I'd say. 99% of people never do 3D rendering, probably at least 95% don't do video encoding or editing, another 97% (give or take) won't ever do anything related to compiling code, etc. But professionals in the computing industry would use all of those at much higher rates, and professionals are precisely the people who would be looking at buying a 3950X or maybe 10900K.
 

Deicidium369

Netburst's problem wasn't so much a "too much power" issue as it was a "too many sacrifices for clocks" one, since Intel sacrificed 40-50% of its IPC relative to Coppermine/Tualatin to get there. Netburst needed to clock almost twice as high out of the gate to beat the P3 under all circumstances - kind of awkward when your 2GHz top-end new part barely beats your 1-1.3GHz previous-gen parts.

As for 10nm and beyond, hitting the physical node dimensions (or at least something close enough to call it as such) is only half the battle; you still have to get the process to yield the expected performance and quantities. As far as we can tell from the available Ice Lake SKUs, 10nm+ does not appear to be there yet. Hopefully things will go more smoothly for 7nm in 2021.

I'm not expecting much out of 10nm. At this point, 10nm is mainly about Intel having to prove to investors that the billions it spent getting 10nm to work as intended were not completely wasted, after telling investors that 10nm was "on track" while delaying 10nm products due to setbacks for years in a row.

Ice Lake was 10nm. I enjoy the overall pretty large speed-up with my Ice Lake 1065G7 over my previous Dell 13 2-in-1 - noticeably quicker, more responsive, and ~3hrs more battery.

10nm+ - pretty sure the yield is quite healthy: Tiger Lake, which will be mass market, a 10nm Xeon, and I would imagine Stratix will also see benefits from going from 10nm to 10nm+. There will also be a discrete Xe HP AIC on 10nm+ as well.

One of the early clues that 10nm+ was on track was limiting Cooper Lake to 4 & 8 sockets, and ICL to 1 and 2 sockets. Intel had to be pretty confident to make that change - and the workloads are much different between Ice Lake and Cooper Lake; if they weren't, then Cooper would be available for 1, 2, 4 & 8 sockets.

As far as investors go - most of them don't seem to care as long as the dividend is still growing, regardless of delays in a particular process.

10nm will be a short node - without a doubt. The work with cobalt as an entire conductive layer will be, and probably already has been, applied to Intel's 7nm... With 10nm+ we are getting the 2.7x density increase that set Intel back with 10nm.

Hope you and yours are well.