Intel: 8th-Gen Processors Are 30% Faster Than 7th-Gen


scorpio14og

Prominent
May 31, 2017
Intel is starting to die off along with Microsoft... what sweet news (two lazy and greedy bastards with no real improvement for a decade).
 

ammaross

Distinguished
Jan 12, 2011


Cannonlake, due to slipping node-shrink schedules, is going to land on a "14nm++" process optimization, not a 10nm shrink. This simulated exercise with estimated values is likely reflecting an all-core TDP-limit throttle with unrestrained single-core boosts to 4GHz. Either way, they're just throwing more cores at the problem to make up for the huge disparity between sub-i7 parts and Ryzen. It will be interesting to see what core count AMD's APUs will have.
 

bigdragon

Distinguished
Oct 19, 2011
I am so glad that Intel is finally increasing its core count. They've had laptops and tablets constrained by dual-cores for far too long. AMD may not be in the mobile computing market yet, but their expected entrance is clearly influencing Intel to try and improve.

30% for double the core and thread count is disappointing though. That implies a reduction in per-core performance.
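To put a number on that implication (a back-of-the-envelope sketch assuming a perfectly threaded workload, which, as the reply below notes, the SYSMark benchmark is not):

# Illustrative arithmetic only: if a workload scaled perfectly with cores,
# doubling cores/threads at unchanged per-core speed would roughly double
# performance. A 30% overall gain with 2x the cores would then imply each
# core is contributing noticeably less.
cores_ratio = 2.0      # 2C/4T -> 4C/8T
overall_gain = 1.30    # Intel's claimed uplift
implied_per_core = overall_gain / cores_ratio
print(f"implied per-core throughput vs. the old part: {implied_per_core:.2f}x")  # ~0.65x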
 

blppt

Distinguished
Jun 6, 2008
"They've had laptops and tablets constrained by dual-cores for far too long."

Personally, I think it's disgusting that Intel still gets away with selling some mobile dual-cores labeled i7. Even i5 is pushing it; when most of us think i5, we think quad-core with no HT, not dual-core with HT.
 

InvalidError

Titan
Moderator

Not at all. SYSMark is a system benchmark intended to roughly represent typical business software performance, not a straight up CPU benchmark. Most of its workload isn't heavily threaded, so the extra cores and threads are of limited use. You get a ~15% performance increase from the clock frequency bump and another ~15% from the extra cores/threads.
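A quick sanity check of that split (using InvalidError's estimated percentages, not any official breakdown):

# Rough check: ~15% from the clock bump compounded with ~15% from the extra
# cores/threads lands right around Intel's ~30% headline number.
# (The 15% figures are the poster's estimates, not measured data.)
clock_gain = 1.15
threading_gain = 1.15
combined = clock_gain * threading_gain
print(f"combined uplift: {combined - 1:.0%}")  # ~32%, close to the claimed ~30%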
 

blppt

Distinguished
Jun 6, 2008
I'm wondering, to at least temporarily alleviate the "IPC brick wall" that Intel seems to have hit lately, why couldn't they just increase the on-chip cache, frequency, and/or bandwidth? I remember back in the day how much changes to cache size and speed could significantly help CPU performance. Is it because it would be so cost-prohibitive? Or does upgrading the cache just not help that much anymore?
 

There is always a point of diminishing or no returns. There's little point in adding cache if the code currently being run already fits within it.
 

vygag_

Reputable
Jan 18, 2016
Yeah, double the cores, double the threads, higher clock speed: shouldn't it be at least double the performance, even before counting the higher clock speed? Though it stays at 15W.
 

jwl3

Distinguished
Mar 15, 2008
Why would they yank out a 7500U and compare it to an 8th-gen quad-core processor? Shouldn't it be the 7700HQ (4-core) vs. the 8th-gen 4-core? Are they stacking the deck?
 

dudmont

Reputable
Feb 23, 2015


There's only so much die space, too. Intel seems wedded to onboard graphics and keeps devoting more space to it. It would be nice to see someone at Intel pull their head out of their sphincter, drop onboard graphics except on a few SKUs, redesign most of the dies, and devote that space to actual computing. *shrug*
 

InvalidError

Titan
Moderator

It is far more complicated than that. The "IPC brick wall" comes from the difficulty of keeping execution units fed. Every ~10th instruction in typical code is a branch, and Intel's CPUs look up to 192 instructions ahead to find things to put through execution units. Even looking that far ahead, the CPU only finds enough non-interfering instructions to fill an average of four of its 5-7 execution ports. Then you have speculative branch mispredictions, which cause work to get trashed some of the time. Since the scheduler is already looking nearly 20 loops ahead, there isn't much room left to make it look any further and still produce significant gains.

Increasing the clock speed is easier said than done too, as that would require making each stage of the execution pipeline complete its work quicker, which is typically achieved by adding pipeline stages. As Intel discovered with Netburst and AMD found out for itself with FX, sacrificing work done per clock to achieve higher clock frequencies yields horrible power efficiency and no net performance gain.

Making caches 10X larger won't help much. The main IPC bottleneck is simply the sequential and conditional nature of typical code.
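To illustrate the point that dependencies, rather than port count, are the limiter, here is a toy Python model (my own sketch, not anything Intel publishes): each instruction depends on a result produced a few instructions earlier, and even with no cap on issue width the average throughput stays in the neighborhood of four per cycle.

import random

# Toy dataflow model: instruction i depends on a result produced 1-8
# instructions earlier. ready_cycle[i] is the earliest cycle it could execute
# if issue width were unlimited, so the result shows the dependency-imposed
# ceiling on instructions per cycle, independent of how many ports exist.
random.seed(0)
N_INSTR = 10_000
N_PORTS = 6  # hypothetical port count, used only to report utilization

ready_cycle = [0] * N_INSTR
for i in range(1, N_INSTR):
    dep = max(0, i - random.randint(1, 8))   # depend on a recent result
    ready_cycle[i] = ready_cycle[dep] + 1    # one cycle after its input is ready

total_cycles = max(ready_cycle) + 1
ipc = N_INSTR / total_cycles
print(f"dependency-limited throughput: {ipc:.1f} instructions/cycle")
print(f"utilization of {N_PORTS} ports: {ipc / N_PORTS:.0%}")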
 

blppt

Distinguished
Jun 6, 2008
"AMD found out for itself with FX"

I'm still not sure to this day why anybody at AMD thought that repeating Intel's mistakes, just with more "physical" cores, was a bright idea. I wonder how many millions the Bulldozer/Piledriver mess cost them in revenue over the years versus just refining an older, higher-IPC architecture. It's like none of them paid attention to what allowed their own Athlons to succeed against the P4, and how Intel was able to reverse that with Core.

Either that or BD/PD was deep into development around the P4 era, before the "Core Revolution" happened at Intel, and it was looking like pure clock speed was going to be the new wave.
 

InvalidError

Titan
Moderator

Back then, Intel thought Netburst was going to hit 10GHz, and it had demonstrated an ALU running at 8GHz to back that belief up, but the clock ramp hit a steep process and power incline at ~3.6GHz, which is roughly where we still are today.

I'm curious to see how far ARM will go before hitting its clock, power, and IPC brick walls.
 

Karadjgne

Titan
Ambassador
Personally, it just sounds to me like Intel freaked out a little and threw something out there after realizing that their mainstream CPUs can't compete with Ryzen in anything but PC gaming, and that they've lost the nice, comfortable edge they enjoyed for so long over AMD's FX CPUs.
 
I'd welcome a 20% performance improvement at the same price, regardless of how they pull it off.

That said, it may be telling that they're only talking about mobile CPUs. It's easier to get performance gains from those than it is on high-end desktops. If they got that kind of performance boost (at the same price) in the HEDT line from one gen to the next, it'd get some serious attention. On mobile, not so much.

What would really suck is if they launched this new CPU at a 60% price increase as they've done with the HEDT line for the last few generations.
 
Because they're comparing products in the same place on their product stack. So if you go from the 15W Kaby Lake to 15W Coffee Lake, you see a 30% improvement. The problem is when they take a ~95W 4C/8T -K series and compare it to the Coffee Lake version that replaces it, you won't see a core/thread doubling, so you won't see near that increase unless...

...maybe Intel's increasing cores across the product stack? Unlikely.

 
It's pretty weird that I hit 3.6GHz on a Core 2 Quad Q9400 from 2008 on mediocre air cooling, and we're still only a bit beyond that. If I'd gone back nine years from then, I'd have been looking at my Pentium II MMX running at 400MHz in 1999.

I still use my Sandy Bridge i7-2600K from 2011 in my main desktop, which I ran at 4.5GHz for years. Nothing today gets much faster than 4.5GHz, and IPC improvements haven't been radical enough to really encourage an upgrade. Five generations, and it's only maybe a 25% improvement? If I went back six years before that (which is the age of that processor now), I'd have had a single-core Athlon 64 at 1.8GHz, which was totally obsolete in half that time.
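For scale, that rough figure works out to only a few percent per generation (simple compounding of the poster's estimate, not benchmark data):

# If five generations compound to ~25% total, the average per-generation
# gain is small. (1.25 is the poster's rough estimate, not a measurement.)
total_uplift = 1.25
generations = 5
per_gen = total_uplift ** (1 / generations) - 1
print(f"average per-generation gain: {per_gen:.1%}")  # ~4.6%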
 
FX? Did they mean Bulldozer? Because Athlon FX was awesome.

AMD made mistakes with Bulldozer, but the biggest one was actually that they expected software to utilize more cores a decade before it really happened (so more like 2020 is when cores will be properly utilized).

Intel's P4/Netburst was a BAD architecture because of critical flaws with its queue and branch prediction, at least that's what I remember. Implementing HT actually helped a lot there because it alleviated their backed-up queue problems. Now HT is worse when cores are properly loaded... and I think they have much shorter queues.

On the other hand... what made Athlons do well? I forget. I remember what made Pentium 4s do so badly that similarly clocked Pentium 3s often beat them.
 
Remember AMD's "The Future is Fusion", where they pushed the APU? That didn't work well. I wish everyone went back to graphics on the northbridge.
 

ERIC J

Honorable
Jan 14, 2014
Wow, I got behind the times; I'm still running a 3rd-gen 3770K overclocked for gaming, so I wonder how much of a performance increase I would see with an 8th-gen i7?
 


Well, it depends. What GPU are we talking about? What resolution? A 60Hz monitor or 144?

Most likely, no really worthwhile improvement. Wait for real Optane and upgrade then(?)
 

Karadjgne

Titan
Ambassador
I'm still pushing a 3770K @ 4.6GHz, a GTX 970, and 2x 1080p/60. Of course, I only have an AA game and a bunch of A-B-F's, so an upgrade really isn't worth the cash until the mobo quits. Even then it will prolly mean a trip through eBay to see what LGA1155 Z boards are left.
 

blppt

Distinguished
Jun 6, 2008
"AMD made mistakes with Bulldozer, but the biggest one was actually that they expected software to utilize more cores a decade before it really happened (so more like 2020 is when cores will be properly utilized)."

Even then, though, it still only really happens in ideal core-saturating cases such as an H.265 encode, and very rarely in the real world. You would think chip engineers and executives could have figured that out if we laypeople all could.

"Intel's P4 and Netburst was a BAD architecture because of critical flaws with it's queue and branch prediction, at least that's what I remember. Implementing HT actually helped a lot there because it alleviated their backed up queue problems. Now HT is worse when cores are properly loaded...and I think they have much shorter queues."

Well, that and its pipeline depth, which they *thought* was going to let them hit high GHz before they ran into that brick wall with the furnace known as Prescott. In simplest terms, they were chasing clock speeds instead of IPC, which is kind of the trouble AMD got into with BD/PD: higher physical core count at the expense of single-thread performance. When they pushed the clock speed to offset Intel's IPC/single-thread advantage, we got the atrocity that is my 9590, a 220W monster that got beaten in most things by my 80-or-so-watt 3770K.

Edit: That should read my "old 3770K", as I now have a 4790K in my Intel boxes.
 

Karadjgne

Titan
Ambassador
What AMD should have done with the 9-series is reverse-engineer an Intel CPU and make it better; they could easily have matched Intel's IPC with a 154W version of the i7 and called it the 9's. Instead they tried to do the same thing using their own 8320, which got them nowhere but hotter. Then they waited 10 or so years and finally did do it right, and now you get Ryzen, which is close enough to a multi-core Haswell as to not make much difference. Intel's answer to that kick in the pants is Coffee Lake, but they'll still have to pull a rabbit outta their (hats) if they want to keep bragging that 95% of the Internet is powered by Intel. It's gonna be interesting to see if their claims live up to the hype or if it's just another 'tick-tock-thunk' like the Broadwells were.
 

InvalidError

Titan
Moderator

Broadwell was supposed to be mobile/portable/embedded only, and between the effective non-availability of socketed models and the outrageous MSRPs when Intel decided to announce a skinny desktop lineup to appease people upset about not getting desktop Broadwell, that's effectively what we got. When your customers demand a product you do not want to manufacture, you make it nearly impossible to obtain and price it out of the market.
 
Status
Not open for further replies.