News Intel's stock drops 30% overnight — company sheds $39 billion in market cap

Also can't forget "X86S" aka the adoption of FRED which I think would effectively be the biggest x86 cleanup to date.

https://www.intel.com/content/www/u...visioning-future-simplified-architecture.html
They have different purposes. X86S is about simplifying the ISA by removing some legacy cruft that's no longer used. APX is about patching the ISA to have some of the features and characteristics of ARM and RISC-V (plus a few other goodies). The former enables hardware implementations to be simplified, whereas the latter enables compilers to generate more efficient code.

The nice thing about APX is that you can have the same ABI and just dispatch down different codepaths, based on whether a CPU has it. The downside is that it doesn't go as far towards a total cleanup of the ISA as you could do with a clean sheet design.
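To illustrate the "same ABI, different codepaths" point, here's a minimal C sketch of runtime dispatch. The APX feature bit I check (CPUID leaf 7, sub-leaf 1, EDX bit 21, i.e. APX_F) is my reading of Intel's APX documentation and should be verified against the current spec; the sum functions are just placeholders for real APX-tuned vs. baseline builds of the same routine.

/* Minimal sketch: choose an APX-built or baseline implementation at runtime.
 * Assumption: APX_F is reported in CPUID.(EAX=7,ECX=1):EDX bit 21 -- check
 * Intel's APX spec before relying on this. */
#include <cpuid.h>

static int cpu_has_apx(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx))
        return 0;                 /* leaf 7, sub-leaf 1 not supported */
    return (edx >> 21) & 1;       /* assumed APX_F bit */
}

/* Both variants share one signature, so callers never care which one runs. */
static long sum_baseline(const int *v, long n)
{
    long s = 0;
    for (long i = 0; i < n; i++)
        s += v[i];
    return s;
}

static long sum_apx(const int *v, long n)
{
    /* In a real project this translation unit would be compiled with -mapxf,
     * letting the compiler use the extra registers; the C source is identical. */
    long s = 0;
    for (long i = 0; i < n; i++)
        s += v[i];
    return s;
}

long sum(const int *v, long n)
{
    static long (*impl)(const int *, long);
    if (!impl)
        impl = cpu_has_apx() ? sum_apx : sum_baseline;
    return impl(v, n);
}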

Regarding FRED, I think you're off the mark. It has nothing to do with X86S, as far as I'm aware. I believe it's about optimizing userspace/kernel transitions, which should improve syscall performance.
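If you want a feel for what FRED is targeting, you can time a trivial syscall in a tight loop; the cost is almost entirely the user/kernel transition. This Linux microbenchmark is my own illustration (syscall choice and loop count are arbitrary), not anything from Intel's FRED material:

/* Rough measure of syscall round-trip cost -- the overhead FRED aims to cut.
 * getpid is used because the kernel does almost no work for it, so the time
 * measured is dominated by the transition itself. */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const long iters = 10000000;          /* arbitrary iteration count */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++)
        syscall(SYS_getpid);              /* forces a real kernel entry each time */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("~%.0f ns per syscall round trip\n", ns / iters);
    return 0;
}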
 
Regarding FRED, I think you're off the mark. It has nothing to do with X86S, as far as I'm aware. I believe it's about optimizing userspace/kernel transitions, which should improve syscall performance.
Yeah, I was misremembering the video Ian Cutress and George Cozma did where they covered both; from what it sounded like, X86S will only work with FRED.
 
The financials have nothing to do with RPL, but the first half is certainly right.
I'm not talking about finances.
I am talking about the total lack of innovation and advancement from 3rd gen to 12th gen.
11th gen desktop was pretty bad, a waste of silicon even.
But that was on the previous CEO.

Gelsinger, on the other hand, rushed 13th gen's development cycle. Adding more cores and boosting frequency compared to 12th gen turned out to be not as easy as initially expected.
 
They had competition, but their marketing department was very good at explaining you needed an Intel CPU for gaming to a point where a lot of people went Intel simply because of the marketing hype.
I'm sorry, but Phenom and Bulldozer aren't exactly competition.

In the last 10 years, AMD has been steadily getting its act together, while Intel has been imploding. First they had record losses that left us all slack-jawed; now we have this.

I'm frankly surprised Intel isn't about to do a SyQuest and file for bankruptcy.
 
I'm not talking about finances.
I am talking about the total lack of innovation and advancement from 3rd gen to 12th gen.
11th gen desktop was pretty bad, a waste of silicon even.
So, Ivy Bridge to Rocket Lake? I think that's an exaggeration. Haswell introduced AVX2 and TSX/HLE (sadly, it didn't really work until Skylake). Broadwell introduced per-core throttling, to better deal with AVX2-induced clock throttling. Skylake-SP introduced AVX-512, although this turned out to be a couple of nodes too soon. There were also substantial improvements in Intel's QuickSync video compression engine in Haswell and Skylake.

What I think we can now say is that the main missed opportunities were:
  • more cores for mainstream CPUs, in Ivy Bridge or Broadwell.
  • more IPC in Haswell and Skylake.

Still, your remark about Rocket Lake (Gen 11) argues that Intel probably didn't leave too much IPC on the table, or else they'd have ended up with huge, hot cores like those. The hybrid core strategy of Alder Lake is what enabled them to overcome that hurdle.

Gelsinger, on the other hand, rushed 13th gen's development cycle.
Intel always releases a new CPU every year. From what I've heard, Raptor Lake was just done to buy them more time for Meteor Lake. So, it's really more like the opposite of what you're saying.

Adding more cores and boosting frequency compared to 12th gen turned out to be not as easy as initially expected.
You blame such decisions on the CEO of a company with > 100k employees? Wow. I wouldn't be surprised if he couldn't even tell us how many cores the Raptor Lake die has, what its peak boost frequencies are, or its TDP rating.

It's said the CEO of a company is its external face. The COO is the one running things internally. Even then, they're dealing at the level of which markets to play in, mergers & acquisitions, hiring & firing VPs & general managers, litigation, and tax & finance-related issues. The C-suite of a big multinational looks a lot more like an investment bank than whatever you probably think they do.
 
It may feel good to root for Intel's downfall after some of the things they've done over the years, but this is unfortunate news for all of us. I've owned both AMD and Intel products since way back in the '90s. Whether you're a fan or a critic, the reality is that competition is crucial for consumers. I don't think Intel is going anywhere, but if they falter, do you really believe AMD won't end up in the same position without competition in 5-10 years? Competition breeds innovation and lower prices for us, the consumers. Just look at what Nvidia has done to GPUs as the monopoly powerhouse. I personally hope Intel gets its act together; sometimes companies need a rough patch to right the ship, and I hope it doesn't sink the ship.
 
I'm sorry, but Phenom and Bulldozer aren't exactly competition.

In the last 10 years, AMD has been steadily getting its act together, while Intel has been imploding. First they had record losses that left us all slack-jawed; now we have this.

I'm frankly surprised Intel isn't about to do a SyQuest and file for bankruptcy.

I was not talking about the past. I was talking about the present. Most gamers will still purchase an Intel CPU because it's "better" than a Ryzen 5000 or 7000 for gaming.
That belief is based on marketing from the past 10 years.

Same is true for video cards, btw. Apparently you can't game decently on an AMD card; it has to be an Nvidia card...
 
So, Ivy Bridge to Rocket Lake? I think that's an exaggeration. Haswell introduced AVX2 and TSX/HLE (sadly, it didn't really work until Skylake). Broadwell introduced per-core throttling, to better deal with AVX2-induced clock throttling. Skylake-SP introduced AVX-512, although this turned out to be a couple of nodes too soon. There were also substantial improvements in Intel's QuickSync video compression engine in Haswell and Skylake.

What I think we can now say is that the main missed opportunities were:
  • more cores for mainstream CPUs, in Ivy Bridge or Broadwell.
  • more IPC in Haswell and Skylake.

Still, your remark about Rocket Lake (Gen 11) argues that Intel probably didn't leave too much IPC on the table, or else they'd have ended up with huge, hot cores like those. The hybrid core strategy of Alder Lake is what enabled them to overcome that hurdle.


Intel always releases a new CPU every year. From what I've heard, Raptor Lake was just done to buy them more time for Meteor Lake. So, it's really more like the opposite of what you're saying.


You blame such decisions on the CEO of a company with > 100k employees? Wow. I wouldn't be surprised if he couldn't even tell us how many cores the Raptor Lake die has, what its peak boost frequencies are, or its TDP rating.

It's said the CEO of a company is its external face. The COO is the one running things internally. Even then, they're dealing at the level of which markets to play in, mergers & acquisitions, hiring & firing VPs & general managers, litigation, and tax & finance-related issues. The C-suite of a big multinational looks a lot more like an investment bank than whatever you probably think they do.
I feel that Broadwell to Skylake was when the SHTF cycles started. Intel should've already started upping the core count. They also sat on 14nm for over half a decade with the Skylake series.
 
I feel that Broadwell to Skylake was when the SHTF cycles started. Intel should've already started upping the core count. They also sat on 14nm for over half a decade with the Skylake series.
The plan was there to increase core counts as early as 2015. The problem was manufacturing issues. They weren't going to increase core counts on 14 nm - the plan was to do it at 10 nm. As 10 nm kept getting postponed and AMD launched Ryzen, Intel was forced to do the best it could on its aging 14 nm. And they did a decent job, until Rocket Lake, which was a complete failure.
 
It may feel good to root for Intel's downfall after some of the things they've done over the years, but this is unfortunate news for all of us.
The thing that really shook me is the Raptor Lake degradation. I've always had the impression that Intel CPUs would be reliable and durable. If we can no longer assume that, then I'm not sure where to turn.

Back in the bad old days of the Pentium 4, I bought one for 3 reasons:
  1. Because it had the latest technologies, like SSE3 and hyperthreading.
  2. Because I expected it to be stable.
  3. To support Intel, which was taking a real beating from AMD.

For those reasons, I was even willing to put up with elevated temperatures and the need for a higher-end cooler. But, this degrading CPU fiasco has shaken my confidence in a way that is going to be hard for Intel to recover from. For one thing, it shows they don't do the kind of testing and modelling to ensure long life that I always assumed they did. For another, it suggests they prioritized speed over correctness, and I firmly believe that speed only matters if you can be certain the results are correct. Maybe gamers can deal with some degree of instability, but I won't accept it.

I personally hope Intel gets its act together; sometimes companies need a rough patch to right the ship, and I hope it doesn't sink the ship.
I'm not really too worried about the CPU design group. Arrow Lake sounds pretty good and they have even more promising developments in the future. I think Sierra Forest and Granite Rapids will significantly narrow the gap with AMD, in those markets. ARM & RISC-V could pose a real threat, but it's not like Intel can't play in those markets. Margins won't be as good and competition will be more fierce, but Intel and AMD can design good CPUs for any ISA.

Long term, I'm probably most worried about their fabs. We need a competitive foundry outside of Asia, just for supply chain stability.
 
I feel that Broadwell to Skylake was when the SHTF cycles started. Intel should've already started upping the core count. They also sat on 14nm for over half a decade with the Skylake series.
14 nm was about a year late, which resulted in a Haswell Refresh, instead of a desktop Broadwell. Then, their 10 nm process had a seemingly endless series of problems and delays, which is how we ended up getting stuck on Skylake for so long, and the abomination that was Rocket Lake (which was Ice Lake, back-ported to 14 nm).

The plan was there to increase core counts as early as 2015. The problem was manufacturing issues. They weren't going to increase core counts on 14 nm - the plan was to do it at 10 nm.
But that's the problem. They clearly could increase mainstream core counts on 14 nm and even before - they just decided they could string people along a bit longer. Heck, Sandy Bridge-E showed they could do 6 cores on a 32 nm node, for an MSRP of $555 (i7-3930K).

Given that, it seems pretty clear to me they could've brought more cores to the mainstream on 22 nm. Perhaps that's what Haswell Refresh really should've been!

The biggest knee-slapper was really Kaby Lake, which was so similar to Skylake that it almost seemed like a re-binned version of the same silicon with some microcode tweaks. If ever there was a time they should've & could've added cores but didn't, it was that generation. I think it really shows us that Intel was doing everything possible to delay the mainstream moving up to 6 cores, for as long as possible. Without Ryzen and their continued 10 nm delays, I think they'd have probably dragged it out even further!
 
But that's the problem. They clearly could increase mainstream core counts on 14 nm and even before - they just decided they could string people along a bit longer. Heck, Sandy Bridge-E showed they could do 6 cores on a 32 nm node, for an MSRP of $555 (i7-3930K).

Given that, it seems pretty clear to me they could've brought more cores to the mainstream on 22 nm. Perhaps that's what Haswell Refresh really should've been!

The biggest knee-slapper was really Kaby Lake, which was so similar to Skylake that it almost seemed like a re-binned version of the same silicon with some microcode tweaks. If ever there was a time they should've & could've added cores but didn't, it was that generation. I think it really shows us that Intel was doing everything possible to delay the mainstream moving up to 6 cores, for as long as possible. Without Ryzen and their continued 10 nm delays, I think they'd have probably dragged it out even further!
Well, they had cheap 6-cores as early as 2014, at a $389 MSRP. Considering that almost 10 years later 6-cores still launch at $300, that was cheap back then, so what's really the issue?
 
Well, they had cheap 6-cores as early as 2014, at a $389 MSRP. Considering that almost 10 years later 6-cores still launch at $300, that was cheap back then, so what's really the issue?
The problem with those CPUs was the platform. Those motherboards were more expensive and not mainstream or commonly available in smaller form-factors.

I used to assume that more cores naturally required more memory channels, but I think those old 6-cores weren't anywhere close to being memory-bottlenecked, for most things. The real point of their LGA 2011 platforms having more memory channels seems to be just enabling larger memory capacities. Maybe also supporting more PCIe lanes, but mainstream CPUs didn't have that problem.
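Quick back-of-envelope to support that (my own numbers, not from the thread): peak bandwidth is roughly channels × 8 bytes × transfer rate. Dual-channel DDR3-1600 works out to 2 × 8 B × 1600 MT/s ≈ 25.6 GB/s, or about 4 GB/s per core on a 6-core; quad-channel DDR3-1866 on LGA 2011 is 4 × 8 B × 1866 MT/s ≈ 60 GB/s, roughly 10 GB/s per core. Most desktop workloads of that era didn't come close to sustaining even the lower figure, so capacity and PCIe lanes really do look like the main reasons for the bigger platform.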
 
Those motherboards were more expensive
While they were more expensive, I'd argue the premium wasn't really that bad unless you compared against bottom-of-the-barrel desktop boards. The lowest X79/X99/X299 boards were all way better than the Z68/Z77/Z170/Z270 ones, so if you compared equal features the premium was a lot lower.
not mainstream
Not really sure what you mean by this, given that board availability was very high, with a lot of different models available from all of the manufacturers.
commonly available in smaller form-factors.
ITX certainly wouldn't make sense for a CPU with that many PCIe lanes, but there being only a couple of mATX boards each generation was a real shortcoming.
 
Didn't they also strip hyperthreading from i7s at one point? One page I found pointed to them doing it to the i7-9700K.

When greed and resting on your laurels catch up to you.
Both the 8th and 9th Gen lineups had some CPUs without hyperthreading. For 8th Gen it was all the i5s, which were 6-cores; for 9th Gen it was everything i7 and below, but core counts went up to 8 from 8th Gen's 6, and the new i9s had 8 cores with hyperthreading. The i7-8700/8700K were the only 8th Gen parts with hyperthreading.
 