Intel announces Arrow Lake fix coming within a month — Robert Hallock confirms poor gaming performance is due to optimization issues


Pierce2623

Prominent
Dec 3, 2023
Zen 5 launch, mixed bag of results: "stick with Zen 4 for now, but it'll get better"
Intel ARL launch, mixed bag of results: "ARL is a disaster, Intel must die"

It may be fashionable (and warranted in some cases) to pig-pile on Intel right now, but maybe cut them the same slack as others for a product fresh out of the oven? A strong Intel is good for the industry.
Zen5 didn’t show regression. That’s a big difference.
 

A Stoner

Distinguished
Jan 19, 2009
Zen5 didn’t show regression. That’s a big difference.
It also came with significant power savings and still grew performance. The 9800X3D came with a power increase and significant improvements.

Arrow Lake, for whatever reason, saved power (process incapable of handling more power?) and lost performance. Intel has not been shy about using their CPUs as room heaters; I know, because out of the 6 computers in my house right now, 5 of them are Intel. That power consumption was one of the biggest reasons I chose to upgrade to the 7950X3D.

I am hoping the 9950X3D does the same thing: bumps the power a bit, opens it up to overclocking, has cache under both compute chiplets, and delivers a 30% boost in game performance.

If it does, I will upgrade my 7950X3D and use the 7950X3D in my backup computer.
 

bit_user

Titan
Ambassador
Arrow Lake, for whatever reason, saved power (process incapable of handling more power?) and lost performance. Intel has not been shy about using their CPUs as room heaters; I know, because out of the 6 computers in my house right now, 5 of them are Intel. That power consumption was one of the biggest reasons I chose to upgrade to the 7950X3D.
The 285K is certainly capable of burning quite a lot of power:

[Chart: CPU power consumption, 285K vs. 9950X]


Even at stock settings, it's burning 245W vs. the 9950X's 190W. It also seems to overclock pretty well, which suggests to me that power-handling might not be the main thing holding it back.

With the 9950X, the penalty you pay is on idle power. This seems to be the fault of keeping the same I/O die they used in Ryzen 7000.

[Chart: idle power consumption]


If this is a gaming machine with a big GPU and power-hungry memory, maybe that extra 25W isn't such a big deal. Your system's idle power going from, say, 60W to 85W probably won't change much about how you use it.
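
For a rough sense of scale, here's a minimal back-of-the-envelope sketch (the idle hours and the $0.15/kWh rate are my own assumptions, not figures from the charts) of what an extra 25W at idle works out to over a year:

# Rough cost of an extra 25 W at idle (assumed values, not review data).
IDLE_DELTA_W = 85 - 60          # extra idle draw in watts
IDLE_HOURS_PER_DAY = 8          # assumed hours the machine sits idle each day
RATE_USD_PER_KWH = 0.15         # assumed electricity price

extra_kwh_per_year = IDLE_DELTA_W * IDLE_HOURS_PER_DAY * 365 / 1000
extra_cost_per_year = extra_kwh_per_year * RATE_USD_PER_KWH

print(f"Extra energy: {extra_kwh_per_year:.0f} kWh/year")  # ~73 kWh
print(f"Extra cost:   ${extra_cost_per_year:.2f}/year")    # ~$11

Even doubling the idle hours only lands around $20 to $25 a year, which is noise next to what a big GPU adds to the power bill.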

Since I don't really need an upgrade right now, I'll be sitting out this generation. Maybe when both Intel and AMD update their IO chiplets, latency and power will improve, respectively.
 
Oct 29, 2024
He wants to really stress that they are internalizing the mistakes and want to fix the issues. So brave, coming from a company that has consistently pointed fingers at everyone else while they were the problem the entire time for many years now. It does not come off as genuine whatsoever. It comes off as "oh, we probably can't tell that lie anymore, so we'll try something else - what about this one: somehow our QC team did not realise that the chips were not working as intended, so there is some discrepancy with reviewers." It's just plain stupid. So nobody at Intel thought to try a consumer motherboard with a retail version of Windows, or what are you saying exactly, Mr. Marketing Guy? Until every executive is fired from that company, I'm not looking at Intel as a serious company anymore. It's just pure brand and profit margins.

They are the textbook example of everything wrong with publicly traded multi-billion-dollar companies. It's so funny how a few years ago they were like, "yeeea, we're just gonna throw money at the problem until we come out on top; our GPUs and the benefits of pairing them with an Intel CPU will pwn", and a couple of bad quarters later, here we are - instantly the money loses trust in its own plan :D

Like Pat's grand plan - a few bad quarters and it's all thrown out the window... and the copers start reasoning about WHY it is smart - nobody said that back when the grand plan was presented, lol. It's so obviously a <Mod Edit> argument to cover for a bad situation caused by incompetence. And all people with a normal head knew from the start that the grand plan was just a tad optimistic. Now it is genius to trash an entire node, because the new node is so much better - nobody mentions WHY it is so much better, which is because 20A was so horribly bad that it's not practical to mass produce. It's not a choice of genius, it's a choice forced by one's own failure.
 
Last edited by a moderator:

bit_user

Titan
Ambassador
all people with a normal head knew from the start that the grand plan was just a tad optimistic. Now it is genius to trash an entire node, because the new node is so much better - nobody mentions WHY it is so much better, which is because 20A was so horribly bad that it's not practical to mass produce. It's not a choice of genius, it's a choice forced by one's own failure.
I was always suspicious of nodes like Intel 4 and 20A. It just seemed weird to have those nodes, when only a couple of their products were using them.

The other weird thing about 20A vs. 18A is how they both have the same features. It's not even like Intel said "we'll do GAA in 20A and add backside power in 18A". That made 20A seem really ambitious and it probably took longer than if they'd just differentiated the nodes more.
 

Pierce2623

Prominent
Dec 3, 2023
I was always suspicious of nodes like Intel 4 and 20A. It just seemed weird to have those nodes, when only a couple of their products were using them.

The other weird thing about 20A vs. 18A is how they both have the same features. It's not even like Intel said "we'll do GAA in 20A and add backside power in 18A". That made 20A seem really ambitious and it probably took longer than if they'd just differentiated the nodes more.
OK, so the thing with Intel 4 and 20A is that they were limited early versions of their respective node families that only had the libraries for a compute tile, not the entire SoC. They were supposed to be stepping stones into each series of nodes, and instead they just turned out to be garbage.
 

bit_user

Titan
Ambassador
OK, so the thing with Intel 4 and 20A is that they were limited early versions of their respective node families that only had the libraries for a compute tile, not the entire SoC.
Well, yes, although even if Meteor Lake had used Intel 3 and Arrow Lake had used 18A, those nodes would still only have been used for the compute tile.

They were supposed to be stepping stones into each series of nodes, and instead they just turned out to be garbage.
The thing about a stepping stone is that you move off of it and onto the next node. However, when you make a product offering on a node, you have to scale up production and sustain it for years. That's much different and more costly than the internal development nodes they create.

Intel reportedly had yield problems on Intel 4 and didn't want to spend time optimizing it, because it's not their main production node. That resulted in Meteor Lake being fairly unprofitable for them. That tells me it's probably a bad idea to have a node for (a limited part of) one specific product. You want a node that's going to be high-volume, so you can justify the investments into refining it to be high-yielding and profitable.

I think this highlights a fundamental flaw in the "5 nodes in 4 years" strategy, which is that it's too expensive. Intel spent more than half a decade milking their 14nm family. They also spent about 5 years milking their 10 nm family, if you start counting all the way back from Ice Lake and include the derivative ESF node used for Raptor Lake. If you suddenly switch to a model of essentially making a new node each year, not only are you pushing the R&D teams, but you also have much less time to refine & recoup your investment on these nodes. Even if we lump together Intel 4 + 3 and 20A + 18A, to be equivalent with how I grouped those earlier nodes, it's still a more compressed timeline for recouping an investment that we're told is getting ever bigger.

BTW, I really wonder how much worse Arrow Lake would've been on Intel 3. If it truly is a 3nm-class node, then it should be mostly comparable to TSMC N3B (which was their first-gen 3nm process).
 
Last edited:
Oct 29, 2024
Well, yes, although even if Meteor Lake had used Intel 3 and Arrow Lake had used 18A, those nodes would still only have been used for the compute tile.

So spot on. And the copers need to realise that you cannot take both stances with a straight face: either the grand plan was genius or the cancellation was genius. They can't both be genius. They can both be not genius, though.

This conclusion follows from pure deduction and logical reasoning. You don't even have to know anything about digital design.