News Report: Intel Battlemage arriving in 2024, Arrow Lake will consume 100W less power than 14th Gen, overclocking unaffected by latest Raptor Lake microcode...

"The updated process [node] will eleminate[sic] previous high voltage issues, ensuring stability."
I'm trying to wrap my brain around this sentence. I suppose it's already known that 13th and 14th gen defects are at the hardware level, but can a microcode fix genuinely solve it without causing any performance penalty?
 
The concept is that the voltage being requested was spiking higher than it was supposed to due to errors in the Thermal Velocity Boost code, allowing excess voltage while the CPU was hot.

A fix is as simple as preventing those voltages from being called for until the CPU temperatures are normal. As long as the CPU boost behavior remains similar, performance impact should be minimal.
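
Roughly speaking, the gating would look something like this. To be clear, this is just an illustrative sketch, not Intel's actual microcode; the threshold, voltages, and names are all invented:

```python
# Illustrative sketch only -- not Intel's actual microcode.
# The threshold, voltages, and names below are invented for this example.
TVB_TEMP_LIMIT_C = 70.0  # hypothetical eTVB temperature threshold
V_BOOST = 1.50           # voltage requested for the top boost bin
V_NORMAL = 1.40          # capped request while the die is hot

def request_core_voltage(die_temp_c: float) -> float:
    """Only allow the boost-bin voltage while the die is cool.

    The reported eTVB bug effectively skipped this temperature check,
    so the boost voltage could still be requested on a hot die.
    """
    if die_temp_c <= TVB_TEMP_LIMIT_C:
        return V_BOOST
    return V_NORMAL  # fixed behavior: clamp the request until temps recover
```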

It doesn't solve the damage that was done though.

Also, yay Battlemage! I have high hopes for an ASRock B770 as a replacement for my 3080 Ti. I want to put my last EVGA card up on a shelf.
 
I'm trying to wrap my brain around this sentence. I suppose it's already known that 13th and 14th gen defects are at the hardware level, but can a microcode fix genuinely solve it without causing any performance penalty?
Hypothetically, if the only two issues were indeed the via oxidation problem that was supposedly fixed and the elevated voltages, then it would be down to the silicon lottery whether there'd be any performance downsides from the microcode changes. Some CPUs may not boost as high as a result, while others may not see any differences outside of potentially running cooler. Ultimately we'll have to see. I'm certain there are more changes beyond the eTVB fix that we're not being told about yet; until people get their hands on the updates, who knows what could happen.
 
Why would Intel, an American company, reveal its secret plans to a Chinese journalist at an event no one's ever heard of?
These are all good guesses and they might turn out to be true, but that doesn't mean this person knows what they're talking about.
 
I'm trying to wrap my brain around this sentence. I suppose it's already known that 13th and 14th gen defects are at the hardware level, but can a microcode fix genuinely solve it without causing any performance penalty?
If the main cause of the progressive instability is exposure to voltages over 1.5V, then AMD has this known hardware defect much worse: 1.35V can cause parts of their chips to explode. And we don't even have to mention Nvidia. Could you imagine one of their GPUs at 1.5V?

But really, it isn't already known that 13th and 14th gen are inherently defective at the hardware level. Why are you passing that conjecture off as established fact? The exact causes and extent of damaged hardware from each are not yet determined.
 
We'll have a good idea of baseline Battlemage performance when LNL launches next month. Personally, I'm more interested in whether they've solved the hardware issues that cause Alchemist's performance to vary wildly than in the overall performance increase. It's one of those things where being 50% faster in the cases where it performs below a 6600 isn't a big leap, compared to being 50% faster in the cases where it performs like a 3070.

As for ARL power consumption, I doubt it'll be that much lower in desktop form unless we're comparing against unrestricted RPL. I just don't see Intel going from ~250W to ~150W parts, so if that does happen to be true it'll be workload specific. There are a lot of things they may be able to do dynamically to lower overall power consumption though. I base this opinion on a couple of RPL facts:
  • APO has shown higher performance with lower power consumption
  • It's possible to tune the power consumption to meet or exceed AMD's efficiency in heavy/light workloads, but not both at the same time
 
But really, it isn't already known that 13th and 14th gen are inherently defective at the hardware level. Why are you passing that conjecture off as established fact? The exact causes and extent of damaged hardware from each are not yet determined.
Maybe it's my headache that's giving me trouble, but what I meant was it sounds a little too celebratory. If the microcode fix currently being rolled out resolves the problem with minimal setback, then why brag that the next chip doesn't have the fault? Let's suppose Arrow Lake does have the defect. Would it matter now that Intel has a fix for it... assuming Intel does have a fix? Apparently Intel thinks so, and that worries me, because you only do a victory dance about the new node being defect-free if the current node's microcode update is not perfect.
 
Why would Intel, an American company, reveal its secret plans to a Chinese journalist at an even no one's ever heard of?
These are all good guesses and they might turn out to be true but that doesn't mean this person knows what they're talking about
This was a confidential presentation to Asus, so the information is likely accurate if it was reported correctly.
 
Maybe it's my headache that's giving me trouble, but what I meant was it sounds a little too celebratory. If the microcode fix currently being rolled out resolves the problem with minimal setback, then why brag that the next chip doesn't have the fault?
You're conflating two different things: a bug in the algorithm used to determine operating voltage, and high voltage being required for high clocks. A properly functioning RPL part is still going to be demanding 1.5V+ for maximum boost. To put that in perspective: putting 1.5V through the 6th Gen HEDT part I use would very likely fry it almost immediately. If you look at Zen 4 voltages, they go well over 1.4V to get their boost clocks (I'm not sure if they hit 1.5V, as I don't have any Zen 4). Hypothetically speaking, if Intel is able to pull something like 5.7GHz at ~1.3V across all CPUs, that would be a game changer.
 
You're conflating two different things: a bug in the algorithm used to determine operating voltage, and high voltage being required for high clocks. A properly functioning RPL part is still going to be demanding 1.5V+ for maximum boost. To put that in perspective: putting 1.5V through the 6th Gen HEDT part I use would very likely fry it almost immediately. If you look at Zen 4 voltages, they go well over 1.4V to get their boost clocks (I'm not sure if they hit 1.5V, as I don't have any Zen 4). Hypothetically speaking, if Intel is able to pull something like 5.7GHz at ~1.3V across all CPUs, that would be a game changer.
OK, then please correct me if I'm misunderstanding. You're suggesting this enables higher overclocks on Arrow Lake versus Raptor Lake. I'm interpreting the phrase "previous high voltage issues" in the article as referencing the microcode bug, not necessarily leading to higher clocks.
 
OK, then please correct me if I'm misunderstanding. You're suggesting this enables higher overclocks on Arrow Lake versus Raptor Lake.
No, I'm saying that they're likely referring to required operating voltage. They're not necessarily going to be running higher clock speeds; in fact, I highly doubt they will at all, just lower voltage to reach existing ones.
I'm interpreting the phrase "previous high voltage issues" in the article as referencing the microcode bug, not necessarily leading to higher clocks.
You're reading too much into a machine translation of a post to Weibo. The only logical reason for them to be talking about voltage with regards to the process node is if it requires less voltage to maintain clockspeeds.
 
No, I'm saying that they're likely referring to required operating voltage. They're not necessarily going to be running higher clock speeds; in fact, I highly doubt they will at all, just lower voltage to reach existing ones.

You're reading too much into a machine translation of a post to Weibo. The only logical reason for them to be talking about voltage with regards to the process node is if it requires less voltage to maintain clockspeeds.
OK. I take it they're talking about the Raptor Lake bug, but I guess we'll see how it pans out soon enough.
 
Feel free to believe whatever you want, but if you're right, that means they're directly contradicting the official line from Intel. It makes zero sense for Intel to put themselves in legal jeopardy for some presentation in China with Asus.
I don't see how that is the case. I just think they're trying to talk up Arrow Lake and may have inadvertently talked down the microcode fix as a result. As for legal jeopardy, I think they crossed that line a long time ago. It IS Intel, after all. But I don't wish to argue. We've said our piece, and I genuinely want to see how it turns out. I'd rather be wrong on this.
 
I'm trying to wrap my brain around this sentence. I suppose it's already known that 13th and 14th gen defects are at the hardware level, but can a microcode fix genuinely solve it without causing any performance penalty?
Only a limited number of 13th gen CPUs (and 13th gen only) produced last year have via oxidation issues due to a problem with the process node. The main issue is a messed-up VID table, allowing the CPU to essentially overvolt and fry components that can't handle those voltages due to how the CPU is designed.

These are the official Intel statements as reported by tech outlets like GN. It's hard for the average person to confirm or deny, but at least officially, no, via oxidation is NOT the main issue here. Please do not spread unconfirmed information around.

On the topic of them "bragging" about Arrow Lake not having the issue: where the heck? That's not a darn brag, that's a statement. And you don't know the context of the statement (if we consider the story to be true); what if the Asus rep asked about the issue? In that case there is no downplaying of the fix or any nonsense like that, and even without that, I frankly don't see it. Let's say they were asked, though. If your customer asks "will your next product have the same error?", then of course you would answer "no, no, it's fine!", provided you aren't lying about it (and to be quite honest, I wouldn't put it past ANY big corp to lie about something like this... though in this case it might very well bite them in the butt if they did and it can be proven they did, again provided the story is real in the first place). Another possibility would be them having said that Arrow Lake won't come with the issue out of the box. There are frankly too many possible ways things could have gone down to make any accusations of anything here.

On the actual topic: 100W less power draw would be a humongous improvement if real. It looks like both companies' focus is energy efficiency this gen, which would be really good. The jury is still out, of course, considering it's basically a rumor, but it would be quite nice if true.
 
A reduction in power consumption of "at least 100W" sounds good. However, even if this turns out to be a true statement (which we don't know yet), it might not be as great a power reduction as one might think. The highest officially specced turbo power of the 14900K is 253W (official data listed by Intel). What we don't know is whether Arrow Lake will consume 100W less only at base power or at max turbo. So it could be a 25W part or a 153W part, depending on the starting point from which those 100W have been subtracted.
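
To make the two readings concrete, here's the quick arithmetic. The 125W/253W figures are Intel's official base/turbo specs for the 14900K; applying the rumored subtraction to each is just my own illustration:

```python
# 14900K official power specs (per Intel): base vs. max turbo.
PBP_W = 125  # Processor Base Power
MTP_W = 253  # Maximum Turbo Power

RUMORED_REDUCTION_W = 100  # "at least 100W less", per the leak

print(PBP_W - RUMORED_REDUCTION_W)  # 25  -> if measured against base power
print(MTP_W - RUMORED_REDUCTION_W)  # 153 -> if measured against max turbo
```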
 
A reduction in power consumption of "at least 100W" sounds good. However, even if this turns out to be a true statement (which we don't know yet), it might not be as great a power reduction as one might think. The highest officially specced turbo power of the 14900K is 253W (official data listed by Intel). What we don't know is whether Arrow Lake will consume 100W less only at base power or at max turbo. So it could be a 25W part or a 153W part, depending on the starting point from which those 100W have been subtracted.
I think the post above yours shows rather nicely what they mean, tbh.
 
...

Also, yay Battlemage! I have high hopes for an ASRock B770 as a replacement for my 3080 Ti. I want to put my last EVGA card up on a shelf.
Huh!? If it performs at the 3080 Ti level, that will be one thing (it might be within spitting distance, but probably closer to a non-Ti 3080?), but I really doubt it would be a worthwhile upgrade.

I can understand wanting to preserve your EVGA GPU, though. :)
 
Define "at least 100W less"; is this peak, typical/average under moderate loads, or what? I'd naturally assume it's peak. That's great, except 300W minus let's say 125W is still 175W, which is great if that's at the same or better performance, but Raptor Lake left a lot of room for improvement in the power efficiency arena, so this isn't saying much. With SMT going away, I don't see highly multi-threaded perf increasing but indeed the power savings would be highly appreciated. So, MT perf is really going to rely on the combination of increased per-core IPC and stuffing more cores under the hood.
 
Huh!? If it performs at the 3080 Ti level, that will be one thing (it might be within spitting distance, but probably closer to a non-Ti 3080?), but I really doubt it would be a worthwhile upgrade.

I can understand wanting to preserve your EVGA GPU, though. :)

The oldest rumors were RTX 4070-like performance, way back before the driver fixes, so that would definitely be a sidegrade. I just think it would be fun to have a less-than-flawless computer experience; I would enjoy the troubleshooting.

If the current rumors are correct, the B770 should be 64 Xe2 cores, or 8192 shaders, on a 256-bit bus.

Those numbers look like the RX 7800 XT's 60 CUs, or between Nvidia's 4070 Ti and 4070 Ti Super (matching the Super's 256-bit bus).

As long as they can get the drivers to work, it should be decent.
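
For the curious, the 8192 figure falls straight out of the rumored core count, assuming Xe2 keeps Alchemist's layout of 128 FP32 lanes per Xe core (16 vector engines × 8 lanes each, which is my assumption here). A quick sanity check:

```python
# Assumes Xe2 keeps Alchemist's 16 vector engines x 8 FP32 lanes per Xe core.
SHADERS_PER_XE_CORE = 16 * 8  # 128

def shaders(xe_cores: int) -> int:
    return xe_cores * SHADERS_PER_XE_CORE

print(shaders(32))  # 4096 -> A770, as shipped
print(shaders(64))  # 8192 -> rumored B770
```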
 
The oldest rumors were RTX 4070-like performance, way back before the driver fixes, so that would definitely be a sidegrade. I just think it would be fun to have a less-than-flawless computer experience; I would enjoy the troubleshooting.

If the current rumors are correct, the B770 should be 64 Xe2 cores, or 8192 shaders, on a 256-bit bus.

Those numbers look like the RX 7800 XT's 60 CUs, or between Nvidia's 4070 Ti and 4070 Ti Super (matching the Super's 256-bit bus).

As long as they can get the drivers to work, it should be decent.
Ah yes, the tinkerer must tinker. :) I can deeply appreciate that.

256-bit GDDR6X?
 
The A770 had 17.5Gbps GDDR6, and the top spec for GDDR6 is 18Gbps, which is what the lower-end RTX cards use.
There's also 20Gbps, which is what AMD used on the high-end 7000 series cards. I don't think 24Gbps ever hit mass production, as there wasn't a market for it. GDDR7 and the AI craze really did a number on additional memory production.
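
Since peak bandwidth is just bus width times per-pin data rate, it's easy to see what each speed grade would buy on the rumored 256-bit bus (the B770 pairing below is hypothetical):

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical memory bandwidth in GB/s: width * rate / 8 bits-per-byte."""
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gb_s(256, 17.5))  # 560.0 -> A770, as shipped
print(bandwidth_gb_s(256, 20.0))  # 640.0 -> hypothetical 20Gbps B770
```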
 