News Intel Arc May Miss Q1 2022 Launch Window

watzupken

Commendable
Mar 16, 2020
542
238
1,270
1
Given the current situation, delays are always possible. However, the unfortunate fact is that a delay is going to hurt Intel quite badly, since the later they release their dedicated GPUs, the closer those GPUs will come to competing against the next-gen GPUs from AMD and Nvidia. The consolation for Intel is that they will likely still sell out their GPUs (assuming they work well for gaming and mining) and still make money, since all GPU prices are super inflated.
 

shady28

Distinguished
Jan 29, 2007
165
52
18,690
5
Intel missed a big opportunity not having Arc ready last year. They are blowing an opportunity to come in gangbusters and take over market share that Nvidia and AMD can't fill due to low production and high demand.

There's no guarantee that crypto and mining will continue to be a thing this year, lots of reasons to think it won't, it's very possible they could release into a market saturated with new and used GPUs later this year. That would be an unmitigated disaster for them.
 

VforV

Respectable
Oct 9, 2019
475
224
2,070
1
We are living in the age of delays, so nothing new here, but yes, Intel is coming too little, too late.

I've said it multiple times that they should have launched in Q4 2021 at the latest, and now they are slipping to Q2 2022... that's less than 6 months, probably even 4 months, until RDNA3 and Lovelace come with at least 2x the performance of the best GPUs we have today. And Intel is not even going to deliver a competitor for the best GPUs we have today, which means they will be something like 3x behind the top end of next gen. They will look pretty pathetic if this proves true...
 

shady28

Distinguished
Jan 29, 2007
165
52
18,690
5
We are living in the age of delays, so nothing new here, but yes, Intel is coming too little, too late.

I've said it multiple times that they should have launched in Q4 2021 at the latest, and now they are slipping to Q2 2022... that's less than 6 months, probably even 4 months, until RDNA3 and Lovelace come with at least 2x the performance of the best GPUs we have today. And Intel is not even going to deliver a competitor for the best GPUs we have today, which means they will be something like 3x behind the top end of next gen. They will look pretty pathetic if this proves true...
The only thing that is going to matter is supply; price/performance will work itself out. Intel's parts are aimed at the (normally) low and mid range - we're talking 1650 Super to 3070 levels (maybe) of performance.

It won't matter what Nvidia and AMD do if they can't get volume. Intel probably has volume: last year they used their TSMC allocation for HPC on the Aurora supercomputer, but not so much this year. This means they'll be able to knock out serious volume on that TSMC node. Volume is great for taking over market share in a supply-constrained market like we have now.

My real point is, if the constrained-supply issue dissipates - which it will if crypto crashes (and it seems to be trying to do so) - that's when performance/price will matter, when people have choices at relatively low prices.

If crypto shoots back up, then Intel will certainly have no issue selling all it can make at MSRP or higher.

Having said that, we're just talking about desktop dGPUs here. Intel is releasing laptop-based Arc dGPUs right now. Laptop is about 85% of the client market; no idea what % of the dGPU market it is, though.

Nothing is decided here yet because we aren't there yet, but I certainly get the feeling Intel is blowing an opportunity. Crypto seems very cyclical, with a year or two of heavy gains followed by a couple of years of famine. Intel may wind up selling into the famine ;)
 

VforV

Respectable
Oct 9, 2019
475
224
2,070
1
The only thing that is going to matter is supply; price/performance will work itself out. Intel's parts are aimed at the (normally) low and mid range - we're talking 1650 Super to 3070 levels (maybe) of performance.

It won't matter what Nvidia and AMD do if they can't get volume. Intel probably has volume: last year they used their TSMC allocation for HPC on the Aurora supercomputer, but not so much this year. This means they'll be able to knock out serious volume on that TSMC node. Volume is great for taking over market share in a supply-constrained market like we have now.

My real point is, if the constrained-supply issue dissipates - which it will if crypto crashes (and it seems to be trying to do so) - that's when performance/price will matter, when people have choices at relatively low prices.

If crypto shoots back up, then Intel will certainly have no issue selling all it can make at MSRP or higher.

Having said that, we're just talking about desktop dGPUs here. Intel is releasing laptop-based Arc dGPUs right now. Laptop is about 85% of the client market; no idea what % of the dGPU market it is, though.

Nothing is decided here yet because we aren't there yet, but I certainly get the feeling Intel is blowing an opportunity. Crypto seems very cyclical, with a year or two of heavy gains followed by a couple of years of famine. Intel may wind up selling into the famine ;)
I'm certainly not one to advocate for Nvidia, but in this regard I'm pretty sure Nvidia will have the highest volume in discrete GPUs. Maybe not so much in laptops, though, so I can see Intel having more share there.

Either way, regardless of availability, the issue I underlined above is this: if they launch in Q2 and their best GPU is, in the best-case scenario, at 3070 Ti level... well, in Q3 the Lovelace and RDNA3 cards in the same class as a 3070 Ti will have more than 2x the performance at close to the same price as Arc. So in that case Arc would need to be half the price to matter, if that happens.

Would you buy a (theoretically) $500 Arc GPU equivalent to a 3070 Ti when you can buy a 4070 (Ti) for $600? Or an RX 7700 XT/7800 for $600?
Assuming they keep that price difference even at scalper prices, no one in their right mind would buy a whole-generation-older GPU at almost the same price as the new one... well, except miners, probably. So if that happens, that 3070 Ti-class Arc GPU would need to be $300.
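The break-even arithmetic behind that $300 figure can be sketched quickly. All numbers here are the hypothetical prices from this thread (a $600 next-gen card at an assumed 2x uplift), not real market data:

```python
# Hypothetical numbers from the discussion above, not confirmed prices.
# If a next-gen card offers ~2x the performance at a similar price, an
# older-gen card must cost proportionally less to match its
# price-per-performance.

def matching_price(next_gen_price, next_gen_perf_ratio):
    """Price an older card needs to hit to match the price/performance
    of a next-gen card that is `next_gen_perf_ratio` times faster."""
    return next_gen_price / next_gen_perf_ratio

# A $600 "4070"-class card at 2x the performance of a 3070 Ti-class Arc:
arc_price = matching_price(600, 2.0)
print(arc_price)  # 300.0 -- matches the $300 figure in the post
```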
 

shady28

Distinguished
Jan 29, 2007
165
52
18,690
5
I'm certainly not one to advocate for Nvidia, but in this regard I'm pretty sure Nvidia will have the highest volume in discrete GPUs. Maybe not so much in laptops, though, so I can see Intel having more share there.

Either way, regardless of availability, the issue I underlined above is this: if they launch in Q2 and their best GPU is, in the best-case scenario, at 3070 Ti level... well, in Q3 the Lovelace and RDNA3 cards in the same class as a 3070 Ti will have more than 2x the performance at close to the same price as Arc. So in that case Arc would need to be half the price to matter, if that happens.

Would you buy a (theoretically) $500 Arc GPU equivalent to a 3070 Ti when you can buy a 4070 (Ti) for $600? Or an RX 7700 XT/7800 for $600?
Assuming they keep that price difference even at scalper prices, no one in their right mind would buy a whole-generation-older GPU at almost the same price as the new one... well, except miners, probably. So if that happens, that 3070 Ti-class Arc GPU would need to be $300.
Double the performance is highly optimistic speculation, for one. A normal boost would be 20-30% within a tier, i.e. a 4060 = 3070 and a 4070 = 3080. We might get more, we might get less, but this is what I would expect.

The reality right now is that you can go to stockx.com and buy a 3070 for $1,000, while a 3060 runs around $700. These are the lower prices at the moment; normal retail, when you can find it, is usually 10-20% higher.

If we combine a reasonable performance assumption for next-gen GPUs with the reality of market prices, then what you're looking at is something like:

A 4060 at 3060 prices ($700) = 3070 performance for $700, which is more than a high-end Arc price of $650.

So unless the supply issues abate, Intel will have no trouble getting $650 for 3070-level performance even after a theoretical 4060 release.
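That comparison can be laid out as simple price-per-performance arithmetic. The street prices are the stockx figures quoted above, while the 4060 and Arc entries are the post's own assumptions (a 4060 at 3070-level performance, and a hypothetical $650 high-end Arc), not confirmed products:

```python
# Price/performance sketch using the numbers assumed in this post.
# Performance is normalized so a 3070 = 1.0; prices are the quoted
# street/assumed prices, not MSRPs.

cards = {
    "3070 (street)":       {"price": 1000, "perf": 1.00},
    "3060 (street)":       {"price": 700,  "perf": 0.75},
    "4060 (theoretical)":  {"price": 700,  "perf": 1.00},  # ~3070 level
    "Arc high-end (est.)": {"price": 650,  "perf": 1.00},  # ~3070 level
}

for name, card in cards.items():
    perf_per_kilobuck = 1000 * card["perf"] / card["price"]
    print(f"{name:22s} perf per $1000: {perf_per_kilobuck:.2f}")

# The Arc entry comes out slightly ahead of even the theoretical 4060
# at these street prices, which is the point being made: at $650 for
# 3070-level performance, Arc stays competitive.
```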

On a related note, I'm not sure where the idea that these new releases will ease supply constraints comes from. Nvidia went to Samsung specifically for more volume and lower cost, and now they are going back to TSMC. If anything, this is going to put more pressure on TSMC. Basically every GPU in existence will be coming off one fab company. This is a formula for disaster.
 

VforV

Respectable
Oct 9, 2019
475
224
2,070
1
Double the performance is highly optimistic speculation, for one. A normal boost would be 20-30% within a tier, i.e. a 4060 = 3070 and a 4070 = 3080. We might get more, we might get less, but this is what I would expect.
This is where we part ways, because based on past leaks I actually believe MLID when he says it will be about 2x the performance of this gen, especially top tier vs top tier. He's been saying this for a while now, and he was right on many occasions in the past with other leaks, so I have no reason to doubt it now. A few of the trusted Twitter leakers are saying the same things...

So based on that, we already know what's coming, at least in the bigger picture. It will be 2x the performance or even more, from both Nvidia and AMD. There is a reason RDNA3 is going MCM before everyone else. The level of performance increase will make Turing and RDNA2 seem like expensive GPU bricks, especially in the controversial RT feature (above 2x perf), which will work much better on next-gen GPUs.

Because of that I stand for what I said above. Arc in Q2 2022 is too little too late at 3070ti level, best case scenario.
 

JarredWaltonGPU

Senior GPU Editor
Editor
Feb 21, 2020
1,038
924
2,070
1
This is where we part ways, because based on past leaks I actually believe MLID when he says it will be about 2x the performance of this gen, especially top tier vs top tier. He's been saying this for a while now, and he was right on many occasions in the past with other leaks, so I have no reason to doubt it now. A few of the trusted Twitter leakers are saying the same things...

So based on that, we already know what's coming, at least in the bigger picture. It will be 2x the performance or even more, from both Nvidia and AMD. There is a reason RDNA3 is going MCM before everyone else. The level of performance increase will make Turing and RDNA2 seem like expensive GPU bricks, especially in the controversial RT feature (above 2x perf), which will work much better on next-gen GPUs.

Because of that I stand for what I said above. Arc in Q2 2022 is too little too late at 3070ti level, best case scenario.
MLID has been BADLY wrong so many times it's hardly even worth discussing. Remember the $150-$200 "RDNA Leak" back in December 2019? Laughable, but he went with it. The cards launched at $400 (after being dropped $50 at the last minute). When you fire multiple shotgun blasts at a target, you'll inevitably get a few hits, but YouTube is notorious for amplifying fake news and speculation.

Yes, Nvidia is going from what is effectively 10nm class Samsung to 5nm class TSMC. That will help a lot. But most likely Nvidia will use the shrink to create smaller chips that are 50% faster than the previous generation at best. And it still needs to increase memory bandwidth a similar amount to scale that much. More than a 256-bit interface is costly, and 384-bit is basically the limit, which means there's a good chance memory bandwidth only increases a bit while GPU compute increases more.

But again, the real problem is that the cost of TSMC N5 is going to be more than double the cost per square millimeter of Samsung 8N. So even if Nvidia wants to do a big 600+ mm^2 chip like GA102, and even if it could feed it with enough memory bandwidth, it will end up being way more expensive than a 3090. So Nvidia will balance performance increases with die size and cost, and probably go after something like an "RTX 4080" that performs 20-30% better than a 3080 with a theoretical price of maybe $999 and a die size closer to 400mm^2. Hopper will still get a massive chip, but that's because Hopper will only go into supercomputers and maybe workstations and those can handle the $15,000 price tag.
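The die-cost argument above is simple arithmetic, and it can be made concrete. The 2.2x cost multiplier and the 400 mm^2 target are illustrative stand-ins for "more than double" and "closer to 400mm^2" in the post; GA102's actual size is about 628 mm^2. None of these are disclosed foundry prices:

```python
# Rough per-die silicon cost comparison implied above: TSMC N5 at more
# than 2x the cost per mm^2 of Samsung 8N. The 2.2x multiplier is an
# assumed stand-in for "more than double", not a disclosed figure.

COST_RATIO_N5_VS_8N = 2.2  # assumed cost-per-mm^2 ratio, N5 vs 8N
GA102_AREA_MM2 = 628       # approximate GA102 die size

def relative_die_cost(area_mm2, cost_per_mm2=1.0):
    # Silicon area x cost per unit area. Ignores yield, which in
    # practice penalizes large dies even further.
    return area_mm2 * cost_per_mm2

ga102_like = relative_die_cost(GA102_AREA_MM2, 1.0)                  # on 8N
big_n5     = relative_die_cost(GA102_AREA_MM2, COST_RATIO_N5_VS_8N)  # same die on N5
smaller_n5 = relative_die_cost(400, COST_RATIO_N5_VS_8N)             # ~400 mm^2 on N5

print(big_n5 / ga102_like)     # ~2.2x the cost of a GA102-class die
print(smaller_n5 / ga102_like) # ~1.4x -- why a ~400 mm^2 part is plausible
```

The ~1.4x result is the crux of the argument: shrinking the die to roughly 400 mm^2 keeps the cost increase on N5 manageable, whereas repeating a GA102-sized die would more than double it.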
 

VforV

Respectable
Oct 9, 2019
475
224
2,070
1
MLID has been BADLY wrong so many times it's hardly even worth discussing. Remember the $150-$200 "RDNA Leak" back in December 2019? Laughable, but he went with it. The cards launched at $400 (after being dropped $50 at the last minute). When you fire multiple shotgun blasts at a target, you'll inevitably get a few hits, but YouTube is notorious for amplifying fake news and speculation.

Yes, Nvidia is going from what is effectively 10nm class Samsung to 5nm class TSMC. That will help a lot. But most likely Nvidia will use the shrink to create smaller chips that are 50% faster than the previous generation at best. And it still needs to increase memory bandwidth a similar amount to scale that much. More than a 256-bit interface is costly, and 384-bit is basically the limit, which means there's a good chance memory bandwidth only increases a bit while GPU compute increases more.

But again, the real problem is that the cost of TSMC N5 is going to be more than double the cost per square millimeter of Samsung 8N. So even if Nvidia wants to do a big 600+ mm^2 chip like GA102, and even if it could feed it with enough memory bandwidth, it will end up being way more expensive than a 3090. So Nvidia will balance performance increases with die size and cost, and probably go after something that performs 20-30% better than a 3080 with a theoretical price of maybe $999 and a die size closer to 400mm^2. Hopper will still get a massive chip, but that's because Hopper will only go into supercomputers and maybe workstations and those can handle the $15,000 price tag.
Sure, he got some things wrong, but he also gets a lot of them right. Price is the easiest thing to get wrong, more so these days, so I don't diss the guy because of that.

It's not like Coreteks and all the press (including this site), who believed RDNA2 would be at 2080 Ti level and were wrong, while in the meantime MLID always said to expect at least 3080 performance - and we actually got 3090 performance.

I don't really want to continue a debate now on how many times he was wrong and how many times he was right. The fact is he's getting better and better and has more reliable sources now, than in 2019.

About Lovelace and RDNA3: if Nvidia does not push the chip size, core speeds, IPC and everything in between to the absolute limits, they will lose badly to RDNA3, which can scale up much more easily since it's MCM. That's why we got a 450W 3090 Ti now: Nvidia not only doesn't like to lose, they don't even like parity - they (Jensen, more exactly) "need" to beat the opponent at every possible metric.

For the same reason, the 450W 3090 Ti is also a psychological move to get us used to even higher power usage, so we should not complain when Lovelace arrives at 550W or more, fighting with its monolithic design against an MCM RDNA3 with lower power draw. The top RDNA3 chips will also draw more power than RDNA2, maybe 450W, but still less than what Lovelace will require - and Nvidia will still lose to RDNA3, at least in raster.

How badly Nvidia loses to RDNA3 will depend on how hard they push everything on that monolithic chip.

As for prices, expect even higher ones across the board: whoever wins the next-gen GPU war will ask at least $2,000 MSRP, if not more, for the top halo GPU, and we can expect real prices in shops to be even higher.
 
As for prices, expect even higher ones across the board: whoever wins the next-gen GPU war will ask at least $2,000 MSRP, if not more, for the top halo GPU, and we can expect real prices in shops to be even higher.
Tell us again why Intel should worry?
If Nvidia and AMD are forced to burn all of their wafers on halo products, then Intel will be extremely comfortable selling mid- and low-end GPUs at prices that work out well for them.
Even IF mid/low-end AMD and Nvidia cards are better value for money, Intel will still be the only one with enough product on the market, and people will buy them by default instead of waiting for years - unless Intel also goes for insanely large GPUs that use up way too much wafer...
 
