News Intel Looks to Create Gold With its Arc Alchemist and Xe HPG Architecture

InvalidError

Titan
Moderator
Interesting. Have any AIB board makers confirmed they'll be working with Intel GPUs yet?
Leaks earlier this year pointed more toward DG2 being a mobile part at first, so it could be a while before it becomes available for desktops in any meaningful quantities. Another leak alleges that Intel only ordered a few million GDDR6 chips and with each GPU requiring at least four of those, that would point towards an initial production batch well under a million total units.

If ARC follows in i740's footsteps, the desktop parts will either be manufactured in-house or on-contract and there won't be custom models.
 
Leaks earlier this year pointed more toward DG2 being a mobile part at first, so it could be a while before it becomes available for desktops in any meaningful quantities. Another leak alleges that Intel only ordered a few million GDDR6 chips and with each GPU requiring at least four of those, that would point towards an initial production batch well under a million total units.

If ARC follows in i740's footsteps, the desktop parts will either be manufactured in-house or on-contract and there won't be custom models.
Intel got Asus and a couple of other AIB partners to make the DG1, which is basically the modern equivalent of the i740. Except DG1 was always planned to pave the way for DG2, aka Alchemist. If Intel tries to do all the cards internally, I think this will be a bust. If it can do a million Arc GPUs, though, that would actually be a sizeable number. Officially, no partners have been announced, but I suspect the same companies that made DG1 models will also do Arc, and possibly other companies besides. If Arc is good, at least, which is the real question.
 
  • Like
Reactions: gargoylenest

VforV

Respectable
BANNED
Oct 9, 2019
But RTX cards already make up a huge chunk of the high-end gaming PC market, probably around 80% or more
The high end is itself less than 15% of the entire GPU gaming market, so really a minority. Let's get that fact straight.

That being said, I'm actually impressed with the quality of XeSS, but I will still need to see actual FPS performance at that quality level. And I like that they want to go open source at some point, unlike Nvidia. That's already a +1 from me.

Here is the unlisted XeSS short video:
View: https://www.youtube.com/watch?v=Hxe4xFKMqzU
 
The high end is itself less than 15% of the entire GPU gaming market, so really a minority. Let's get that fact straight.

That being said, I'm actually impressed with the quality of XeSS, but I will still need to see actual FPS performance at that quality level. And I like that they want to go open source at some point, unlike Nvidia. That's already a +1 from me.

Here is the unlisted XeSS short video:
View: https://www.youtube.com/watch?v=Hxe4xFKMqzU
We're talking about a GPU that will very likely cost more than $400, at least for the higher spec versions. Hence, it's high-end and entirely relevant. The 60% of the market (on Steam) that seems to mostly play lightweight titles doesn't need XeSS and won't get it. It's the high-end games that push hardware that need things like DLSS, FSR, and/or XeSS. Nvidia dominates the high-end, and even the mid-range (though not with RTX on mid-range). So, when a developer has a choice between:

A) Nvidia DLSS, catering to a large chunk of the potential audience
B) AMD FSR that works with everything but doesn't look quite as good
C) Intel XeSS, which may work with everything but comes from the company that has never made a successful dedicated GPU

...Which will the developer choose? If DLSS looks best and already covers most of your intended audience, it's not the wrong choice. FSR might be fine as well, XeSS is currently totally unproven, just like Intel's track record on GPUs. I do hope XeSS works well and gains traction and opens up the potential for non-RTX GPUs to get DLSS-like image quality and performance. I'm just skeptical that Intel will actually make that happen.
 

Giroro

Splendid
When Raja Koduri moved to Intel, he got to start making better money. AMD got to start making better graphics cards, and Intel is getting a GPU that they'll be able to rebadge and resell for the next 4+ generations.
It was your classic win-win-win.
 

VforV

Respectable
BANNED
Oct 9, 2019
We're talking about a GPU that will very likely cost more than $400, at least for the higher spec versions. Hence, it's high-end and entirely relevant. The 60% of the market (on Steam) that seems to mostly play lightweight titles doesn't need XeSS and won't get it. It's the high-end games that push hardware that need things like DLSS, FSR, and/or XeSS. Nvidia dominates the high-end, and even the mid-range (though not with RTX on mid-range). So, when a developer has a choice between:

A) Nvidia DLSS, catering to a large chunk of the potential audience
B) AMD FSR that works with everything but doesn't look quite as good
C) Intel XeSS, which may work with everything but comes from the company that has never made a successful dedicated GPU

...Which will the developer choose? If DLSS looks best and already covers most of your intended audience, it's not the wrong choice. FSR might be fine as well, XeSS is currently totally unproven, just like Intel's track record on GPUs. I do hope XeSS works well and gains traction and opens up the potential for non-RTX GPUs to get DLSS-like image quality and performance. I'm just skeptical that Intel will actually make that happen.
I disagree with some things:

1. $300-$400 is the new low end and $500-$600+ is the new middle tier, based on actual street prices and not the MSRP fairy tales that we will (probably) never see again. Next-gen GPUs will be even more expensive, just wait and see.
So your definition of high end is no longer accurate, and frankly I'm amazed you don't see this by now... :rolleyes:

2. You say only the high end needs DLSS/XeSS/FSR. Again wrong: even today, so-called high-end GPUs like the 3080/3090/6800 XT/6900 XT struggle to stay above 60 fps at 1080p (!) Ultra with RT in a few games (sure, there are more in which they do great, but some are barely playable even now), so they need these techniques to reach a smooth 60+ fps. That means the GPUs below them absolutely need those DLSS-like features for 60 fps. Again, this is 1080p, not 4K.

People are so shortsighted; they really need to wake up and realize that if there are 2-3 games TODAY that make a 3090 cry at 1080p Ultra + RT and force it to use DLSS to stay above 60 fps, then the games of tomorrow, next year, and 2 years from now, made exclusively for NEXT gen rather than cross-gen as we have now, will destroy these $1000-$2000+ GPUs! So yeah, from the 3080 down to the 3060 and the equivalent tiers from AMD and soon Intel, all of them at all resolutions will need DLSS or XeSS or FSR (see the quick pixel-count sketch at the end of this post for why upscaling buys so much headroom).

3. Something tells me that this time Intel is not lying and the quality of XeSS is at least as good as DLSS 2.0. Maybe DLSS 2.2 still has a small edge on it, but I suspect it will be too insignificant to matter. Intel is absolutely "forced" to make this tech as good as DLSS and as open as FSR if they want to break into the GPU market and disrupt it, and I think they will.

Now, with XeSS and FSR around, DLSS will become less and less relevant, to the point that if Jensen does not make it open source too, it will be a non-factor in 2 years' time.
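
To put rough numbers on point 2: these upscalers render at a reduced internal resolution and reconstruct the output, so most of the per-pixel shading cost scales with the internal pixel count. Here is a quick sketch using the commonly published per-axis scale factors for Quality/Balanced/Performance modes of DLSS 2- and FSR 2-style upscalers (roughly 67%/58%/50%); treat it as an illustration of the pixel math only, since real frame-time savings are smaller because not all GPU work scales with resolution.

```cpp
#include <cstdio>

// Rough illustration: fraction of output pixels actually shaded at each
// internal render scale. The scale factors are the commonly published
// per-axis values for Quality, Balanced and Performance upscaler modes.
int main() {
    const int out_w = 1920, out_h = 1080;                  // 1080p output
    const double scales[] = {1.00, 0.67, 0.58, 0.50};      // per-axis scale
    const char*  names[]  = {"Native", "Quality", "Balanced", "Performance"};

    for (int i = 0; i < 4; ++i) {
        double pixels = (out_w * scales[i]) * (out_h * scales[i]);
        printf("%-12s ~%4.1f%% of native pixels shaded\n",
               names[i], 100.0 * pixels / (out_w * out_h));
    }
    // Quality ~44.9%, Balanced ~33.6%, Performance ~25.0% of the native
    // pixel count -- that is where the big fps headroom comes from.
    return 0;
}
```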
 

Howardohyea

Commendable
May 13, 2021
I disagree with some things:
1. $300-$400 is the new low end and $500-$600+ is the new middle tier, based on actual street prices and not the MSRP fairy tales that we will (probably) never see again. Next-gen GPUs will be even more expensive, just wait and see.
So your definition of high end is no longer accurate, and frankly I'm amazed you don't see this by now...
Not a lot of people spend that much money every generation to upgrade, and many more don't even care about computers. A lot of people are still using Pascal and low-end Turing chips, which cost way less than 300 bucks before the GPU market got screwed up.

Anyways, I think Intel made the most out of their manufacturing process situation by booking TSMC. Seems to me Intel would rather lose profit than lose market share, but I have no idea what the heck they're thinking with the 11900K.
 
  • Like
Reactions: renz496
Now, with XeSS and FSR around, DLSS will become less and less relevant, to the point that if Jensen does not make it open source too, it will be a non-factor in 2 years' time.

If you look at the info available to us so far, XeSS is a lot more complicated than saying "it will work on any hardware". There are some caveats to what Intel has to do to make it "work" on other GPUs, including some older ones. Also, being open source does not make it any easier to use or develop for.

The best version of XeSS will still only be available on Intel GPUs even if it is open source. Remember, Intel is using its own XMX instructions so XeSS can take advantage of dedicated hardware inside Intel GPUs that is very similar to Nvidia's tensor cores, and those XMX instructions are currently proprietary to Intel GPUs. The more universal version of XeSS will use the DP4a instruction. In simple terms, this is the non-tensor-core-accelerated version of XeSS, and it will definitely be much slower at processing the AI upscaling than the XMX version.

So which GPUs have the DP4a instruction? For Intel, that would be some of their more recent iGPUs. On the Nvidia side it's Pascal, although it seems DP4a is only supported on GP106 and above. On the AMD side, RDNA 2 seems to have the capability to support DP4a. And even this raises another question, because to access DP4a on an Nvidia card you need to go through CUDA, while on the AMD side it is still unknown because both RDNA and RDNA 2 still have no ROCm support. So even if the DP4a version of XeSS should work on more GPUs, Nvidia and AMD will probably have to do the work themselves so XeSS can access the DP4a instruction on their hardware. I doubt it is as simple as dropping in some files (a la FSR) and having it work on just about anything. If it were that easy, Intel would probably have already demoed XeSS on non-Intel hardware to showcase its strength over something like DLSS.

So what about the version of XeSS that can use tensor-core-like hardware? Intel can open source XeSS and give AMD and Nvidia access to the XMX instructions it uses on its GPUs, but how useful would that be to Nvidia and AMD? Those instructions are made to run on Intel hardware. Maybe with some modification they could work on Nvidia's tensor cores, but that modification work would most likely need to be done by Nvidia itself, and will they do it? In a way it is similar to how Nvidia's CUDA can be made to run on non-Nvidia hardware, even on x86 processors. Did Intel or AMD ever put in the effort to make CUDA run well on their hardware? This is the same problem DLSS is going to have even if it were open source.
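
For reference, DP4a is just a packed 8-bit dot-product-accumulate instruction, which is why that fallback path is so much slower than dedicated matrix hardware like XMX or tensor cores. Below is a minimal scalar sketch of what a single DP4a operation computes; the lane math matches the documented behavior of Nvidia's __dp4a CUDA intrinsic, but the code itself is only an illustration, not anything from Intel's XeSS SDK.

```cpp
#include <cstdint>
#include <cstdio>

// Scalar reference for a DP4a operation: treat each 32-bit operand as four
// packed signed 8-bit lanes, multiply lane-wise, and add all four products
// to a 32-bit accumulator. Nvidia exposes this as the __dp4a() intrinsic on
// supported Pascal and later GPUs; matrix units like XMX or tensor cores
// instead do whole tiles of these multiply-accumulates per clock, hence the gap.
int32_t dp4a(uint32_t a, uint32_t b, int32_t acc) {
    for (int lane = 0; lane < 4; ++lane) {
        int8_t av = static_cast<int8_t>((a >> (8 * lane)) & 0xFF);
        int8_t bv = static_cast<int8_t>((b >> (8 * lane)) & 0xFF);
        acc += static_cast<int32_t>(av) * static_cast<int32_t>(bv);
    }
    return acc;
}

int main() {
    uint32_t a = 0x04030201;              // lanes 1, 2, 3, 4
    uint32_t b = 0x08070605;              // lanes 5, 6, 7, 8
    printf("%d\n", dp4a(a, b, 0));        // 1*5 + 2*6 + 3*7 + 4*8 = 70
    return 0;
}
```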
 
  • Like
Reactions: VforV

VforV

Respectable
BANNED
Oct 9, 2019
If you look at the info available to us so far, XeSS is a lot more complicated than saying "it will work on any hardware". There are some caveats to what Intel has to do to make it "work" on other GPUs, including some older ones. Also, being open source does not make it any easier to use or develop for.

The best version of XeSS will still only be available on Intel GPUs even if it is open source. Remember, Intel is using its own XMX instructions so XeSS can take advantage of dedicated hardware inside Intel GPUs that is very similar to Nvidia's tensor cores, and those XMX instructions are currently proprietary to Intel GPUs. The more universal version of XeSS will use the DP4a instruction. In simple terms, this is the non-tensor-core-accelerated version of XeSS, and it will definitely be much slower at processing the AI upscaling than the XMX version.

So which GPUs have the DP4a instruction? For Intel, that would be some of their more recent iGPUs. On the Nvidia side it's Pascal, although it seems DP4a is only supported on GP106 and above. On the AMD side, RDNA 2 seems to have the capability to support DP4a. And even this raises another question, because to access DP4a on an Nvidia card you need to go through CUDA, while on the AMD side it is still unknown because both RDNA and RDNA 2 still have no ROCm support. So even if the DP4a version of XeSS should work on more GPUs, Nvidia and AMD will probably have to do the work themselves so XeSS can access the DP4a instruction on their hardware. I doubt it is as simple as dropping in some files (a la FSR) and having it work on just about anything. If it were that easy, Intel would probably have already demoed XeSS on non-Intel hardware to showcase its strength over something like DLSS.

So what about the version of XeSS that can use tensor-core-like hardware? Intel can open source XeSS and give AMD and Nvidia access to the XMX instructions it uses on its GPUs, but how useful would that be to Nvidia and AMD? Those instructions are made to run on Intel hardware. Maybe with some modification they could work on Nvidia's tensor cores, but that modification work would most likely need to be done by Nvidia itself, and will they do it? In a way it is similar to how Nvidia's CUDA can be made to run on non-Nvidia hardware, even on x86 processors. Did Intel or AMD ever put in the effort to make CUDA run well on their hardware? This is the same problem DLSS is going to have even if it were open source.
Thanks for the info.

I believe that, out of the 3, the best one will be the most adopted one, and I'm talking 5 years from now. I don't see all 3 of them having the same adoption and success over the years. Some kind of equality and equilibrium between DLSS vs XeSS vs FSR? It's not gonna be that.

I also think the best one will be open source, and being open source, it will be accelerated in hardware by all 3 GPU makers, if not in that GPU generation then in the next one after it is opened up. Nvidia and Intel already have hardware acceleration, and AMD is supposed to get it starting with RDNA 3. So the hardware will be ready for that to happen.

Maybe I'll be proven wrong, but this is what I think will happen. They will co-exist for some years until one ends up on top.