News Raja Koduri Possibly Hints at June 2020 Intel Xe Graphics Card Release

hannibal

Distinguished
Good! Next year seems to be really interesting in the GPU department. Three players now!

The CPU side is not so interesting. AMD keeps making improvements... Intel is still struggling. Let's see if AMD can buy enough production capacity... Most likely not, so Intel will still sell well because there will not be enough AMD CPUs available for everyone who wants one. But those AMD improvements seem promising, so demand could be huge compared to production!
 

nofanneeded

Respectable
Sep 29, 2019
1,541
251
2,090
Good! Next year seems to be really interesting in the GPU department. Three players now!

The CPU side is not so interesting. AMD keeps making improvements... Intel is still struggling. Let's see if AMD can buy enough production capacity... Most likely not, so Intel will still sell well because there will not be enough AMD CPUs available for everyone who wants one. But those AMD improvements seem promising, so demand could be huge compared to production!

There will never be three players. Intel will focus on onboard graphics; I doubt they can compete in graphics cards at all. They will need many generations to get there, certainly not from the first gen.
 

kinggremlin

Distinguished
Jul 14, 2009
574
41
19,010
There will never be three players. Intel will focus on onboard graphics; I doubt they can compete in graphics cards at all. They will need many generations to get there, certainly not from the first gen.

They're not likely to be competing with the xx80 Ti's of the world out of the gate, but they should be capable of solid midrange and lower cards, which is where the majority of consumers are. From there, it's a question of how they will price it. If they do what AMD is doing, pricing just slightly below Nvidia with a worse feature set, then there's no benefit to a third entrant in the field. If they price well below, putting pricing pressure on Nvidia and AMD, then the consumer benefits.
 

hannibal

Distinguished
It remains to be seen. Intel CPUs are not terribly well priced, but Intel SSDs are reasonably good. The Intel 660p is one of the most popular SSDs around because of its price, so anything is possible in that price category. Where are they aiming?
Are they only interested in workstation/corporate-level customers? Then Intel GPUs can be expensive and they can ride on their name. If they are interested in the consumer market, they have to undercut Nvidia and AMD, because their driver department does not have a stellar reputation. Really hard to say yet.
 

kenjitamura

Distinguished
Jan 3, 2012
195
3
18,695
I hope to see Intel change the GPGPU landscape so that CUDA is no longer the de facto requirement for many GPU-accelerated programs. The only reason I'm using an Nvidia card, really, is that it lets me run certain upscaling programs that use deep convolutional neural network algorithms and require CUDA.
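To illustrate the lock-in: many of these tools are built on frameworks like PyTorch and simply gate their GPU path on CUDA, so anything that isn't an Nvidia card silently falls back to a slow CPU path. A minimal sketch of that pattern (hypothetical script, not any particular upscaler):

```python
import torch

def pick_device() -> torch.device:
    # Typical pattern in CUDA-only tools: use the GPU only if CUDA is present,
    # otherwise drop to the CPU. AMD/Intel GPUs are invisible to this check.
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Sequential(  # stand-in for a convolutional upscaling network
    torch.nn.Conv2d(3, 64, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(64, 3, kernel_size=3, padding=1),
).to(device)

frame = torch.rand(1, 3, 480, 640, device=device)  # dummy input frame
with torch.no_grad():
    out = model(frame)
print(f"ran on {device}, output shape {tuple(out.shape)}")
```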
 

kinggremlin

Distinguished
Jul 14, 2009
574
41
19,010
Intel has not done any magic in its history and will not in the future. Nobody has had a successful first GPU card launch. Intel's first GPU cards and drivers will be buggy as hell.

https://bugzilla.freedesktop.org/buglist.cgi?chfield=[Bug creation]&chfieldfrom=7d

Nvidia popularized the term GPU with the release of the GeForce 256 in 1999, which was advertised as the first GPU. I'm not aware of any company entering the desktop graphics market after 1999, so who are these companies with the failed GPU launches you allude to? The last new entrant into the desktop graphics market pre-GPU was 3dfx, and I don't think anyone would argue they didn't nail their first release, the Voodoo1. So your point isn't accurate there either. Intel is not new to the graphics world; they have been producing graphics chips for over two decades, so I would not expect their drivers to be any less stable than AMD's or Nvidia's. And let's be honest, being as stable as AMD's launch-day drivers is not really setting a very high standard.
 
They're not likely to be competing with the xx80 Ti's of the world out of the gate, but they should be capable of solid midrange and lower cards, which is where the majority of consumers are. From there, it's a question of how they will price it. If they do what AMD is doing, pricing just slightly below Nvidia with a worse feature set, then there's no benefit to a third entrant in the field. If they price well below, putting pricing pressure on Nvidia and AMD, then the consumer benefits.
Eh, not really. Intel needs to match the performance of Turing and RDNA to be competitive, and I mean the architectures.
Sure, Intel can make low-end and midrange cards, but they're not worth it if they're on a massive die with huge power draw and heat output.
If Intel can make competitive midrange and low-end GPUs, then there's no reason why they couldn't scale them up to the high end.
 

Soaptrail

Distinguished
Jan 12, 2015
301
95
19,420
Nice shade the writer is throwing at Intel:

Intel's response to AMD's Ryzen onslaught has typically been sluggish, largely because the company hasn't resorted to cutting prices on existing models. Instead, the company has slowly added more cores to its processor families with the release of new models, with those increased core counts equating to lower per-core pricing.
 

kinggremlin

Distinguished
Jul 14, 2009
574
41
19,010
Eh, not really. Intel needs to match the performance of Turing and RDNA to be competitive, and I mean the architectures.
Sure, Intel can make low-end and midrange cards, but they're not worth it if they're on a massive die with huge power draw and heat output.
If Intel can make competitive midrange and low-end GPUs, then there's no reason why they couldn't scale them up to the high end.
Xe GPUs are going to be Intel's first chips on 7nm, so it is unlikely they will be power-hungry room heaters. Based on Intel's 10nm escapades, it's also highly unlikely we will see massive 600-700 mm²+ die sizes out of the gate on 7nm. We're still waiting on high-end Navi months after the midrange was released.
 

InvalidError

Titan
Moderator
Competition = getting more performance for less money!!

The time of monopolistic prices is, for now, over.
That depends on how bad manufacturing lead times are going to get. If AMD and Intel can't keep up with server and other high-margin demand growth, it will cut into the availability of low-margin mainstream parts, and prices will go up instead of down. The queue for TSMC's 7nm fabs could keep getting longer until TSMC's 7nm+ is ready for production, so customers can target their designs for 7nm+ if they want to bypass the current 7nm queue.
 
Mar 21, 2019
2
2
15
Xe GPUs are going to be Intel's first chips on 7nm, so it is unlikely they will be power-hungry room heaters. Based on Intel's 10nm escapades, it's also highly unlikely we will see massive 600-700 mm²+ die sizes out of the gate on 7nm. We're still waiting on high-end Navi months after the midrange was released.
Intel said they would use Foveros, so they will probably stitch multiple smaller dies together.
 

kinggremlin

Distinguished
Jul 14, 2009
574
41
19,010
Intel said they would use Foveros, so they will probably stitch multiple smaller dies together.
That should work fine for Intel's enterprise compute cards. No one has yet made that work on a gaming card, we know Nvidia won't be using it with Ampere next year, and no one is predicting AMD will release such a card next year, but Intel will, with their first entry into the dGPU market in two decades? I'm not holding my breath on that one.
 

bit_user

Polypheme
Ambassador
we know Nvidia won't be using it with Ampere next year
How do "we" know that?

and no one is predicting AMD will release such a card next year
I guess GPUs have more room to scale up, since they have more duplicated units than CPUs. As long as you're not trying too hard to ship fully-enabled chips, it seems like you should be able to scale single dies quite a bit further.

Anyway, Vega 20 (of Radeon VII) is 331 mm^2, while Navi 10 is 251 mm^2. So, that's about a 32% size difference and suggests that AMD could easily scale up Navi to >= 53 CU and sell it for <= $700.
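For anyone who wants to check the arithmetic, here's the back-of-the-envelope version, under the simplifying assumption that CU count scales roughly linearly with die area (it doesn't exactly, since memory controllers, display, and media blocks don't grow with CU count):

```python
# Rough area-scaling estimate: how many CUs would fit in Vega 20's area budget
# if CU count scaled linearly with die size (a simplification).
vega20_mm2 = 331.0   # Vega 20 (Radeon VII) die size
navi10_mm2 = 251.0   # Navi 10 (RX 5700 XT) die size
navi10_cus = 40

area_ratio = vega20_mm2 / navi10_mm2      # ~1.32, i.e. ~32% larger
scaled_cus = navi10_cus * area_ratio      # ~52.7 -> >= 53 CU
print(f"area ratio: {area_ratio:.2f}x, CUs at Vega 20's area: ~{scaled_cus:.0f}")
```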

but Intel will, with their first entry into the dGPU market in two decades? I'm not holding my breath on that one.
Yeah, I just don't see the kind of advantage that AMD gets from doing it with CPUs, yet the downside is greater. But, we can't completely rule it out. If Intel is really intent on leap-frogging AMD and Nvidia, then just maybe...
 

kinggremlin

Distinguished
Jul 14, 2009
574
41
19,010
How do "we" know that?

Because Nvidia said it isn't financially viable with current technology.

https://www.pcgamesn.com/nvidia/graphics-card-chiplet-designs

“This gives us a bunch of technologies on the shelf that at some point in time,” says Dally, “if it became economically the right thing to do to, assemble GPUs from multiple chiplets, we basically have de-risked the technology. Now it’s a tool in the toolbox for a GPU designer.”

The interviewer then asked where the crossover point is with the industry moving down to 7nm and then onto 5nm… where is the crossover point for GPU chiplets to actually become worthwhile? To which Alben replied, “We haven’t hit it yet.”

We know Ampere is going to be on 7nm, and according to Nvidia, 7nm, which the market hit months ago, is not a financially viable option for chiplets. The writer of the article continued on:

"But when it comes to gaming GPUs I’m not convinced we ever will. With CPUs it’s a lot easier to combine multiple chips together to work for a common processor-y goal on their specific workloads. And for GPUs simply chewing through large datasets or deep learning the hell out of something it’s a doddle too, but when your GeForce graphics card is trying to spit out multiple game frames rendered on multiple chiplets it’s a whole lot tougher."

I think pretty much everyone is in agreement that enterprise compute hardware is where we're going to see GPU chiplets first, both because the workloads are more compatible with the technology and because that market can support much higher prices to recoup R&D and production costs. It would be extraordinary for Nvidia to release Quadro cards with chiplets between now and, say, the end of Q1 2020 with absolutely no leaks up to this point indicating that plan. With a rumored H1 2020 release for Ampere, Nvidia would then have to trickle the technology down to gaming cards as little as two or three months later. There is just no way that chiplets deemed financially implausible for the enterprise market will suddenly become financially viable (and technically possible) for the gaming market a couple of months after a surprise enterprise release.
 

kinggremlin

Distinguished
Jul 14, 2009
574
41
19,010
I guess GPUs have more room to scale up, since they have more duplicated units than CPUs. As long as you're not trying too hard to ship fully-enabled chips, it seems like you should be able to scale single dies quite a bit further.

Anyway, Vega 20 (of Radeon VII) is 331 mm^2, while Navi 10 is 251 mm^2. So, that's about a 32% size difference and suggests that AMD could easily scale up Navi to >= 53 CU and sell it for <= $700.

Remember this one?

(image: AMD roadmap slide showing Navi with "nextgen memory" and "scalability")
The Navi we've seen so far does not resemble the Navi on this slide. What is the nextgen memory they were predicting? Scalability would typically imply more than one of something, but the only multi-core GPU AMD has announced thus far is a pretty traditional dual-core compute card.
 

bit_user

Polypheme
Ambassador
Remember this one?

(image: AMD roadmap slide showing Navi with "nextgen memory" and "scalability")
The Navi we've seen so far does not resemble the Navi on this slide.
Really?

What is the nextgen memory they were predicting?
GDDR6

Scalability would typically imply more than one of something,
Or, that workloads scale better on existing resources. Possibly, that they're finally going to scale the shader count past 4096, in "big" Navi (which, to your point, we haven't yet seen).

the only multi-core GPU AMD has announced thus far is a pretty traditional dual-core compute card.
Oh, I don't even count the Radeon Pro Vega II Duo as multi-core. AFAICT, it's just like the rest of the dual-GPU cards they periodically build, except I guess it has better interconnectivity than before - PCIe 4.0 x32, I think.
 

bit_user

Polypheme
Ambassador
As the slide says, Vega had HBM2. Is GDDR6 an upgrade over HBM2 in any performance metric?
It just said "next generation", which it is, with respect to GDDR5. Within that market segment, it is the next generation after what Polaris used.

So far, it seems HBM2 is reserved for the high-end. Granted, Vega didn't turn out to be as high-end as I think they intended.

Around the time Vega launched, I think somebody estimated that AMD was probably losing money on each Vega 56 they sold. If that was even remotely true, then I don't see how we could expect to see HBM2 in a squarely mid-range card, like the 5700.

I doubt GDDR6 was what AMD had in mind when they created this slide.
Who knows, but it's strictly accurate. If they didn't mean GDDR6, then I don't know what else it would be. I don't think anybody is yet on HBM3.

Moreover, GDDR6 delivers the goods. It seems well-matched to Navi's compute. So, why would they use anything else?
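For a rough sense of "delivers the goods", here's a bandwidth comparison from the published figures (bus width × per-pin data rate), so treat the numbers as approximate:

```python
# Memory bandwidth in GB/s = bus width (bits) * per-pin data rate (Gbps) / 8
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

cards = {
    "RX 5700 XT (GDDR6, 256-bit @ 14 Gbps)": bandwidth_gbs(256, 14.0),   # ~448 GB/s
    "Vega 56 (HBM2, 2048-bit @ 1.6 Gbps)":   bandwidth_gbs(2048, 1.6),   # ~410 GB/s
    "Vega 64 (HBM2, 2048-bit @ 1.89 Gbps)":  bandwidth_gbs(2048, 1.89),  # ~484 GB/s
}
for name, bw in cards.items():
    print(f"{name}: {bw:.0f} GB/s")
```

So the 5700 XT's GDDR6 already lands between Vega 56 and Vega 64, without HBM2's cost.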
 

bit_user

Polypheme
Ambassador
The RX 5700 XT has 40 CU and 256-bit GDDR6. If they scale it up to 60 CU, it'll be in the same ballpark as (or maybe a little bigger than) Vega 20. 60 CU would add 50% more compute, and should therefore require a corresponding increase in memory bandwidth.

So, here's a conjecture: "Big" Navi will be 60-64 CU and feature 384-bit GDDR6. That would give it 3840 - 4096 shaders. That should give it enough oomph to clear the RTX 2080 Super, allowing them to price it above that card and the $700 price point of Radeon VII.
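The arithmetic behind that conjecture, assuming RDNA keeps 64 stream processors per CU and that memory bandwidth should grow roughly in step with compute:

```python
# Scale the RX 5700 XT's configuration up, assuming 64 stream processors per CU
# and memory bandwidth growing roughly in step with compute.
base_cus, base_bus_bits = 40, 256             # RX 5700 XT
for big_cus in (60, 64):
    compute_scale = big_cus / base_cus        # 1.5x at 60 CU, 1.6x at 64 CU
    shaders = big_cus * 64                    # 3840 or 4096 stream processors
    bus_bits = base_bus_bits * compute_scale  # ~384 to ~410 bits; 384-bit is the natural step
    print(f"{big_cus} CU: {shaders} shaders, {compute_scale:.1f}x compute, "
          f"~{bus_bits:.0f}-bit GDDR6 to match")
```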
 
The RX 5700 XT has 40 CU and 256-bit GDDR6. If they scale it up to 60 CU, it'll be in the same ballpark as (or maybe a little bigger than) Vega 20. 60 CU would add 50% more compute, and should therefore require a corresponding increase in memory bandwidth.

So, here's a conjecture: "Big" Navi will be 60-64 CU and feature 384-bit GDDR6. That would give it 3840 - 4096 shaders. That should give it enough oomph to clear the RTX 2080 Super, allowing them to price it above that card and the $700 price point of Radeon VII.
I've heard from a leaker (who has gotten a few of AMD's products right before) that there will be two big Navi cards coming, one with HBM2 and one with GDDR6, likely the 5800 and 5900. Performance numbers are still somewhat of a question, but the bigger one is said to be faster than the RTX 2080 Ti.