News: Intel's CEO Fires Back at 3nm Delay Rumors


bit_user

Polypheme
Ambassador
Not for the supporting soft/hardware ecosystem though.
For the investors and stock market maybe.
Okay, so let's talk dGPUs. Intel's market share is tiny (consumer) to nonexistent (server). Alchemist and Ponte Vecchio were extremely late, and now they've canceled Rialto Bridge and Lancaster Sound. If they don't talk about future consumer and server GPU plans, who is going to invest the resources in supporting Intel's GPUs, especially in server and HPC apps?
 
who is going to invest the resources in supporting Intel's GPUs, especially in server and HPC apps?
As already said, Intel is one of the biggest software houses in the world; they don't need to rely on open source or on console makers to do their work for them.
All they need to do is make a product that performs well enough.
If they want, they can send people to those companies to help, just like they already do for everything else and like Nvidia also does.
Also, Intel has a history of supporting open APIs, so their cards don't need any extra support if the server company can't or won't provide it.
And the oneAPI stack that Intel pushes is basically the new DirectX for this sort of thing: as long as you support oneAPI, any hardware will work.

Intel is the one company out of all of them that gives people access to the full suite of tools needed to code anything for anything.
https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html
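
To make that concrete, here's a minimal sketch (my own illustration, not Intel sample code) of what "support oneAPI and any hardware works" means in practice, written in SYCL/DPC++, the core language of the toolkits linked above:

Code:
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // No vendor-specific selection: the runtime picks whatever
    // device is present (a GPU if available, else the CPU).
    sycl::queue q{sycl::default_selector_v};

    {
        sycl::buffer<float> A(a.data(), sycl::range<1>(N));
        sycl::buffer<float> B(b.data(), sycl::range<1>(N));
        sycl::buffer<float> C(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor rA(A, h, sycl::read_only);
            sycl::accessor rB(B, h, sycl::read_only);
            sycl::accessor wC(C, h, sycl::write_only);
            h.parallel_for(sycl::range<1>(N),
                           [=](sycl::id<1> i) { wC[i] = rA[i] + rB[i]; });
        });
    } // buffers go out of scope and sync results back into 'c'

    std::cout << c[0] << std::endl; // prints 3
    return 0;
}

Whether that lands on an Arc card, a Xeon, or a competitor's GPU is a runtime decision, not a separate codepath.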
 

bit_user

Polypheme
Ambassador
As already said, Intel is one of the biggest software houses in the world; they don't need to rely on open source or on console makers to do their work for them.
There's a lot of GPU-accelerated software Intel doesn't write.

If they want, they can send people to those companies to help, just like they already do for everything else and like Nvidia also does.
Nvidia dominates all GPU markets. Intel is currently a nobody in the datacenter/HPC GPU market, and their gaming dGPUs don't even rank on the Steam survey.


So, if I'm an independent software vendor, why would I even want someone from Intel to monkey around with my code and add more codepaths that I have to test and support?

Also, Intel has a history of supporting open APIs, so their cards don't need any extra support if the server company can't or won't provide it.
True, but for software that currently supports CUDA, there's nothing to gain by adding OpenCL if Intel remains functionally absent from the marketplace. AMD is pushing their CUDA clone, HIP, and the state of their OpenCL support is dubious. On Nvidia cards, CUDA performs best. So, an OpenCL path is of no value to a software vendor unless they believe Intel is going to be a significant player. That's why Gelsinger needs to sell them on that idea.
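
To illustrate the "CUDA clone" point, here's a rough sketch (mine, not vendor sample code): a CUDA kernel ports to HIP almost mechanically, with the launch syntax unchanged and the runtime calls renamed one-for-one. That's exactly why an ISV sees HIP as one more CUDA-shaped codepath to test, not a neutral standard:

Code:
#include <hip/hip_runtime.h>  // CUDA equivalent: #include <cuda_runtime.h>

// Kernel source is identical under CUDA and HIP:
__global__ void axpy(float a, const float* x, float* y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

void run_axpy(float a, const float* x, float* y, int n) {
    float *d_x = nullptr, *d_y = nullptr;
    size_t bytes = n * sizeof(float);

    hipMalloc((void**)&d_x, bytes);                   // CUDA: cudaMalloc
    hipMalloc((void**)&d_y, bytes);
    hipMemcpy(d_x, x, bytes, hipMemcpyHostToDevice);  // CUDA: cudaMemcpy
    hipMemcpy(d_y, y, bytes, hipMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    axpy<<<blocks, threads>>>(a, d_x, d_y, n);        // same launch syntax

    hipMemcpy(y, d_y, bytes, hipMemcpyDeviceToHost);
    hipFree(d_x);                                     // CUDA: cudaFree
    hipFree(d_y);
}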

Intel is the one company out of all of them that gives people access to the full suite of tools needed to code anything for anything.
https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html
AMD has https://gpuopen.com/

Nvidia has https://developer.nvidia.com/open-source
 

SiliconFly

Prominent
Exactly.


First, as the public face of Intel, it's his job to try and spin the negative things so they don't seem so bad, while getting investors, customers, and partners to buy into Intel's future plans. Apart from a handful of examples (e.g. NUCs), Intel doesn't sell end-user products. Even though you can buy a retail boxed CPU or GPU, it's worthless without a software and hardware ecosystem supporting it. If their partners lose faith in Intel, then supporting those products becomes a lower priority, and that will have downstream impacts.

Second, when did he actually disparage TSMC? Specifically, since they've become a TSMC customer?


Eh, not sure about that. Intel doesn't make much from datacenter networking, especially if you don't count their recent Barefoot Networks acquisition. They made a play for the space with OmniPath, but the market rejected it. Hence, the acquisition.

And other than that, there's not much overlap between Intel and Nvidia in the datacenter. Not since Intel killed Xeon Phi. Their "Datacenter GPU Xe Max", or whatever gobbledygook name they now have for Ponte Vecchio, is nowhere to be found outside of prior HPC deployments. It appears to be DOA, with rumored end-product yields deep in non-viable territory, hence the cancellation of Rialto Bridge.

Going forward, Nvidia's Grace CPUs seem squarely targeted at edging Intel out of GPU nodes in datacenter and HPC markets. However, that's still a fraction of the datacenter CPU market, and Grace is probably still in its production ramp phase.


What's interesting is that Intel seems to have fallen into the same trap as AMD, by trying to make a GPU that's (nearly) all things to (nearly) all people. They tried to tackle:
  1. Traditional raster performance
  2. Ray tracing
  3. GPU compute (fp32)
  4. Deep learning
  5. Video compression
The only thing they left out was HPC, which is reserved for their "Xe Max" line. And they actually did comparatively well on most of those points. Sadly, it came at the expense of #1, and that's the main thing it had to do well!

In fact, AMD was at least smart enough not to unduly compromise RDNA by focusing too hard on ray tracing or deep learning. Holding back on ray tracing is now starting to become a liability, but I think it was a good call up to at least RDNA 2.

Had Intel not bitten off more than they could chew, perhaps Alchemist could've been a fair bit more successful. Admittedly, the timing of its launch could hardly have been worse, although that's partially on them (it was supposed to launch at least a year earlier).

Anyway, that's why I'm not as dismissive of their dGPU efforts as some.


True. And this is why I keep saying we have to wait and see whether Intel has really turned a corner, and not just seize on their impressive efforts to capitalize on their Intel 7 node as proof that they have.

I'd say RDNA 3 was a bold step into the future: the first chiplet-based GPU. So naturally, we can expect some issues with the new architecture. Meteor Lake may also suffer for the same reason.

But once they master their new designs, we can expect faster, more efficient, better, and hopefully cheaper products in the future. The first generation is always troublesome. They deserve more time!
 

bit_user

Polypheme
Ambassador
I'd say RDNA 3 was a bold step into the future: the first chiplet-based GPU. So naturally, we can expect some issues with the new architecture. Meteor Lake may also suffer for the same reason.
IMO, Meteor Lake is relatively conservative. If you look at where they put the tile boundaries, they're at points which were historically separate physical packages.

[Image: Meteor Lake tile layout diagram]

Source: https://www.tomshardware.com/news/i...ech-for-meteor-lake-arrow-lake-and-lunar-lake

That's nothing compared to Ponte Vecchio, which comprises 63 chiplets!

The L2 cache resides in separate dies, with an aggregate bandwidth of 13 TB/s. That's about 2.5x the aggregate bandwidth of the MCD tiles in AMD's 7900 XTX!
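(Back-of-envelope, using the commonly cited ~5.3 TB/s peak aggregate for Navi 31's GCD-to-MCD links: 13 / 5.3 ≈ 2.45, i.e. roughly 2.5x.)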

But once they master their new designs, we can expect faster, more efficient, better, and hopefully cheaper products in the future. The first generation is always troublesome.
Hopefully, they'll benefit from their experience with Ponte Vecchio. It seems one lesson they took from such an ambitious design was not to do it again. Presumably, that's why they scrapped its successor, Rialto Bridge.
 

SiliconFly

Prominent
IMO, Meteor Lake is relatively conservative. If you look at where they put the tile boundaries, they're at points which were historically separate physical packages.
[Image: Meteor Lake tile layout diagram]

That's nothing compared to Ponte Vecchio, which comprises 63 chiplets!
[Image: Ponte Vecchio chiplet diagram]

The L2 cache resides in separate dies, with an aggregate bandwidth of 13 TB/s. That's about 2.5x the aggregate bandwidth of the MCD tiles in AMD's 7900 XTX!


Hopefully, they'll benefit from their experience with Ponte Vecchio. It seems one lesson they took from such an ambitious design was not to do it again. Presumably, that's why they scrapped its successor, Rialto Bridge.

Actually, it's hard to say whether Meteor Lake will gain anything from the Ponte Vecchio experience. AMD had lots of chiplet experience with Zen, but they couldn't translate it directly to RDNA 3. The same might hold true for Intel (in reverse, in this case). The only advantage for Intel is that they've been cooking MTL for more than two years now, possibly working out the kinks and going through incremental improvements before mass production. It looks like they want to get MTL right on the first go.
 