News Intel CEO Pat Gelsinger: I hope to build chips for Lisa Su and AMD

Yes, I cannot see AMD saying, "Intel, here are all of our trade secrets. Thanks!"
Actually, I think they will. Yes, they'd be showing their hand a little, but they'd also get to see Intel's full set of cards. I do think the OP is on to something. The only reason Intel would open its foundries is to keep costs reasonable and stay on the curve. AMD doesn't have to do anything other than design new chips; Intel has to design new chips, build new fabs, perfect those fabs, and still stay competitive. Now Intel can spread that cost around, and AMD could potentially benefit from it.
 
Based on what, exactly? Compiling for ARM on Linux is very easy, and you're talking about many manufacturers, not just two. Performance varies greatly, but the ability to develop and run software on them is not an issue.
Yes, exactly because there are many manufacturers, not just two, you would have to compile or code for every single ARM CPU out there, every ARM GPU, and every combo thereof.
Unless there will be a tech giant making one version of ARM CPUs, with different SKUs, there will be too many differences to compile for one target.

That's also why a lot of Linux software comes as source code and you have to compile it on your platform: your platform's compiler makes all the adjustments.
That's fine for Linux, since it's an OS for advanced users, but the general PC market is going to stay waaaaay back on that one.
 
Yes, exactly because there are many manufacturers, not just two, you would have to compile or code for every single ARM CPU out there, every ARM GPU, and every combo thereof.

That's not how it works, and it's the whole reason drivers exist.

On the CPU side there is a standard ISA, so unless the CPU uses some extended feature not supported by ARM, there is no need for recompilation. For iGPUs, using the proper system APIs with a proper video driver eliminates any need to recompile your program. This is why discrete GPUs work without recompiling... Did anyone need to recompile to support Intel Arc? No, they simply installed the driver.

Whether it's Windows or Linux makes no difference, so long as the OS supports the ISA and proper drivers are created for the iGPU.
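To make that concrete, here's a minimal sketch (assuming a C toolchain with the Vulkan headers and loader installed, not anyone's specific engine code): the application only talks to the standard API, and whichever vendor's driver is installed supplies the implementation, so the same binary sees an Arc, Radeon, or GeForce GPU without being recompiled.

```c
// Minimal sketch: enumerate whatever GPUs the installed drivers expose
// through the standard Vulkan API. The application never names a vendor.
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    VkApplicationInfo app = {0};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo ci = {0};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "No Vulkan driver found\n");
        return 1;
    }

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);

    VkPhysicalDevice devices[16];
    if (count > 16) count = 16;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t i = 0; i < count; i++) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        printf("GPU %u: %s\n", i, props.deviceName);  // the driver fills this in
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```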
 
Credit to @PaulAlcorn for (maybe?) quoting Bill Murray from Ghostbusters (1984).
Dr. Peter Venkman: Human sacrifice, dogs and cats living together... MASS HYSTERIA!

Given that, I can overlook a slightly imperfect analogy....
: )


Well, Intel did announce intentions to start making ARM-powered phone SoCs, around that time. They had announced plans to open up their fabs to external customers even further back, though I'm not sure if much came of that iteration.


Has anyone looked at whether Intel 7 provides more density than their 10 nm SF node, whether Intel 3 provides better density over Intel 4, or whether Intel 18A provides better density over Intel 20A?

I have a suspicion that Intel 7 is really just Intel 10+++, Intel 3 would've previously been called Intel 4+, and that the Intel of old would've just said Intel 18A was Intel 20A+.

Regardless of what they call them, it's good to see Intel's process nodes improving. I just think what they accomplished is probably a little less miraculous than it sounds, especially considering how underwhelming Meteor Lake has been, performance-wise.
Because Intel 7 really is Intel 10++.
Funny enough, they had to relax the design rules for the current CPUs because the original specs wouldn't hit decent frequencies (Ice Lake, lmao).
In a nutshell, the original Intel 10 was too ambitious, and Intel 7 is now less dense than the original Intel 10 was.
 
Eventually nodes (and their costs) will stagnate,
Huh? What do you mean by that?

but architectural improvements will increase (agree with Huang and Keller)
Again, I'm not clear what you mean by this.

BTW, the only claim I've heard Keller make is that he believes it should be possible to reach much greater densities. That just means you can pack more logic per mm^2, but doesn't necessarily say anything about efficiency or cost. Density doesn't translate very well into single-threaded performance, if you consider that Golden Cove has at least 10x the transistors of a Sandy Bridge core, but single-threaded performance only improved by about 2x.
 
That's not how it works, and it's the whole reason drivers exist.
And how exactly does that help with fragmentation?!
Driver-specific optimizations are extremely common on the PC for many, if not all, games.
My whole point was that you would have to optimize for many hardware bases. Whether you code straight for the hardware or for the drivers doesn't make a huge difference, at least in principle; in practice it might be very different, but it doesn't change the fact that it would involve a lot of work.
 
It wasn't me that started with fragmentation, I just commented on it.
Ah, yes. I see it was @usertests. Sorry for the confusion.

Regardless, I think there's a better chance of having a useful discussion about something, if we can first agree what it is we're talking about. The other reason I asked is that I just don't see what supposed fragmentation exists.

I agree with @JamesJones44 that the ISA-level issues with ARM are certainly no worse than with x86. They do add instructions, but in a very linear fashion, keeping the number of optional features to a minimum. Just like targeting baseline x86-64, if you target ARMv8-A, which is their baseline implementation of AArch64, it'll run on every 64-bit ARM CPU. I actually have a nit-pick with one comment James made on this point:

so unless the CPU uses some extended feature not supported by ARM, there is no need for recompilation.

ARM holds the line on this, no exceptions. Licensees are not allowed to add their own instructions or extensions to ARM CPUs. Period. This is precisely to avoid the "ecosystem fragmentation" problem.

As for tuning code, in terms of the instruction mix & scheduling, I think it's interesting that Apple hasn't updated GCC's cost tables for their CPUs, in ages. Their CPU cores are good enough at rescheduling instructions as needed. I think it's generally the case that you don't gain much from tuning for a specific CPU, as long as we're not talking about big things like SVE/AVX-512. In my experience, it's usually not more than a couple %.
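As an illustration (a minimal sketch, assuming 64-bit ARM Linux with glibc; the HWCAP bits are what the kernel reports on AArch64), one binary built for baseline ARMv8-A can probe optional extensions at runtime and pick a code path, rather than shipping a separate build per vendor:

```c
// Minimal sketch: runtime feature detection on AArch64 Linux.
// Build for baseline ARMv8-A; only use optional extensions if reported.
#include <stdio.h>
#include <sys/auxv.h>    // getauxval()
#include <asm/hwcap.h>   // HWCAP_* bits (AArch64-specific header)

int main(void) {
    unsigned long caps = getauxval(AT_HWCAP);

    // Dispatch to an optimized path only when the CPU advertises it;
    // otherwise fall back to the baseline code, no recompile needed.
    printf("AES instructions: %s\n", (caps & HWCAP_AES) ? "yes" : "no");
    printf("SVE:              %s\n", (caps & HWCAP_SVE) ? "yes" : "no");
    return 0;
}
```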
 
You never said what you mean by fragmentation. How about you first illustrate what you're talking about, at least as far as it supposedly applies to ARM CPUs.
The share of ARM CPUs in cloud technologies is growing: Amazon, MS, NV, Ampere, Alibaba, Google, etc. are all processor developers now, and the list is only expanding. In this case, there is no fragmentation of the code base, as has already been written about.
 
In the very first sentence, it says:

"Arm is opening up its instruction set to customers’ customized instructions for Cortex M cores."

Cortex M is their microcontroller line of cores. These are for tiny, embedded devices. Such devices tend to have custom software/firmware written specifically for them that's not accessible for the end user to customize. Hence, broad compatibility between devices is a non-issue.

More importantly, Cortex M isn't remotely related to what we're talking about. It has its own ISA that's separate from the AArch64 ISA used by all modern, ARM-based phones, tablets, Chromebooks, and server CPUs.

The ARM Cortex-M is a group of 32-bit RISC ARM processor cores
...
The ARM Cortex-M family are ARM microprocessor cores that are designed for use in microcontrollers, ASICs, ASSPs, FPGAs, and SoCs. Cortex-M cores are commonly used as dedicated microcontroller chips, but also are "hidden" inside of SoC chips as power management controllers, I/O controllers, system controllers, touch screen controllers, smart battery controllers, and sensor controllers.

The main difference from Cortex-A cores is that Cortex-M cores have no memory management unit (MMU) for virtual memory, considered essential for "full-fledged" operating systems. Cortex-M programs instead run bare metal or on one of the many real-time operating systems which support a Cortex-M.

Source: https://en.wikipedia.org/wiki/ARM_Cortex-M
As stated above, they cannot run a desktop or server OS.

You just cherry-picked the answer you wanted to find and didn't think or bother to sanity-check it. Thanks for wasting our time.
 
Are they ARM CPUs?
Does ARM allow custom instructions on them?
The answer to both of those is yes.

Also, just the existence of these microcontroller CPUs as a second type of CPU is fragmentation in itself.
Just look up what people do for the Pi Pico; it has a Cortex-M0+ CPU in it, and there is a ton of development for it that doesn't work on anything else.
 
Are they ARM CPUs?
Does ARM allow custom instructions on them?
The answer to both of those is yes.
You took that out of context, which was:

Never buy the ARM/RISC-V hype, they are fragmented disasters for PCs.

The only reasons to take something out of context are:
  1. You're more interested in arguing than getting to the truth of a matter.
  2. You're trying to spread misrepresentations, in order to work an agenda.

In either case, it amounts to posting in bad faith.

Also, just the existence of these microcontroller CPUs as a second type of CPU is fragmentation in itself.
This is ignorant. Microcontrollers have existed for probably 50 years, at least. That fragmentation exists due to the fundamental nature of microcontrollers, regardless of whether ARM serves that market, because there are things they simply cannot do.

Also, the fact that you're still working the "fragmentation" claim, even though it's been shown to be irrelevant to the topic at hand is yet more evidence that you're posting in bad faith.

Just look up what people do for the Pi Pico; it has a Cortex-M0+ CPU in it, and there is a ton of development for it that doesn't work on anything else.
The Raspberry Pi Foundation makes the Pico for things you can't do with a regular Pi. Much of that comes down to the cores themselves. They opted to use microcontrollers because they're low-power, cheap, self-contained, and plenty fast for many automation or sensing tasks. Furthermore, deploying a full-blown OS, when it's really not needed, can open you up to hackers. In areas like IoT and automation, there's something to be said for using just enough compute to do the job, because the more complex your solution is, the more points of failure exist.
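For anyone curious what that looks like, here's a minimal sketch assuming the official Raspberry Pi Pico SDK (gpio_*, sleep_ms, and PICO_DEFAULT_LED_PIN come from that SDK, not a desktop libc). It runs bare metal on the Cortex-M0+ with no OS underneath, which is exactly why this kind of code doesn't carry over to a Cortex-A system running a full OS:

```c
// Minimal sketch: bare-metal blink on the Pi Pico (RP2040, Cortex-M0+).
#include "pico/stdlib.h"

int main(void) {
    const uint LED_PIN = PICO_DEFAULT_LED_PIN;  // on-board LED on a stock Pico
    gpio_init(LED_PIN);
    gpio_set_dir(LED_PIN, GPIO_OUT);

    while (true) {          // no scheduler; this loop owns the core
        gpio_put(LED_PIN, 1);
        sleep_ms(250);
        gpio_put(LED_PIN, 0);
        sleep_ms(250);
    }
}
```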
 
And how exactly does that help with fragmentation?!
Driver-specific optimizations are extremely common on the PC for many, if not all, games.
My whole point was that you would have to optimize for many hardware bases. Whether you code straight for the hardware or for the drivers doesn't make a huge difference, at least in principle; in practice it might be very different, but it doesn't change the fact that it would involve a lot of work.
It's up to the GPU developer to optimize those drivers. From a software company's point of view it's not an issue. The fragmentation between Nvidia, Intel, and AMD is due to non-standard features. We're also talking about iGPUs here; game developers are never going to optimize for an iGPU, so this whole fragmentation argument is moot.
 
I'm wondering how much it would cost to contract out to Intel to make, say, 100,000 chips? Not as complex as a CPU, but maybe a chipset using close-to-modern technology.
They don't charge per chip, they charge for masks and wafers. Probably >$10M for a set of masks, then $20,000 per wafer on a newish node. A wafer can hold anywhere from 100 to 1,000 chips depending on die size.
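Putting those figures together (a back-of-the-envelope sketch with illustrative numbers, not Intel's actual pricing):

```c
// Rough cost model using the figures quoted above: ~$10M mask set,
// ~$20k per wafer, and a guess of 500 good dies per wafer.
#include <stdio.h>

int main(void) {
    double mask_set       = 10e6;    // one-time cost for the mask set
    double wafer_cost     = 20e3;    // per processed wafer
    int    dies_per_wafer = 500;     // mid-range guess for a small chipset die
    long   chips_needed   = 100000;

    long wafers  = (chips_needed + dies_per_wafer - 1) / dies_per_wafer;
    double total = mask_set + wafers * wafer_cost;

    printf("Wafers needed: %ld\n", wafers);                  // 200
    printf("Total cost:    $%.1fM\n", total / 1e6);          // $14.0M
    printf("Per chip:      $%.2f\n", total / chips_needed);  // $140.00
    return 0;
}
```

So at 100,000 chips the mask set dominates; on those assumptions you'd land somewhere around $140 per chip.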
 