News Apple Silicon Mac Pro Reportedly Lacks Upgradeable GPU, RAM

No surprise there: Apple is squeezing as much as it can from its user base. The Apple Silicon Mac minis no longer allow any user upgrades. The last Mini to support upgrades was the 2018 Mac mini, which was also the final Intel Mini.
 
Apple uses a custom unified memory architecture, so it's pretty normal you can't just upgrade it.

PCs will have to move to a unified memory architecture soon too, because splitting data between two pools of volatile memory is a major bottleneck on PC. This bottleneck does not exist on consoles or on Apple platforms.
 
Apple uses a custom unified memory architecture, so it's pretty normal you can't just upgrade it.
That only provides a valid excuse for the DRAM's lack of upgradeability. The GPU and storage being locked down is ridiculous. Storage is easy enough to sidestep with an external solution (a TB4 M.2 SSD array); the GPU is the big handicap.

Are we stuck with Apple GPUs forever? Are they going to partner with other accelerator vendors for the Mac Pro line?
 
That only provides a valid excuse for the DRAM's lack of upgradeability. The GPU and storage being locked down is ridiculous. Storage is easy enough to sidestep with an external solution (a TB4 M.2 SSD array); the GPU is the big handicap.

Are we stuck with Apple GPUs forever? Are they going to partner with other accelerator vendors for the Mac Pro line?
The GPU falls under similar reasoning too. The GPU cores are built into the M1/M2 chips and share the in-package unified memory with the CPU. I'm sure they're smart enough to figure out a way to make that memory work with a standard PCIe GPU as well, if they wanted to.

Their architecture of CPU+GPU+RAM in the same package also means every Mac Pro would include a full set of Apple GPU cores, whose price wouldn't be subtracted if you wanted to purchase a Mac Pro with a third-party GPU.
 
That only provides a valid excuse for the DRAM's lack of upgradeability. The GPU and storage being locked down is ridiculous.

The GPU is not upgradeable for the same reason. If Apple allowed discrete GPUs, their memory architecture would no longer be unified, since a discrete GPU has its own memory.
I suppose at the very least it would require major software changes. CUDA and OpenCL have APIs to transfer data between the CPU and GPU, which are not needed with unified memory.
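For readers who haven't used those APIs, here's a minimal CUDA sketch of the explicit-transfer model a discrete GPU imposes (the kernel and buffer sizes are purely illustrative). With unified memory, both `cudaMemcpy` calls simply disappear.

```cuda
#include <cuda_runtime.h>
#include <stdlib.h>

// Trivial kernel: double every element in place.
__global__ void scale(float *v, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= 2.0f;
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *host = (float *)malloc(bytes);
    float *dev;
    cudaMalloc((void **)&dev, bytes);                     // separate GPU memory pool
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice); // explicit copy in
    scale<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost); // explicit copy out
    cudaFree(dev);
    free(host);
    return 0;
}
```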

PCs will have to move to a unified memory architecture soon too

PCs have had unified memory for a long time. Intel introduced integrated graphics in 2010, and AMD released their APUs soon after. The memory is unified with the integrated GPU. Apparently, a powerful GPU has to be a discrete card; AFAIK, Apple has no answer to GPUs like the RTX 3070 Ti and above.
 
PCs have had unified memory for a long time. Intel introduced integrated graphics in 2010, and AMD released their APUs soon after.

no, that's shared system memory, connected with chips through wires

unified memory is part of the chip design, connected with interposers like Apple uses, they're TSV
 
No surprise there: Apple is squeezing as much as it can from its user base. The Apple Silicon Mac minis no longer allow any user upgrades. The last Mini to support upgrades was the 2018 Mac mini, which was also the final Intel Mini.
Yes, the article's headline should read "To no one's surprise, the Mac Pro's GPU and RAM reportedly can't be upgraded".
 
Apple uses a custom unified memory architecture, so it's pretty normal you can't just upgrade it.

PCs will have to move to a unified memory architecture soon too, because splitting data between two pools of volatile memory is a major bottleneck on PC. This bottleneck does not exist on consoles or on Apple platforms.
Apple did it to lower power consumption/costs and maximize profit. None of the current Apple Silicon chips comes even close to the RTX 4090's memory bandwidth, and they won't be there for some time. Moving PCs to this model would create a GPU bottleneck, and I sure as heck would not appreciate soldered RAM and SSDs on my Windows desktops.
 
No surprise there: Apple is squeezing as much as it can from its user base.
Really? Because the lack of add-ons seems like it's really going to hurt them at the upper end of the range. As the article mentioned, a maxed-out 2019 Mac Pro is priced at $54k. It's hard to see how they can do the same with the new model and still have an almost-affordable base price.

I guess whatever add-ons there are will just be that much more expensive. Maybe the wheel kit will cost $800 instead of only $400, this time.
 
PCs will have to move to a unified memory architecture soon too, because splitting data between two pools of volatile memory is a major bottleneck on PC.
LOL. No, it's not.

The main disadvantages of separate host memory and GPU memory are cost and power. For laptops, another significant factor is the larger physical footprint.

This bottleneck does not exist on consoles or on Apple platforms.
Neither consoles nor even the M1 Ultra perform like PCs with dGPUs. The best they can manage is RTX 3060-class performance.

So far, Apple's decisions around the M-series have been oriented towards making the best laptops.

CUDA and OpenCL have APIs to transfer data between the CPU and GPU, which are not needed with unified memory.
At the basic level, yes. But then they added the concept of shared memory, long ago. OpenCL's SVM (Shared Virtual Memory) feature even allows pointers to be passed back and forth between the CPU and GPU code, without any address translation.
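CUDA's analogue of that is managed ("unified") memory: one pointer valid in both host and device code, with no explicit copies. A hedged sketch (kernel name and sizes are illustrative):

```cuda
#include <cuda_runtime.h>

// The same pointer is dereferenced by both CPU and GPU code; the driver
// migrates pages on demand instead of the program calling cudaMemcpy.
__global__ void doubleAll(float *v, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= 2.0f;
}

int main(void) {
    const int n = 1024;
    float *v;
    cudaMallocManaged(&v, n * sizeof(float));     // visible to CPU and GPU
    for (int i = 0; i < n; i++) v[i] = (float)i;  // CPU writes directly
    doubleAll<<<(n + 255) / 256, 256>>>(v, n);    // GPU uses the same pointer
    cudaDeviceSynchronize();                      // then the CPU can read v[] again
    cudaFree(v);
    return 0;
}
```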

no, that's shared system memory, connected with chips through wires
LOL, wut? No, the iGPUs in Intel and AMD CPUs are fully integrated on-die, have their own stop on the ring bus, and use the chip's memory controllers just like the cores or the iGPUs in Apple's chips do.

unified memory is part of the chip design, connected with interposers like Apple uses, they're TSV
Interposers and TSVs have nothing to do with unified memory. I guess you mean in-package memory? But unified just means the graphics uses the same physical DRAM as the CPU cores, so your terminology is wrong.

And, as far as in-package memory goes, I'm pretty sure the dies are mounted on the interposer using balls, not TSVs.
 
Apple did it to lower power consumption/costs and maximize profit. None of the current Apple Silicon chips comes even close to the RTX 4090's memory bandwidth, and they won't be there for some time.
Eh, that's going a bit too far. The Pro and Max have a 256-bit and 512-bit data bus, respectively. The former is something you won't find in virtually any other laptop, and the latter you'd only find in servers and the beefiest workstation CPUs (ThreadRipper Pro or Xeon W 3000-series). It's not just about power & cost, but also space. As a result, they do indeed make kickass laptop SoCs, especially for their size, weight, and power.

The Ultra is made from 2x Max CPUs, and the resulting 1024-bit bus' 800 GB/s actually gets into the same ballpark as the RTX 4090's 1000 GB/sec figure. However, the Ultra is a multi-chip module, and therefore faces scaling problems on its GPU workloads. Plus, it has to share that bandwidth with the CPU cores, leaving 600 - 700 GB/s for graphics workloads. Even then, it tends to underperform relative to dGPUs with that much bandwidth.

Anyway, the M1 Ultra is nowhere close to RTX 3090 performance, much less RTX 4090. So, no worries there.

If Apple made its GPU discrete, it would be twice as powerful or more.
ya cuz their teh based!!111
 
CUDA and OpenCL have APIs to transfer data between the CPU and GPU, which are not needed with unified memory.

At the basic level, yes. But then they added the concept of shared memory, long ago. OpenCL's SVM (Shared Virtual Memory) feature even allows pointers to be passed back and forth between the CPU and GPU code, without any address translation.

But this can be horribly inefficient in some cases. Having control over where your data resides ensures there's no extra memory copy between CPU and GPU, which is expensive.
I know CUDA better than OpenCL, and in CUDA you'd better make sure that CPU data you want to share with the GPU is in so-called "pinned" memory. That results in 2-3x faster memory transfers.
Pinned memory is memory reserved by the OS and guaranteed to stay in physical RAM, as opposed to pageable virtual memory. So, for Apple this would probably mean a major OS change. I don't know if OpenCL does any better in this regard.
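To make that concrete, here's a hedged CUDA sketch of the pageable-vs-pinned difference (the buffer size is arbitrary, and the 2-3x figure is workload- and platform-dependent):

```cuda
#include <cuda_runtime.h>
#include <stdlib.h>

int main(void) {
    size_t bytes = 64 << 20;  // 64 MiB
    float *dev;
    cudaMalloc((void **)&dev, bytes);

    // Pageable: the OS may swap these pages out at any time, so the driver
    // must stage the data through an internal pinned buffer before the DMA
    // engine can touch it -- the slow path.
    float *pageable = (float *)malloc(bytes);
    cudaMemcpy(dev, pageable, bytes, cudaMemcpyHostToDevice);

    // Pinned: page-locked in physical RAM, so the DMA engine can read it
    // directly -- which is where the roughly 2-3x transfer speedup comes from.
    float *pinned;
    cudaHostAlloc((void **)&pinned, bytes, cudaHostAllocDefault);
    cudaMemcpy(dev, pinned, bytes, cudaMemcpyHostToDevice);

    cudaFreeHost(pinned);
    free(pageable);
    cudaFree(dev);
    return 0;
}
```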
 
But this can be horribly inefficient in some cases. Having control over where your data resides ensures there's no extra memory copy between CPU and GPU, which is expensive.
Yes, in some cases.

I know CUDA better than OpenCL, and in CUDA you'd better make sure that CPU data you want to share with the GPU is in so-called "pinned" memory. That results in 2-3x faster memory transfers.
Pinned memory is memory reserved by the OS and guaranteed to stay in physical RAM, as opposed to pageable virtual memory.
Not only would pinning ensure the memory doesn't get swapped out, but it's probably also physically contiguous. To send non-pinned memory to the GPU, perhaps the driver is doing "PIO" copies, or at least walking the pages to do per-page address translation and temporarily pin & unpin them.