News Apple Is Struggling to Build Mac Pro Based on Its Own Silicon: Report


bit_user

once everyone else got access to that same node the performance difference vanished.
The perf/W advantage remains strongly in Apple's favor.

BTW, Alder Lake beat the M1 Max/Ultra on single-thread performance with what most would agree is an inferior node. Using a ton more power, though.

Phone chips work great for phones, tablets and thin notebooks,
M-series are not in any phones or tablets. That's the A-series SoCs. They do share the same CPU cores, however.

not so great when performance needs to scale deep instead of wide.
Please clarify what you mean by deep vs. wide.

Also integrated graphics can never compete with dedicated at scale due to thermodynamics.
The benchmarks can speak for themselves. Having 512-bit LPDDR5 truly does enable the M1 Ultra to perform like a dGPU.
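For a rough sense of why that memory bus matters, here's a back-of-the-envelope sketch in C. The bus widths and transfer rates are the commonly reported spec-sheet figures (512-bit LPDDR5-6400 on the M1 Max, doubled on the Ultra), so treat the outputs as ballpark numbers, not measurements:

```c
// Back-of-the-envelope memory bandwidth: GB/s = (bus width in bits / 8) * MT/s / 1000
#include <stdio.h>

static double bandwidth_gbps(double bus_width_bits, double mega_transfers) {
    return (bus_width_bits / 8.0) * mega_transfers / 1000.0;
}

int main(void) {
    // Commonly cited configurations (assumed, not measured):
    printf("512-bit LPDDR5-6400 (M1 Max class):   ~%.0f GB/s\n", bandwidth_gbps(512, 6400));   // ~410
    printf("1024-bit LPDDR5-6400 (M1 Ultra class): ~%.0f GB/s\n", bandwidth_gbps(1024, 6400)); // ~819
    printf("256-bit GDDR6 @ 14 GT/s dGPU:          ~%.0f GB/s\n", bandwidth_gbps(256, 14000)); // ~448
    return 0;
}
```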

GFX benchmarks are a little hard to find, but there are some in this review:



That 3DMark Wildlife Extreme score of 35019 puts it above most RTX 3070 Ti scores:



Regarding "thermodynamics", I'd encourage you to look at the graphics performance of laptops with dGPUs. They aren't hurt as much as you might expect by having to run in a smaller power/thermal envelope. It turns out that GPU performance scale down pretty well, as you dial back the clocks. Desktop dGPUs are clocked well above their peak efficiency point.

FWIW, here's the same benchmark running on RTX 3080 Ti (notebook) GPUs. They're not too far below the ones above:

 
Please clarify what you mean by deep vs. wide.

Deep vs. wide has to do with the kinds of compute you are doing, and the media loves to play around with these numbers to create sensations. Deep is about how fast a system can process a single serial task, what we call single-threaded performance, while wide is about how well it can process a multitude of independent tasks. We can get really deep into the weeds about ALU / processor register abstraction, but the quick and dirty is that wide is way easier to do than deep. Stringing together masses of wide but shallow processing units can yield massive performance gains; modern GPUs are just huge clusters of shallow cores. A 3080 has about 9K "cores"; each is weak, but together they can stomp out huge compute numbers. To reach high levels of deep performance you really only have two levers. The first is raw cycle count: just cranking up clock speed, with the associated ridiculous power draw. The second is to implement an expensive abstraction layer that analyzes code beforehand, then combines prediction and reordering to execute multiple instructions at once. Intel has spent insane amounts of money developing that abstraction layer; it's the real power behind their processors and is highly tuned to that specific microarchitecture design. AMD has been trying to catch up for decades and only recently has started to get close.
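For anyone who wants the distinction in concrete terms, here's a minimal C sketch of my own (nothing Apple- or Intel-specific): the first loop is a serial dependency chain, so only single-thread speed helps it; the second has independent iterations, so it can be spread across as many shallow cores or GPU lanes as you have.

```c
// "Deep" vs. "wide" in miniature.
#include <stdio.h>
#include <stddef.h>

// Deep: each iteration depends on the previous result, so extra cores don't
// help -- only how fast one core can chew through dependent instructions.
static double running_blend(const double *x, size_t n) {
    double acc = 0.0;
    for (size_t i = 0; i < n; i++)
        acc = acc * 0.5 + x[i];   // serial dependency chain through acc
    return acc;
}

// Wide: every output element is independent, so the work splits cleanly
// across many weak cores or thousands of GPU lanes.
static void scale_all(const double *x, double *y, size_t n, double k) {
    for (size_t i = 0; i < n; i++)
        y[i] = x[i] * k;          // no cross-iteration dependency
}

int main(void) {
    double x[4] = { 1, 2, 3, 4 }, y[4];
    printf("deep (serial) result: %g\n", running_blend(x, 4));
    scale_all(x, y, 4, 2.0);
    printf("wide (parallel-friendly) results: %g %g %g %g\n", y[0], y[1], y[2], y[3]);
    return 0;
}
```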

Now the M1 is an ARM CPU, the ARM microarchitecture is designed for making shallow processors that are extremely power efficient. Because they don't need to power this complex abstraction system that maximizes deep performance, they can use that power (thermal) budget to add more width, increase clock speed, or just run at lower power. The downside of such a design is the limit on maximum deep performance.

Like I said we can get into a whole multi-hour lecture / conversation on processor microarchitecture because it really is a fascinating field of study. Regardless there is no free lunch, everything has a trade off, optimizing for one aspect ends up lowering capability in another aspect and ultimately we design a tool for a job. Getting offended that a screwdriver doesn't hammer nails seems rather silly.
 

bit_user

Now the M1 is an ARM CPU, the ARM microarchitecture is designed for making shallow processors that are extremely power efficient.
Important nit-pick: Apple implements ARMv8-A spec, which includes AArch64 ISA. The microarchitecture is Apple's, which they designed from scratch. The Instruction Set Architecture (ISA) is ARM's. The ISA, itself, is designed for efficiency but has no bearing on how "deep" your implementation is.

The CPU cores in the M1 have an 8-wide decode and a 640-slot reorder buffer, whereas Intel's Golden Cove has only a 6-wide decode and a 512-slot reorder buffer. By nearly any metric, Apple's CPU cores are the "deepest" in the industry. That's why they run so fast, even at lower clock speeds.

Regardless there is no free lunch, everything has a trade off, optimizing for one aspect ends up lowering capability in another aspect
True. Apple cores are designed to optimize power-efficiency, which means they prioritize IPC at the expense of clock speed.
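To put rough numbers on that trade-off: single-thread throughput is approximately IPC × clock, so a wide, lower-clocked core can land in the same place as a narrower, higher-clocked one. The IPC figures below are placeholders chosen only to show the arithmetic, not measured values for any real core:

```c
// Single-thread throughput ~= IPC * clock (billions of instructions per second).
#include <stdio.h>

int main(void) {
    // Ballpark clocks; the IPC numbers are illustrative placeholders.
    double wide_ipc = 5.0,   wide_ghz = 3.2;    // wide core, modest clock
    double narrow_ipc = 3.2, narrow_ghz = 5.0;  // narrower core, high clock
    printf("wide core:   ~%.1f G instructions/s\n", wide_ipc * wide_ghz);     // 16.0
    printf("narrow core: ~%.1f G instructions/s\n", narrow_ipc * narrow_ghz); // 16.0
    return 0;
}
```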

Getting offended that a screwdriver doesn't hammer nails seems rather silly.
Nobody is offended here, I think.
 
Important nit-pick: Apple implements ARMv8-A spec, which includes AArch64 ISA. The microarchitecture is Apple's, which they designed from scratch. The Instruction Set Architecture (ISA) is ARM's. The ISA, itself, is designed for efficiency but has no bearing on how "deep" your implementation is.

The CPU cores in the M1 have an 8-wide decode and a 640-slot reorder buffer, whereas Intel's Golden Cove has only a 6-wide decode and a 512-slot reorder buffer. By nearly any metric, Apple's CPU cores are the "deepest" in the industry. That's why they run so fast, even at lower clock speeds.


True. Apple cores are designed to optimize power-efficiency, which means they prioritize IPC at the expense of clock speed.


Nobody is offended here, I think.

That's some Apple fanfic that gets passed around. Apple did not design a uArch from scratch; it's just a custom big/little design they bought under NDA. It would be like the plumber designing a bridge from scratch. Microarchitecture is an extremely complicated and arcane art; do you think a fashion company that specializes in UI/UX is going to suddenly develop decades of EE and processor design experience? They paid someone to do it, then did what they do best: created an amazingly effective marketing campaign to tell everyone how awesome their invention was.

To demonstrate, Sony's PlayStation 5 uses a custom AMD Zen 2 APU, and by custom I mean it has core-level changes to accommodate being inside a console. We don't go around saying how Sony "designed" this amazingly new efficient processor. Microarchitecture and ISAs are deeply tied together, the only way to separate them is via an abstraction layer and then only by so much. Apple did not design some new magical CPU, it's just a beefed up ARM CPU. What Apple did do was pay a ton of money to use the latest processor manufacturing process from TSMC, then fashion an amazing marketing campaign comparing a 5nm mobile CPU to Intel's 12nm ultramobile CPU and declaring victory. ARM uArch is designed to be power efficient, which limits both clock speed and instructions per clock, which makes sense as its target is portable consumer electronics where power budgets of 5~35 W are common. My point is proven quite well by Apple being unable to scale that uArch upwards from the lightweight desktop segment to the gaming / workstation segment.

On the topic of thermodynamics, yes, they place an extremely harsh limit on what you can do, which is why Apple wouldn't allow anyone to publish benchmarks of their product against desktop components prior to launch. A desktop CPU has a power budget of 65~125 W, while a low-end dGPU (3050) would have a power budget of ~130 W, with midrange cards at 200~300 W. There is no way a 10~65 W SoC is going to compete with the 200~350 W platform that is a modern desktop, and actual high-end gaming systems / workstations can hit the 800 W+ range.
 

bit_user

That's some Apple fanfic that gets passed around. Apple did not design a uArch from scratch; it's just a custom big/little design they bought under NDA.
Apple had a 2022 revenue of $394 Billion, whereas ARM only had a revenue of $1.15 Billion, in 2017. Yet, somehow ARM can design CPUs but Apple can't? Apple absolutely has the will and the wherewithal to design world-class silicon. Their iPhones, alone, sell at enough volume and margins to justify it. Why is it so hard to believe that Apple actually does design their own cores?

Apple has a long history of designing their own cores. Did you know that Jim Keller even worked there, before he went back to AMD? Did you know that the senior CPU designers from Apple left to found Nuvia, which made its own custom ARM core? Anandtech has diligently documented the progression of Apple's cores. From the link below:

"Apple’s long journey into custom CPU microarchitectures started off with the release of the Apple A6 back in 2012 in the iPhone 5. Even back then with their first-generation “Swift” design, the company had marked some impressive performance figures compared to the mobile competition.​
The real shocker that really made waves through the industry was however Apple’s subsequent release of the Cyclone CPU microarchitecture in 2013’s Apple A7 SoC and iPhone 5S. Apple’s early adoption of the 64-bit Armv8 ISA shocked everybody, as the company was the first in the industry to implement the new instruction set architecture, but they beat even Arm’s own CPU teams by more than a year"​

See for yourself:

Compared to its contemporaries, the A14 was much faster and vastly more efficient. The left side of this graph shows energy usage (lower is better), while the right side shows performance (higher is better). Note that the Kirin 9000 was made on the same manufacturing node as the A14, giving Apple no advantage there.

[Image: spec2006_A14.png - SPEC2006 energy vs. performance comparison]


It would be like the plumber designing a bridge from scratch.
I'm only interested in facts, not analogies. You made a factual claim. Please present your evidence.

Microarchitecture is an extremely complicated and arcane art; do you think a fashion company that specializes in UI/UX is going to suddenly develop decades of EE and processor design experience?
Acquisitions, back in the era of Steve Jobs: P.A. Semi (2008) and Intrinsity (2010).
To demonstrate, Sony's PlayStation 5
Irrelevant.

Microarchitecture and ISAs are deeply tied together, the only way to separate them is via an abstraction layer and then only by so much.
So, you're saying that AMD couldn't design its own x86 cores, because x86 is owned by Intel and so only Intel can design CPUs that execute it? Nonsense.

Apple did not design some new magical CPU, it's just a beefed up ARM CPU. What Apple did do was pay a ton of money to use the latest processor manufacturing process from TSMC, then fashion an amazing marketing campaign comparing a 5nm mobile CPU to Intel's 12nm ultramobile CPU and declaring victory.
All this tells me is that you have such a superiority complex over Mac users that you're unable to see the facts with unbiased eyes.

ARM uArch is designed to be power efficient, which limits both clock speed and instructions per clock, which makes sense as its target is portable consumer electronics where power budgets of 5~35 W are common. My point is proven quite well by Apple being unable to scale that uArch upwards from the lightweight desktop segment to the gaming / workstation segment.
What's so ironic about this statement is that genuine ARM cores have indeed scaled up to 128-core server CPUs. If anything, that should argue against your claims that Apple's cores are ARM designs, but I actually think it's irrelevant, because scaling really has mostly to do with the SoC and not the actual cores.
On the topic of thermodynamics, yes, they place an extremely harsh limit on what you can do, which is why Apple wouldn't allow anyone to publish benchmarks of their product against desktop components prior to launch. A desktop CPU has a power budget of 65~125 W, while a low-end dGPU (3050) would have a power budget of ~130 W, with midrange cards at 200~300 W. There is no way a 10~65 W SoC is going to compete with the 200~350 W platform that is a modern desktop, and actual high-end gaming systems / workstations can hit the 800 W+ range.
I updated my post with comparable benchmark scores, including a laptop RTX 3080 Ti. You should take a look.

BTW, laptop GPUs also blow a hole in your theory. I'd encourage you to spend some time comparing laptop dGPU benchmark scores with desktop dGPUs. I think you'll find that GPU performance scales down quite well to lower power and thermals.
 

SSGBryan

When I see stories like this I can't help but wonder if Apple's loss of talent to Nuvia (now at Qualcomm) has hurt them for this space. They seem to keep trying to scale what they have rather than do a proper new design and that doesn't seem particularly smart.

They are trying to do everything on the cheap. That is what happens when your CEO is a bean counter.
Mac Pro 7,1 (2019) is actually a great design. It has better upgrade potential than the 5,1. More PCIe slots, TB ports, etc. The issue is... the CPU they chose. The Intel Xeon W series is a high-end CPU. I'm not sure why Apple did not choose the Xeon Bronze, Silver and Gold series back then. If you priced out an HP or Lenovo workstation with the same specs, the Mac Pro was better priced... until you factor in the CPU. The HP can be configured with a Silver 12-core Xeon that was thousands cheaper than the MP 12/14 core.

Obsolete CPU - PCIe 3.0 slots - video cards used a proprietary connector, and the base card was a Polaris, to boot.

Nothing in it has aged well, nor will it.
 
An Epyc/Threadripper workstation would be cheaper, support more RAM and offer a ton more flexibility and upgradeability. Unless you rely on Apple software, there's no point in this Mac Pro.
Except that it only supports the backwards-looking AMD64 architecture.
 

bit_user

Obsolete CPU - PCIe 3.0 slots - video cards used a proprietary connector, and the base card was a Polaris, to boot.
The current Mac Pro launched in 2019 with Radeon VII-derived GPUs, as the high-end option. That was AMD's best GPU, at the time. It was reported on this very site:



Of course, Apple has continued to update the GPUs, first with RX 5000-derived GPUs and then with RX 6000-derived GPUs:
I think some people in this thread are a little too eager to seize on anything bad about Apple. I don't care if you do that, but you should at least make sure it's accurate, first.

Personally, I dislike the company and never owned any of its products. But, I don't use that as an excuse to tell myself or others that they're worse than they really are.
 

bit_user

Except that it only supports the backwards-looking AMD64 architecture.
Exactly. A Threadripper Hackintosh was a great alternative to the Mac Pro... until Apple launched the M1-based products. From then on, if you need to be on MacOS or ARM, you really want Apple to get its stuff together and launch a proper ARM-based Mac Pro.

Now, what nobody has yet pointed out, is the possibility that Apple could launch an interim solution based on Ampere Altra. I get why they probably don't want to do that, which is that the Altra's single-thread performance is quite lacking. However, it would get them in the game with up to 128 cores and support for PCIe 4.0 and up to TBs of DRAM.
 
Yield can be managed by having more CPU & GPU cores than they need, and just disabling the ones with defects.

The M1 Max is estimated at 432 mm^2, which is still smaller than the AD102 die in Nvidia's 4090. The latter is not only 609 mm^2 but also uses a smaller process node (TSMC N4).
Yield can be managed, yes, but integrating literally everything into the chip is not an efficient way to scale across every market, which is what they're effectively trying to do. I wouldn't be surprised if Apple removed IO from the chip design (for non-battery-powered devices) and did something similar to Intel's MTL so they can scale it as needed for the different devices they sell.
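The harvesting point from the quote is easy to see with a toy Poisson yield model. Every number below is made up for illustration (the defect density is in the right ballpark for a modern node, but none of this is Apple's actual data):

```c
// Toy yield model: Poisson defects at density D per mm^2, so a block of
// area A comes out clean with probability exp(-D * A). "Harvesting" means
// a die still ships if up to k of its cores are defective and get fused off.
#include <math.h>
#include <stdio.h>

static double binom(int n, int k) {              // n choose k
    double r = 1.0;
    for (int i = 1; i <= k; i++) r = r * (n - k + i) / i;
    return r;
}

int main(void) {
    double D = 0.001;           // defects per mm^2 (illustrative)
    int    n_cores = 40;        // illustrative core count
    double core_mm2 = 6.0;      // illustrative per-core area
    double other_mm2 = 200.0;   // uncore/IO area that must be defect-free

    double p_core  = exp(-D * core_mm2);    // one core is clean
    double p_other = exp(-D * other_mm2);   // rest of the die is clean

    for (int k = 0; k <= 2; k++) {          // tolerate up to k bad cores
        double p_cores_ok = 0.0;
        for (int bad = 0; bad <= k; bad++)
            p_cores_ok += binom(n_cores, bad)
                        * pow(1.0 - p_core, bad)
                        * pow(p_core, n_cores - bad);
        printf("up to %d cores fused off: sellable dies ~%.1f%%\n",
               k, 100.0 * p_other * p_cores_ok);
    }
    return 0;
}
```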

The M1 Max was purpose-built for a dual-die configuration. You cannot possibly have such a fast interconnect without the SoC being designed for it, from the ground up.
There's no evidence that I'm aware of to support your supposition that it was purpose-built for a dual-die configuration, but if you can provide some, by all means please do.

As for the notion that hooking the dies together is somehow bad, I don't know where you got that. There's already an industry consensus that multi-die is the best way to scale.
It's not that hooking dies together is bad, but the problem is Apple integrating literally everything into their chips. This is a very bad way to scale up and you can just look at Intel/AMD designs to see why. Both integrate as much as possible and are shifting more and more that way for anything power limited, but the same is not true for anything else.

Certainly not. You clearly need to educate yourself more, if we're to continue this conversation.

https://www.anandtech.com/show/17024/apple-m1-max-performance-review
I believe the base M1 has more in common with the A14 design-wise than it does with the M1 Pro, but Apple doesn't give much in the way of details. All of the M1 Pro and higher chips are the exact same design as you go through the stack, as that article even says:
Above the M1 Pro we have Apple’s second new M1 chip, the M1 Max. The M1 Max is essentially identical to the M1 Pro in terms of architecture and in many of its functional blocks – but what sets the Max apart is that Apple has equipped it with much larger GPU and media encode/decode complexes.

Apple added/subtracted CPU/GPU cores and memory controllers as they go through the SKUs, but didn't actually change the design of the chip. This wouldn't be a problem if they didn't also integrate everything into their chips, but they do, so it is.
 

bit_user

There's no evidence that I'm aware of to support your supposition that it was purpose-built for a dual-die configuration...
When the M1 Max die shots first appeared, people wondered about the structures that we then learned were there to enable the 2.5 TB/s die-to-die link. Seriously, a year on there's still nothing in the world that fast. And it's a cache-coherent link. That is not simply a "bolt-on" exercise, like linking dies via PCIe or something. I don't understand how you think they managed that if it wasn't designed in, from the start.

Here's a post discussing their cache hashing strategy, which is only something you'd do if you wanted to scale: https://www.realworldtech.com/forum/?threadid=205277&curpostid=205283
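For anyone unfamiliar with the idea: address hashing spreads cache lines across all the cache slices / memory controllers so that no single slice becomes a hotspot, and it keeps working when the slice count doubles across two dies. The snippet below is a toy version I wrote to show the shape of the technique; it is nothing like Apple's actual hash function:

```c
// Toy address-interleaving hash: pick a cache slice (or memory controller)
// per cache line by folding the address bits above the line offset.
#include <stdint.h>
#include <stdio.h>

#define LINE_BITS  7     // assume 128-byte cache lines
#define NUM_SLICES 16    // e.g. 8 slices per die x 2 dies (made-up numbers)

static unsigned slice_for(uint64_t paddr) {
    uint64_t x = paddr >> LINE_BITS;   // drop the offset within the line
    x ^= x >> 3;                       // fold higher bits down so power-of-two
    x ^= x >> 7;                       // strides don't all land on one slice
    return (unsigned)(x % NUM_SLICES);
}

int main(void) {
    // A 4 KiB-strided walk still spreads across slices instead of piling up on one.
    for (uint64_t a = 0; a < 8 * 4096; a += 4096)
        printf("addr 0x%06llx -> slice %u\n", (unsigned long long)a, slice_for(a));
    return 0;
}
```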

I wonder how you can so confidently assert "there's no evidence to support ..." that you're aware of. Where have you looked for such evidence? Are you in some kind of position that you could reasonably expect to know of such evidence, if it existed?

It's not that hooking dies together is bad, but the problem is Apple integrating literally everything into their chips. This is a very bad way to scale up and you can just look at Intel/AMD designs to see why.
The only case I see (and I had my own apprehensions about this) is that a workstation doesn't necessarily need to scale graphics at the same rate that it needs to scale CPU cores. So, for the "Pro" workstations, one might legitimately question the approach of simply linking 4 Max SoCs. Perhaps the reason Apple is sticking with that approach is that the market is too small to justify a completely purpose-built CPU for it.

At scales below that, having a unified-memory architecture makes a lot of sense, as demonstrated by several generations of gaming consoles. And thermals aren't even a problem with that, as long as you're building a custom case around it. And when you scale it down to a laptop, having a single package with CPU, graphics, and DRAM means the whole thing can be smaller and lighter.
 

SSGBryan

The current Mac Pro launched in 2019 with Radeon VII-derived GPUs, as the high-end option. That was AMD's best GPU, at the time. It was reported on this very site:



Of course, Apple has continued to update the GPUs, first with RX 5000-derived GPUs and then with RX 6000-derived GPUs:
I think some people in this thread are a little too eager to seize on anything bad about Apple. I don't care if you do that, but you should at least make sure it's accurate, first.

Personally, I dislike the company and never owned any of its products. But, I don't use that as an excuse to tell myself or others that they're worse than they really are.

Unlike you - I was running Macs (Mac Pro - before that Power Macs) at the time, and had been for almost 20 years at that point.

The base model shipped with an RX 580. Which was 2 generations back at launch. The top end shipped with a Vega - which was 1 generation back.

There is a difference between offering and shipping.

They shipped with an obsolete CPU (14nm+++++++) on a dead Intel socket - PCIe 3.0 I/O, obsolete video cards with proprietary power connectors.

It was a $1,200USD system in a $4,800 case. And then they added non-locking wheels for $400. They were trolling the remaining user base at that point.

It was another trashcan Mac Pro - it was only good for two things - Logic & Final Cut.

The 7,1 was the final FU to anyone not using Logic or Final Cut.
 

bit_user

The base model shipped with an RX 580. Which was 2 generations back at launch. The top end shipped with a Vega - which was 1 generation back.
There are two generations of Vega: the Vega 56 & 64 version, which was made on a 14 nm process node, and the 7 nm Vega 20 GPU (following AMD's convention of a name + 2-digit sequence number). The Radeon Pro Vega II is what I saw advertised in the launch announcements of the 2019 Mac Pro, but if you're telling me that you know for a fact that it actually shipped with the original Vega and only offered the Vega II after that, I'll take your word for it.

Whether or not it actually launched with the Vega II version, or whether that was offered shortly after - my main point was that the way you described it made it sound as if the machine only ever shipped with an RX 580. The evidence appears to run contrary to that.

They shipped with an obsolete CPU (14nm+++++++) on a dead Intel socket - PCIe 3.0 I/O, obsolete video cards with proprietary power connectors.
So, leaving aside the issue of the GPU, let's talk about the CPU socket and PCIe.

It's a fact that they used a dead-end socket. Intel changes their sockets every 2 generations, and Apple launched the Pro on the second generation of CPUs for that socket. The obvious consequence being that no new CPUs would be offered for it. Perhaps they expected to do a refresh with Sapphire Rapids, by now, but got sabotaged by Intel's trainwreck of trying to bring that product to market.

PCIe 4.0 was barely a thing, back then, but they clearly should've been more forward-thinking. Again, maybe they figured they'd do a refresh in 2 years, when PCIe 4.0 peripherals were actually available. The only other option would've been to use a ThreadRipper 3000-series, which seems like it would've been a better choice, all around.

It's entirely possible that, when they started designing that machine, they believed Intel would have Ice Lake ready in time. Ice Lake has PCIe 4.0 and, I think, was originally targeted to launch instead of Cascade Lake.

Oh, and proprietary connectors - how else do you expect Apple to trap users in their Walled Garden? IMO, that's just Apple's M.O. It's not good, but it's not new or different from anything they've done before. I think most non-Mac people tend to look at it and figure the Mac-heads must've long ago learned to accept such things.

It was a $1,200USD system in a $4,800 case. And then they added non-locking wheels for $400. They were trolling the remaining user base at that point.
Yeah, many a non-Mac user had a good laugh about that. The case seems amazing, but I'd never think of paying so much for what makes it good. Not to mention that it locks you into buying their peripherals.

I could hardly believe how Apple was fleecing its users, but I figured you've got to pay to play with Apple, and probably most people know that by now.

It was another trashcan Mac Pro - it was only good for two things - Logic & Final Cut.
Yes, I saw when it launched. I found it very intriguing. I'd never read about any real-world usage experiences with it, so it's a bit fascinating for me to hear. I definitely sympathize with your troubles.

To this day, I don't really understand why Macs have such loyalty. To me, it seems to have the dynamic of an abusive relationship.
 
Unlike you - I was running Macs (Mac Pro - before that Power Macs) at the time, and had been for almost 20 years at that point.

The base model shipped with an RX 580. Which was 2 generations back at launch. The top end shipped with a Vega - which was 1 generation back.

There is a difference between offering and shipping.

They shipped with an obsolete CPU (14nm+++++++) on a dead Intel socket - PCIe 3.0 I/O, obsolete video cards with proprietary power connectors.

It was a $1,200USD system in a $4,800 case. And then they added non-locking wheels for $400. They were trolling the remaining user base at that point.

It was another trashcan Mac Pro - it was only good for two things - Logic & Final Cut.

The 7,1 was the final FU to anyone not using Logic or Final Cut.

Best way to understand Apple is to not think of them as a technology company but as a luxury fashion company. They use the exact same business model as Gucci, Chanel and the rest, focusing on design, desirability and branding over value. Think about it, they turned a portable communication device into a fashion accessory and convinced millions of customers that they need to replace that fashion accessory every year or two to remain hip and in style.

Just look at their marketing campaign trying to convince people that Apple-branded personal computers were not "PCs".
 

bit_user

Best way to understand Apple is to not think of them as a technology company but as a luxury fashion company. They use the exact same business model as Gucci, Chanel and the rest, focusing on design, desirability and branding over value. Think about it, they turned a portable communication device into a fashion accessory and convinced millions of customers that they need to replace that fashion accessory every year or two to remain hip and in style.

Just look at their marketing campaign trying to convince people that Apple-branded personal computers were not "PCs".
Good marketing & design is not mutually exclusive with good technology.

I last used MacOS (for a job) in the bad old days of MacOS 8. It had no memory protection, and I'd hear from heavy Mac users that crashes were an almost daily occurrence. Then Apple acquired NeXT, which is how Steve Jobs got back in. NeXT brought with it a solid Mach/BSD-based operating system that became MacOS X. With that one move, they were able to leapfrog Microsoft in terms of having a fast, stable, and secure OS.

Apple has also long been at the forefront of multimedia. QuickTime had some leading technology, even if their implementation for Windows left much to be desired. They contributed to & popularized the AAC audio codec, which is far superior to MP3. Their Darwin media streamer and Final Cut Pro software were cutting-edge, when introduced.

Then, Apple were the ones who perfected big, multi-touch displays and brought them to mainstream smart phones. They pioneered phone-based video recording and face recognition-based screen unlock. They've long been at the leading edge of phone camera technology. More recently, their ARKit has been ahead of Google and Facebook/Oculus' efforts.

For sure, Apple leans heavily on branding and design, but that doesn't mean the underlying tech isn't generally solid. Just look at leading luxury car makers, like Mercedes and BMW - they emphasize style and luxury, but it doesn't mean their vehicles aren't technically cutting-edge.

I'm trying to understand your hostility. Is it threatening to think that Apple can have style and solid technology? I don't like how much their stuff costs, I don't like how much of it is proprietary and non-upgradable, I don't like their walled garden and extortionate app fees, but I don't let that cloud my understanding of what they do well (and poorly). It's only by watching where they succeed and fail that we can learn from it.
 

bit_user

BTW, what's ironic about this situation is that Apple got into trouble (again) by single-sourcing its CPUs. As repeatedly mentioned in this thread, their 2019-generation Mac Pro is stuck on <= 28-core CPUs and PCIe 3.0, likely owing to Intel's problems moving past 14 nm. If you remember before that, they had issues with IBM not advancing PowerPC fast enough. I don't recall why they switched from Motorola, but maybe similar reasons?

The ironic part is that their efforts to replace Intel with their own internal technology are also hitting snags. Maybe, if they'd tried to work with some ecosystem partners (e.g. Ampere Computing), they could've had a smoother transition. A hybrid model, so to speak, instead of trying to go it entirely alone.

Given how much proprietary stuff they typically have, I wonder if they have a strong Not-Invented-Here syndrome. Perhaps an overhang from Steve Jobs?
 
BTW, what's ironic about this situation is that Apple got into trouble (again) by single-sourcing its CPUs. As repeatedly mentioned in this thread, their 2019-generation Mac Pro is stuck on <= 28-core CPUs and PCIe 3.0, likely owing to Intel's problems moving past 14 nm. If you remember before that, they had issues with IBM not advancing PowerPC fast enough. I don't recall why they switched from Motorola, but maybe similar reasons?

The ironic part is that their efforts to replace Intel with their own internal technology are also hitting snags. Maybe, if they'd tried to work with some ecosystem partners (e.g. Ampere Computing), they could've had a smoother transition. A hybrid model, so to speak, instead of trying to go it entirely alone.

Given how much proprietary stuff they typically have, I wonder if they have a strong Not-Invented-Here syndrome. Perhaps an overhang from Steve Jobs?
It depends what they want. The Studio already spanks the Pro in some workloads, so do they really NEED a Mac Pro atm? PCIe 3.0 is still 16 GB/s, too; it's not exactly slow.
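For what it's worth, that 16 GB/s figure checks out for an x16 slot: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, per direction:

```c
// PCIe 3.0 x16 usable bandwidth, per direction.
#include <stdio.h>

int main(void) {
    double gt_per_lane = 8.0;            // PCIe 3.0 line rate, GT/s per lane
    double encoding    = 128.0 / 130.0;  // 128b/130b encoding efficiency
    double lanes       = 16.0;
    double gbytes_s    = gt_per_lane * encoding / 8.0 * lanes;
    printf("PCIe 3.0 x16: ~%.2f GB/s each way\n", gbytes_s);  // ~15.75 GB/s
    return 0;
}
```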
 

bit_user

It depends what they want. The Studio already spanks the Pro in some workloads, so do they really NEED a Mac Pro atm? PCIe 3.0 is still 16 GB/s, too; it's not exactly slow.
I think the priority is to finish the transition from x86. They & their community of software developers would like not to have to support their OS & apps on both architectures. As long as there's no new Mac Pro, they can't get rid of the x86 versions.

Plus, there's no denying that Cascade Lake (the <= 28-core, 14 nm CPUs it uses) has been superseded, and many won't upgrade until they move past it.
 
I think the priority is to finish the transition from x86. They & their community of software developers would like not to have to support their OS & apps on both architectures. As long as there's no new Mac Pro, they can't get rid of the x86 versions.

Plus, there's no denying that Cascade Lake (the <= 28-core, 14 nm CPUs it uses) has been superseded, and many won't upgrade until they move past it.
Or they can move to the Mac Studio; a Mac Pro either isn't happening or is miles away at this point. They have to leave the current Mac Pro there because of add-in cards, etc. The workloads that need the CPU power can transition to the Studio; the ones that require expandability can stick with the Pro. There's not really any overlap between those two.
 
