Mac Studio Review: M2 Ultra Powers a Small Workstation Wonder

Genuinely curious as to how warranties work.

When a normal critical-infrastructure workstation throws an SSD or PSU, for example, you replace it same-day, restore a backup if required, and keep on truckin'. As this is aimed at professionals but has zero replaceable parts, is there a set turnaround time included in the price?
 
  • Like
Reactions: artk2219

Giroro

Splendid
The fact you can't upgrade/repair/replace storage brings this from "sorta interesting little idea for a workstation" all the way down to "almost completely useless eWaste".
I mean, I'm sure it would be a perfectly fine office/email/Facebook computer for your grandma, but this is a workstation in the sense that an iPad Pro is for "professionals". Even an unmonetized YouTube channel needs "as much space for storage expansion as you can possibly afford to put into a box". 3D modeling needs storage options, coders need storage options, sciency data and AI workloads all need options for fast easily-replaceable storage. Enterprises/businesses need to be able to swap drives like candy.
Is this intended for, like maybe, non-critical prosumer music or podcast production? Because you can do that with literally any computer, Apple even markets that as a use-case for the iPad.

It's yet another consumer grade fashion statement. It's probably great for content consumption, but not creation.

Why do I rail on storage? Because Apple's storage options for the Mac Studio are some of the most comically overpriced SSDs in the history of man. An extra 1TB of storage over the base model costs $400:

Base price (lowest M2 Ultra + 1TB): $4,000
Upgrade to 2TB: +$400
Upgrade to 4TB: +$1,000
Upgrade to 8TB: +$2,200 (!!)
More than 8TB: doesn't exist
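For scale, the table above works out to the following incremental cost per added terabyte (a quick sketch; retail NVMe pricing varies, but quality drives commonly sell for well under $100/TB):

```python
# Incremental cost per added terabyte for the Mac Studio SSD tiers quoted
# above (the base configuration already includes 1 TB).
upgrades = {2: 400, 4: 1000, 8: 2200}  # capacity in TB -> upcharge in $

def dollars_per_added_tb(capacity_tb, upcharge):
    # Capacity added over the 1 TB that ships in the base model.
    return upcharge / (capacity_tb - 1)

for cap, price in sorted(upgrades.items()):
    print(f"{cap} TB tier: ${dollars_per_added_tb(cap, price):.0f} per added TB")
```

Even the "cheapest" per-TB tier still lands above $300 per added terabyte, several times what retail NVMe drives commonly cost.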
 
Last edited:
The fact you can't upgrade/repair/replace storage brings this from "sorta interesting little idea for a workstation" all the way down to "almost completely useless eWaste".
I mean, I'm sure it would be a perfectly fine office/email/Facebook computer for your grandma, but this is a workstation in the sense that an iPad Pro is for "professionals". Even an unmonetized YouTube channel needs "as much space for storage expansion as you can possibly afford to put into a box". 3D modeling needs storage options, coders need storage options, sciency data and AI workloads all need options for fast easily-replaceable storage. Enterprises/businesses need to be able to swap drives like candy.
Is this intended for, like maybe, non-critical prosumer music or podcast production? Because you can do that with literally any computer, Apple even markets that as a use-case for the iPad.

It's yet another consumer grade fashion statement. It's probably great for content consumption, but not creation.

Why do I rail on storage? Because Apple's storage options for the Mac Studio are some of the most comically overpriced SSDs in the history of man. An extra 1TB of storage over the base model costs $400:

Base price (lowest M2 Ultra + 1TB): $4,000
Upgrade to 2TB: +$400
Upgrade to 4TB: +$1,000
Upgrade to 8TB: +$2,200 (!!)
More than 8TB: doesn't exist
While I agree, I imagine if you're using this as a workstation you're using network-based storage for your projects. So you typically would need just enough storage locally for any assets that either can't be moved or would be a pain to move. This doesn't fix the issue, but it's just a piece of the larger whole. That said, there is absolutely no reason that this should not be upgradeable or serviceable; that comes from pure Apple stubbornness, lack of trust in their users, and greed. If you want to say it's because all of the parts are custom, then make it a standard and release documentation so that others can follow it, and shut your mouth.
 
  • Like
Reactions: jnjnilson6
While I agree, I imagine if you're using this as a workstation you're using network-based storage for your projects. So you typically would need just enough storage locally for any assets that either can't be moved or would be a pain to move. This doesn't fix the issue, but it's just a piece of the larger whole. That said, there is absolutely no reason that this should not be upgradeable or serviceable; that comes from pure Apple stubbornness, lack of trust in their users, and greed. If you want to say it's because all of the parts are custom, then make it a standard and release documentation so that others can follow it, and shut your mouth.

I was thinking the same thing: you would use a NAS anyway, with a large SSD cache. That's probably why they put 10GbE on this as well, which I would have expected to be 2.5GbE in this price range. 10GbE gives over 1GB/s of data transfer speed, which is plenty fast for most people's needs.
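As a sanity check on that 1GB/s figure, here's a rough sketch (the ~10% protocol overhead is an assumption; real-world numbers depend on the NIC, TCP tuning, and the NAS itself):

```python
# Rough effective throughput of a 10GbE link, and the time to pull a
# project of a given size over it. The overhead fraction is an assumption.
LINE_RATE_GBIT = 10
OVERHEAD = 0.10  # assumed Ethernet/IP/TCP framing and protocol overhead

effective_gb_per_s = LINE_RATE_GBIT / 8 * (1 - OVERHEAD)

def transfer_minutes(dataset_gb):
    # Minutes to move a dataset of the given size (decimal GB).
    return dataset_gb / effective_gb_per_s / 60

print(f"effective throughput: ~{effective_gb_per_s:.2f} GB/s")
print(f"1 TB project: ~{transfer_minutes(1000):.0f} min")
```

So fetching a 1TB project from the NAS takes on the order of fifteen minutes; fine for pulling assets, painful for moving whole libraries.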
 
  • Like
Reactions: artk2219

newtechldtech

Notable
Sep 21, 2022
311
115
860
The fact you can't upgrade/repair/replace storage brings this from "sorta interesting little idea for a workstation" all the way down to "almost completely useless eWaste".
I mean, I'm sure it would be a perfectly fine office/email/Facebook computer for your grandma, but this is a workstation in the sense that an iPad Pro is for "professionals". Even an unmonetized YouTube channel needs "as much space for storage expansion as you can possibly afford to put into a box". 3D modeling needs storage options, coders need storage options, sciency data and AI workloads all need options for fast easily-replaceable storage. Enterprises/businesses need to be able to swap drives like candy.
Is this intended for, like maybe, non-critical prosumer music or podcast production? Because you can do that with literally any computer, Apple even markets that as a use-case for the iPad.

It's yet another consumer grade fashion statement. It's probably great for content consumption, but not creation.

Why do I rail on storage? Because Apple's storage options for the Mac Studio are some of the most comically overpriced SSDs in the history of man. An extra 1TB of storage over the base model costs $400:

Base price (lowest M2 Ultra + 1TB): $4,000
Upgrade to 2TB: +$400
Upgrade to 4TB: +$1,000
Upgrade to 8TB: +$2,200 (!!)
More than 8TB: doesn't exist

An SSD upgrade is not a big deal for Macs: use empty external TB4/TB3 enclosures for the best SSD performance, or simply NVMe USB-C enclosures for acceptable performance.
 
  • Like
Reactions: artk2219

Carl Bicknell

Distinguished
Mar 21, 2013
1
2
18,510
Geekbench overstates Apple's performance. Once you take that out and look at Cinebench (Blender doesn't scale too well), the M2 Ultra is miles behind AMD & Intel.

I'd be very interested to see other well scaled benchmarks, like Stockfish.
 

bit_user

Titan
Ambassador
@AndrewFreedman , thanks for the review. I wish you could've run 3D Mark's Wildlife Unlimited benchmark, as it promises Windows and MacOS support!

Being a true graphics benchmark, it should be a much better comparison of GPU performance than the Geekbench GPU Compute test!

Also, please test Blender GPU performance on one of the PCs, with a Nvidia GPU.

the storage module is technically replaceable, but these aren't the same as regular store-bought SSDs
🙄

Power Supply 370W
Wow, that's an upgrade from the previous gen!

Geekbench 6.1.0 CPU Tests
Wow, look at how the 32-core ThreadRipper decisively beats a 56-core Xeon W! I think that really casts doubt on how well Geekbench 6.1.0 scales. That, in turn, calls into question the entire metric as a proxy for scalable workloads. Games, perhaps, but not, I think, more scalable workstation & server processing.

The Mac Studio's 4TB SSD blazed along on our file transfer test, transferring 25GB of files at a rate of 2,440.7 MBps.
You really ought to say how many files this involved, and whether it was an in-place copy (which I assume). If it was a relatively small number of large files, it doesn't seem terribly impressive. It'd be great to have stats from a couple of other SSDs for comparison.
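For scale, the quoted figures imply the whole test lasted only about ten seconds, which is part of why the file count matters (a quick sketch, using decimal units as drive vendors do):

```python
# How long the quoted transfer actually took: 25 GB at 2,440.7 MB/s.
size_mb = 25 * 1000        # 25 GB expressed in MB (decimal units)
rate_mb_per_s = 2440.7     # sustained rate reported in the review

elapsed_s = size_mb / rate_mb_per_s
print(f"~{elapsed_s:.1f} s for the whole 25GB test")
```

A ten-second burst of large sequential files tells you little about sustained or small-file performance.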

We put the M2 Ultra through its paces in Blender for 3D modelling and visual effects, using both the CPU and GPU test. The result here? Use the M2 Ultra's powerful GPU, which was far faster every time
OMG, couldn't you find a decent, recent Nvidia GPU to use for comparison?? They have first-class Blender support!

Finally, the look just doesn't work, for me. I think it looks like an oversized NUC. To me, something NUC-looking should be NUC-sized.
 
Last edited:
I was thinking the same thing: you would use a NAS anyway, with a large SSD cache. That's probably why they put 10GbE on this as well, which I would have expected to be 2.5GbE in this price range. 10GbE gives over 1GB/s of data transfer speed, which is plenty fast for most people's needs.
That still doesn't answer the question on downtime. If a 'normal' critical-use workstation throws a PSU or SSD, you can replace it and be back up and running in no time. Doesn't matter if the data is on a NAS, until it's fixed we are a workstation down - hence why I'm asking about warranted turnaround times on broken components. If I have to wait 3 weeks for an Apple repair, I need to have multiple unused spare workstations in stock on the shelves as replacements - which then starts to get massively expensive and wasteful as I spend several thousand dollars on spare kit that might never get used.
 
Workstation... 4TB of non-upgradable disk. EDIT: I just noticed it is, but with a huge asterisk. Doesn't change my mind.

I can't help but raise an eyebrow on that one... Isn't that like, super deal breaking for a WORKSTATION?

Sure, 10Gbit NIC, but have fun moving TBs of data through it.

I do not share that 4.5 star review given the obvious (to me at least) shortcomings of this product and the segment it targets, but oh welp. It looks interesting though, but I am not sure if I'd want this as a workstation unit myself.

Regards.
 
Last edited:
That still doesn't answer the question on downtime. If a 'normal' critical-use workstation throws a PSU or SSD, you can replace it and be back up and running in no time. Doesn't matter if the data is on a NAS, until it's fixed we are a workstation down - hence why I'm asking about warranted turnaround times on broken components. If I have to wait 3 weeks for an Apple repair, I need to have multiple unused spare workstations in stock on the shelves as replacements - which then starts to get massively expensive and wasteful as I spend several thousand dollars on spare kit that might never get used.

Apple typically doesn't fix things like that. They try to replace whole modules when they can, but most of the time if they can't fix it quickly, they'll just swap it for a refurbished unit to get you up and running right away. Hopefully they have a way to transfer the SSD data to the replacement unit.
 
  • Like
Reactions: artk2219
Apple typically doesn't fix things like that. They try to replace whole modules when they can, but most of the time if they can't fix it quickly, they'll just swap it for a refurbished unit to get you up and running right away. Hopefully they have a way to transfer the SSD data to the replacement unit.
I think in my use case it's a hard pass. I'd look to have minutes / 1-2 hours of downtime without leaving the building on critical use kit, or at worst a (potentially very expensive) callout contract under a timed SLA, similar to a bunch of our Cisco kit has where we get a replacement in <2 hours 24/7.

If there are no serviceable parts inside and the only repair is to take it somewhere, it's just not fit for purpose. Given the price and the spec of this system, I'm not sure a use case really exists where it is the compelling choice, and I can't fathom the positive review.
 
I think in my use case it's a hard pass. I'd look to have minutes / 1-2 hours of downtime without leaving the building on critical use kit, or at worst a (potentially very expensive) callout contract under a timed SLA, similar to a bunch of our Cisco kit has where we get a replacement in <2 hours 24/7.

If there are no serviceable parts inside and the only repair is to take it somewhere, it's just not fit for purpose. Given the price and the spec of this system, I'm not sure a use case really exists where it is the compelling choice, and I can't fathom the positive review.

I fully understand, but I think Apple targets more of the designer/artist type customers. It's not the end of the world if their computers take a day to get sorted. We have HP computers where I work and it can take 1-3 days for a technician to come out with a replacement part to fix a workstation.
 

andrep74

Distinguished
Apr 30, 2008
11
13
18,515
This article fails to compare the workstation against one with a decent graphics card, and ignores a better-specced possibility for less money. People spending this much could easily get a contemporary workstation with 512GB of RAM and 100GbE. The real appeal is that the power consumption is insanely low, countering other manufacturers' acquiescence to mining demand. Nevertheless, I'd hate for the trend to be set by people with a desire for expensive glass cannons sitting on their desks.
 

bit_user

Titan
Ambassador
That still doesn't answer the question on downtime. If a 'normal' critical-use workstation throws a PSU or SSD, you can replace it and be back up and running in no time.
Apple will sell you AppleCare, which includes On Site service. Or, you can take it to an Apple Store or other service location to have it fixed.


You raise a good point about reliability, which is that many workstation users value data integrity and therefore opt to use ECC memory. I haven't heard whether any of the M-series SoCs use proper ECC memory (i.e. not just the on-die ECC, built into DDR5).

I think in my use case it's a hard pass. I'd look to have minutes / 1-2 hours of downtime without leaving the building on critical use kit, or at worst a (potentially very expensive) callout contract under a timed SLA, similar to a bunch of our Cisco kit has where we get a replacement in <2 hours 24/7.
If you're that sensitive to downtime, then I'd agree it might not be right for you. But, another option would be to have a spare unit on-site.
 
  • Like
Reactions: artk2219

abufrejoval

Reputable
Jun 19, 2020
441
301
5,060
The fact you can't upgrade/repair/replace storage brings this from "sorta interesting little idea for a workstation" all the way down to "almost completely useless eWaste".
I mean, I'm sure it would be a perfectly fine office/email/Facebook computer for your grandma, but this is a workstation in the sense that an iPad Pro is for "professionals". Even an unmonetized YouTube channel needs "as much space for storage expansion as you can possibly afford to put into a box". 3D modeling needs storage options, coders need storage options, sciency data and AI workloads all need options for fast easily-replaceable storage. Enterprises/businesses need to be able to swap drives like candy.
Is this intended for, like maybe, non-critical prosumer music or podcast production? Because you can do that with literally any computer, Apple even markets that as a use-case for the iPad.

It's yet another consumer grade fashion statement. It's probably great for content consumption, but not creation.

Why do I rail on storage? Because Apple's storage options for the Mac Studio are some of the most comically overpriced SSDs in the history of man. An extra 1TB of storage over the base model costs $400
I'd like Apple to be permanently sued for usury until they sell storage at market prices, but of course Apple isn't selling storage; they are selling something much more innovative and distinct...

In the case of a notebook, storage that dies with the notebook SoC simply isn't acceptable, because you might have done significant work offline.

But in the case of a workstation, rule #1 applies: never work without a backup.

And in fact I keep my workstations pretty near stateless, mostly because I move around between them so much. It might be really old fashioned, but I do pretty much everything off a file share. And that file share is protected via RAID, near-real-time backups and then offline backups, because in the old days storage tended to fail when you could least afford it. Those painful lessons forged my habits.

My internal network has been 10Gbit for many years to support that, and it actually no longer keeps pace with NVMe, but I'd rather wait a few seconds now and then and feel reassured that my data will most likely be safe.

My last Apple was a ][, and perhaps one of the most positive effects of Apple's pricing policy is that no employer ever forced MacOS on me; I'd probably rather refuse and resign than use it.

But I sure wouldn't mind using this workstation's hardware with a few slight modifications: NVMe storage in proper M.2 slots for one and the ability to run any ARM capable OS I'd want.

With the Mx architecture the lack of expandability of RAM has an acceptable technical base for the first time and 192GB is pretty close to the 128GB I put into my workstations, too.

Except that I currently would have to pay not much more than €$200 for 128GB with ECC on x86.

Apple's solution of offering near-GPU bandwidth with lots of cheap DRAM channels on the die carrier is very smart and much more price-efficient than HBM or VRAM. But beyond an IP bonus, those price efficiencies should be passed on to consumers in a proper market, which Apple avoids with measures that are not compatible with market rules. And that behaviour needs a proper "reward".

With TB ports, local storage expansion becomes a bit moot on a workstation. Sure, it won't look that great with all these drives sticking out front and back, but I keep my workstations below my desk and could not care less about their looks. I'm much more interested in optimizing heat and noise per unit of compute, and that's where the chips would be interesting, even if the machine is not.
 

abufrejoval

Reputable
Jun 19, 2020
441
301
5,060
That still doesn't answer the question on downtime. If a 'normal' critical-use workstation throws a PSU or SSD, you can replace it and be back up and running in no time. Doesn't matter if the data is on a NAS, until it's fixed we are a workstation down - hence why I'm asking about warranted turnaround times on broken components. If I have to wait 3 weeks for an Apple repair, I need to have multiple unused spare workstations in stock on the shelves as replacements - which then starts to get massively expensive and wasteful as I spend several thousand dollars on spare kit that might never get used.
The first "workstations" I used were SPARCstations: diskless and noiseless Sun SLCs that held only a CPU and RAM inside a monitor's backside, and booted and operated off the network. I don't know if that was 100Mbit/s or still thick Ethernet at 10Mbit/s; I believe CPU clocks were around 33-40 MHz, but at least they were true 32-bit machines and even paged over Ethernet. You could swap the keyboard and the mouse, but otherwise there were no user-serviceable parts on these things.


It gave you the local compute power you needed for a (monochrome) graphical user interface and a (for the time) super responsive system, yet it also returned the power of the mainframe terminals, where it didn't matter which one you used, because all your code and data was in one central space. It also gave you a huge number of these relatively small yet powerful systems, as well as some faster and bigger but 100% compatible SPARCservers to hold your data, which you could use from your own workstation simply by adding an "on <system>" clause to any command or part of a pipe.

If your single workstation was too slow for a serial set of compiles or computes, you just had to invest a bit of brain power to farm the work out to all the others, and it became a bit of a game of who could use all the disposable compute at once...

That completely stateless mode of operation with the perfect local responsiveness of local compute and "cluster/cloud" expandability has forged some significant paths in my brain and to me still represents an ideal environment.

I've always faulted Microsoft for tying their OS to a disk for copy protection and crippling network boot and operation as a result. Even more importantly, I believe it held back the evolution of networks, which have lagged far behind the bandwidths available for storage, even when that storage was also distributed, as in the case of SANs.

So for me a real workstation is stateless and thus ready to be swapped as a whole, without the user losing much if anything in the meantime, least of all time or productivity. Whether that workstation is then fixed from local spare parts or shipped back whole is a logistics detail, likely to vary with where and to whom you deploy workstations and their range of capabilities.

If we used Thunderbolt/PCIe based networks instead of Ethernet, this stateless mode of operation would be much more natural. 10Gbit networks are much better than all their ancestors, but no match for NVMe class bandwidth and PCIe class latencies.
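To put numbers on that gap, a small sketch comparing 10GbE line rate with a single PCIe 4.0 x4 NVMe link (the ~1.97 GB/s usable-per-lane figure is the commonly cited post-encoding rate, taken here as an assumption):

```python
# Bandwidth gap between a 10GbE link and a PCIe 4.0 x4 NVMe connection.
ten_gbe_gb_s = 10 / 8        # 10 Gbit/s line rate -> 1.25 GB/s
pcie4_lane_gb_s = 1.969      # usable per PCIe 4.0 lane after 128b/130b encoding
pcie4_x4_gb_s = 4 * pcie4_lane_gb_s

ratio = pcie4_x4_gb_s / ten_gbe_gb_s
print(f"10GbE:    {ten_gbe_gb_s:.2f} GB/s")
print(f"PCIe4 x4: {pcie4_x4_gb_s:.2f} GB/s (~{ratio:.0f}x the network link)")
```

Even before counting latency, a single consumer NVMe drive has roughly six times the bandwidth of the network link it would be accessed over.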
 
  • Like
Reactions: artk2219
The first "workstations" I used were SPARCstations: diskless and noiseless Sun SLCs that held only a CPU and RAM inside a monitor's backside, and booted and operated off the network. I don't know if that was 100Mbit/s or still thick Ethernet at 10Mbit/s; I believe CPU clocks were around 33-40 MHz, but at least they were true 32-bit machines and even paged over Ethernet. You could swap the keyboard and the mouse, but otherwise there were no user-serviceable parts on these things.


It gave you the local compute power you needed for a (monochrome) graphical user interface and a (for the time) super responsive system, yet it also returned the power of the mainframe terminals, where it didn't matter which one you used, because all your code and data was in one central space. It also gave you a huge number of these relatively small yet powerful systems, as well as some faster and bigger but 100% compatible SPARCservers to hold your data, which you could use from your own workstation simply by adding an "on <system>" clause to any command or part of a pipe.

If your single workstation was too slow for a serial set of compiles or computes, you just had to invest a bit of brain power to farm the work out to all the others, and it became a bit of a game of who could use all the disposable compute at once...

That completely stateless mode of operation with the perfect local responsiveness of local compute and "cluster/cloud" expandability has forged some significant paths in my brain and to me still represents an ideal environment.

I've always faulted Microsoft for tying their OS to a disk for copy protection and crippling network boot and operation as a result. Even more importantly, I believe it held back the evolution of networks, which have lagged far behind the bandwidths available for storage, even when that storage was also distributed, as in the case of SANs.

So for me a real workstation is stateless and thus ready to be swapped as a whole, without the user losing much if anything in the meantime, least of all time or productivity. Whether that workstation is then fixed from local spare parts or shipped back whole is a logistics detail, likely to vary with where and to whom you deploy workstations and their range of capabilities.

If we used Thunderbolt/PCIe based networks instead of Ethernet, this stateless mode of operation would be much more natural. 10Gbit networks are much better than all their ancestors, but no match for NVMe class bandwidth and PCIe class latencies.
I'm old enough to have worked on SPARCs in Ericsson's R&D centre in Ireland, from memory. That way of working is replicated now by major services such as Microsoft Dynamics being web-based, so they can be accessed from anything. But you don't buy Apple kit like this just to run a browser. So I'm still not sure how a $4k workstation that cannot be repaired onsite has any place in a business. You don't spend that much on a user unless it's critical use, and if it *is* critical use, you need to cover downtime, which multiplies the cost. Maybe if the office is only open 9-5 and next to an Apple Store or something... For any business that has to consider TCO, which includes downtime and maintenance, I don't see how you would justify it over almost anything else.
 
  • Like
Reactions: artk2219

bit_user

Titan
Ambassador
So I'm still not sure how a $4k workstation that cannot be repaired onsite has any place in a business.
My employer uses Dell workstations, and I think we don't repair them onsite; at least sometimes, we ship them back to Dell. Some models that people can easily get cost $4k or more (or would, if we didn't lease them), and it's just a decision for your manager about whether they think your job justifies such a machine.

You don't spend that much on a user unless it's critical use
I can say it seems a bit ridiculous to quibble over $4k, depreciating over 3 years, for an employee who could easily cost 100x that much (factoring in pay + benefits + facilities, etc.) in that time. Way back when $100k went a lot further, companies would easily spend that kind of money on EDA and CAD workstations.

if it *is* critical use, you need to cover downtime - which multiplies the cost. Maybe if the office is only open 9-5 and next to an Apple Store or something...
My team doesn't keep hot spares on site. If someone's machine breaks, they'd set up and make do with an old one while the broken one is getting replaced.
 
  • Like
Reactions: artk2219

abufrejoval

Reputable
Jun 19, 2020
441
301
5,060
I'm old enough to have worked on SPARCs in Ericsson's R&D centre in Ireland, from memory. That way of working is replicated now by major services such as Microsoft Dynamics being web-based, so they can be accessed from anything. But you don't buy Apple kit like this just to run a browser. So I'm still not sure how a $4k workstation that cannot be repaired onsite has any place in a business. You don't spend that much on a user unless it's critical use, and if it *is* critical use, you need to cover downtime, which multiplies the cost. Maybe if the office is only open 9-5 and next to an Apple Store or something... For any business that has to consider TCO, which includes downtime and maintenance, I don't see how you would justify it over almost anything else.
To me a Web UI is the modern equivalent of a block-mode terminal: you receive and submit a form with all sorts of attributes, and apart from some static format and parameter checking there is no local computation. Very flat, very light; a Chromebook might be overdoing it.

The stateless workstation is very different: in a way it's a block-mode terminal on steroids, because the interface to the host is still well defined and, in a way, fixed. Except that it's executing much more logic locally, including perhaps some extremely sophisticated rendering or machine-learning interaction with the user.

That user could be creating a 3D masterpiece, a bot so smart and well-behaved that Downton Abbey's Carson would get jealous, or just a video that blows your mind. But it can be persisted on storage in an abstraction that another workstation can read and run without missing more than one or two beats.

If a workstation use case scales sufficiently, someone will invest the effort in making the job it performs stateless, or rather in persisting it externally to the machine it ran on. If not, perhaps you'll need to buy a different type of workstation.
 
  • Like
Reactions: artk2219

bit_user

Titan
Ambassador
To me a Web UI is the modern equivalent of a block-mode terminal: you receive and submit a form with all sorts of attributes, and apart from some static format and parameter checking there is no local computation. Very flat, very light; a Chromebook might be overdoing it.
The web has changed a lot in the past 25 years, even if your knowledge of it hasn't. Two recent examples demonstrate just how far browsers have come as a client-side computing platform:

Those are just the most extreme examples. Web browsers also support APIs for things like video decoding, which I dare say is a lot more local computation than "static formatting and parameter checking".

If a workstation use case scales sufficiently, someone will invest the effort in making the job it performs stateless, or rather in persisting it externally to the machine it ran on. If not, perhaps you'll need to buy a different type of workstation.
Here's a 3rd option: cloud-hosted desktops.
 
  • Like
Reactions: artk2219
Jul 6, 2023
1
2
15
This review unfortunately makes the Mac Studio look more reasonably priced than it really is by limiting its comparisons to only two of the most expensive CPUs made by AMD and Intel. This allowed the author to conclude that as much or even more money would need to be spent building an AMD or Intel system with equal performance, which is completely false.

The truth is that for a little over $1,000, you can build a PC using, for example, the AMD 7950X that will outperform the Apple M2 Ultra in Cinebench R23 or other CPU benchmarks. As for GPU performance in Blender, even first-gen Nvidia RTX cards have no problem equaling or beating the Apple M2 Max, and a modern 4080 will beat the M2 Ultra. So for around $2,000 total you can easily build a PC that will outperform Apple's $4,000-$6,000 Mac Studio configurations.
 
