News Don't Believe the Hype: Apple's M2 GPU is No Game Changer

Apple are the Kings of Marketing. They could sell you a 1987 Yugo and convince you that it's really a 2022 Acura NSX. I raised an eyebrow when the Capcom rep went on about how the M2 running Resident Evil Village is right up there with both the PS5 and Xbox Series X.

As far as ARM mobile design goes, Apple is King. That's not hype; they legitimately have the best ARM designs. However, their marketing people are just as evil as Intel's, trying to steal away x86 market share with dirty tactics.

They should stick to the M1 and M2 merits that are legit good, such as amazing battery life in a laptop, especially when doing content creation compared to x86 counterparts.
 
Jun 7, 2022
Not every chip is meant to be a game changer. Apple always keeps their line up to date. These machines will work wonderfully for those whose needs they fit. Comparing the graphics performance of the M2 to dedicated graphics cards is completely unfair and unnecessary, and just a tactic to stir up the industry-standard Apple criticism.

The fact is these chips have put a lot of power into the hands of hobbyists/amateurs/students who are exploring new skills and may upgrade as they go deeper. That's who these chips are for, the average consumer; that's why they are going into the lowest-level laptops Apple has available. Also note that at the top of the article it says that they won't be comparing the M2 to the M1 Max or Pro, and later on they do just that.

I use excellent PCs at work that blow my Mac out of the water. I still like my Mac for what I use it for. You wouldn't compare a crap PC to an excellent PC. The difference is Apple doesn't make crappy Macs. Apple continues to make excellent, reliable, long-term machines for consumers and pros alike. The M2 continues to keep these machines updated. Also keep in mind the niche that Apple Silicon is carving out for itself is the highest processing power at the lowest possible power consumption, and they're doing it well.
 
I never said I wouldn't compare the M2 with M1 Pro/Max/Ultra, just that Apple didn't make any such comparisons. For obvious reasons, since the M1 Pro/Max are substantially more potent than the base M1, though we can expect Apple will eventually create higher tier M2 products as well. I also compared the M2 GPU to AMD's Ryzen 7 6800U, which is also a 15W chip. But you're probably not here to reason with. LOL
 
From the slides Apple has been pushing out, along with other people's takes on them, I think Apple isn't so much focused on absolute performance. They're focused on higher efficiency. Sure, an M2 cannot get the same performance as whatever 10- or 12-core Alder Lake Intel part they used, but they claim that for some relative performance metric (it looks like they capped it at 120), it uses 25% of the power of the part they're comparing to. And the chart shows the part they're comparing to can get another 20 points, but it adds another ~60% power. Granted, while we won't really know what tests they ran, I've seen similar behavior on my computer simply by not allowing the parts to boost as fast as they can.
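Putting that chart reading into rough numbers (all values below are read off the slide as described above; they're assumptions, not measurements):

```python
# Rough sanity check on Apple's efficiency chart, using the values as read
# off the slide above (assumed chart readings, not measured data).

M2_PERF = 120          # relative performance where Apple caps the M2 curve
M2_POWER = 0.25        # claimed fraction of the PC chip's power at that point

PC_PERF_AT_MATCH = 120     # PC chip performance at the matching point
PC_PERF_PEAK = 140         # chart shows roughly 20 more points at the top...
PC_EXTRA_POWER = 0.60      # ...for roughly 60% more power

# Normalize the PC chip's power at the matching point to 1.0
pc_power_at_match = 1.0
pc_power_peak = pc_power_at_match * (1 + PC_EXTRA_POWER)

m2_perf_per_watt = M2_PERF / M2_POWER
pc_perf_per_watt_match = PC_PERF_AT_MATCH / pc_power_at_match
pc_perf_per_watt_peak = PC_PERF_PEAK / pc_power_peak

print(f"M2 relative perf/W: {m2_perf_per_watt:.0f}")
print(f"PC relative perf/W at matching performance: {pc_perf_per_watt_match:.0f}")
print(f"PC relative perf/W at peak: {pc_perf_per_watt_peak:.0f}")
# With these readings the M2 is ~4x the efficiency at matched performance,
# and the PC part's last ~17% of performance costs ~60% more power.
```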

Although I feel like that's what they were pushing with the M1, and they just happened to figure out how to fudge the numbers to pretend they also have some absolute performance crown.

EDIT: I'm trying to find a review that did a power consumption test on the Samsung Galaxy Book2 360 laptop they used. While Intel's spec sheet says the i7-1255U's maximum turbo power is 55W, it certainly won't stay there. Unfortunately, I can't find anything.
 

aalkjsdflkj

Jun 30, 2018
They should stick to the M1 and M2 merits that are legit good, such as amazing battery life in a laptop, especially when doing content creation compared to x86 counterparts.

Excellent point. Each of the new Mac SoCs is great for the market it's targeting. They legitimately do some things, particularly efficiency, better than anyone else right now. I don't understand why they try to deceive consumers with ridiculous claims, like saying the M1 Ultra was faster than a 3090. While that was arguably the most egregious false or misleading claim last time around, there are plenty of others. I haven't even bothered to look at the M2 claims because I know people like the TH folks will provide comprehensive reviews.
 

Yes, efficiency under heavy load is where the ARM chips create the biggest performance gap compared to their x86 rivals.

I wonder how long the M2 will last while gaming on battery power? Tomshardware and other reviewers should test that scenario. People commute on trains and planes all the time, it would be nice to see if it'll last their entire commute. My laptops are lucky to last an hour while gaming with battery power, and they're heavily throttled.
 
You wouldn't compare a crap PC to an excellent PC. The difference is Apple doesn't make crappy Macs.
....prior to M1, pretty much all of them were crappy:

overpriced for the performance you got (especially their trash can & cheese grater)
&
thermally throttled all the time (because they refuse to let people control fan speed properly, à la the Apple "my way or the highway" mentality).

And yes, we do compare crappy PCs to good PCs (so you know what the better price-to-performance gain/loss is, or their improvements over the last gen).

Apple arguably does "performance graphs" worse than Intel (who has a habit of making bad graphs), as they refuse to say what they are compared to. "Random [insert core count # here] core alternative" gives you nothing, as it could be anything, running any configuration, possibly stock stats, etc.
 
I wonder how long the M2 will last while gaming on battery power? Tomshardware and other reviewers should test that scenario. People commute on trains and planes all the time, it would be nice to see if it'll last their entire commute. My laptops are lucky to last an hour while gaming with battery power, and they're heavily throttled.
While it's not a gaming load, I ran across a YouTube video reviewing the M1 MacBook Pro after 7 months that mentioned being able to edit (presumably 4K) videos for 5 hours, compared to about 1-1.5 hours on the 2019 Intel MacBook Pro.

"random [insert core count # here] core alternative" gives u nothing as it could be anything, running any configuration, possibly stock stats, etc etc.)
In the press release, there are a few footnotes:
  1. Testing conducted by Apple in May 2022 using preproduction 13-inch MacBook Pro systems with Apple M2, 8-core CPU, 10-core GPU, and 16GB of RAM; and production 13-inch MacBook Pro systems with Apple M1, 8-core CPU, 8-core GPU, and 16GB of RAM. Performance measured using select industry‑standard benchmarks. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.
  2. Testing conducted by Apple in May 2022 using preproduction 13-inch MacBook Pro systems with Apple M2, 8-core CPU, 10-core GPU, and 16GB of RAM. Performance measured using select industry‑standard benchmarks. 10-core PC laptop chip performance data from testing Samsung Galaxy Book2 360 (NP730QED-KA1US) with Core i7-1255U and 16GB of RAM. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.
  3. Testing conducted by Apple in May 2022 using preproduction 13-inch MacBook Pro systems with Apple M2, 8-core CPU, 10-core GPU, and 16GB of RAM. Performance measured using select industry‑standard benchmarks. 12-core PC laptop chip performance data from testing MSI Prestige 14Evo (A12M-011) with Core i7-1260P and 16GB of RAM. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.
The problem is they don't tell us what the "industry-standard benchmarks" are.
 

JamesJones44

Jan 22, 2021
Yes, efficiency under heavy load is where the ARM chips create the biggest performance gap compared to their x86 rivals.

I wonder how long the M2 will last while gaming on battery power? Tomshardware and other reviewers should test that scenario. People commute on trains and planes all the time, it would be nice to see if it'll last their entire commute. My laptops are lucky to last an hour while gaming with battery power, and they're heavily throttled.

The M1 Max with 3x the GPU cores maxes out at 76 watts without High Power Mode (only available on MacBook M1 Max and Ultra at the moment). The power adapter is only rated at 35 watts with the 10-GPU-core version, so I wouldn't think it would exceed that. Theoretically, if everything ran at 100%, your battery would be gone in about 2 hours, but that's probably not realistic. Back-of-napkin math probably puts you in the 4-to-6-hour range given the 52.6 Wh battery with a 30% load for the entire SoC.
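Spelling out that napkin math (battery capacity and adapter rating from the specs above; the load fractions are just assumptions):

```python
# Back-of-napkin battery life estimate for a sustained load on the M2 MacBook Pro.
# Inputs are the figures from the discussion above, not measurements.

BATTERY_WH = 52.6    # 13-inch MacBook Pro battery capacity (Wh)
ADAPTER_W = 35.0     # bundled adapter rating, used here as a ceiling for system draw

def hours(load_fraction):
    """Estimated runtime if the machine draws load_fraction of the adapter rating."""
    return BATTERY_WH / (ADAPTER_W * load_fraction)

print(f"100% of adapter rating: {hours(1.0):.1f} h")  # ~1.5 h if it sustained the full 35 W
print(f"50% load: {hours(0.5):.1f} h")                # ~3 h
print(f"30% load: {hours(0.3):.1f} h")                # ~5 h, the 4-to-6-hour ballpark above
```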
 

magbarn

Dec 9, 2020
The M1 Max with 3x the GPU cores maxes out at 76 watts without High Power Mode (only available on MacBook M1 Max and Ultra at the moment). The power adapter is only rated at 35 watts with the 10-GPU-core version, so I wouldn't think it would exceed that. Theoretically, if everything ran at 100%, your battery would be gone in about 2 hours, but that's probably not realistic. Back-of-napkin math probably puts you in the 4-to-6-hour range given the 52.6 Wh battery with a 30% load for the entire SoC.
Apple has been supplying underpowered chargers for years. Their designs can pull more than the power brick can put out, so they rely on battery power to make up the difference. You can't base the TDP of the M2 on the power brick alone.
 

JamesJones44

Jan 22, 2021
Apple has been supplying underpowered chargers for years. Their designs can pull more than the power brick can put out, so they rely on battery power to make up the difference. You can't base the TDP of the M2 on the power brick alone.

No one is talking about TDP, which is heat dissipation, not power consumption. Also, I've seen no evidence of them shipping underpowered adapters; there are several tests of recent Mac releases that all use less power than their adapters. The M1 Max uses a max of 110 watts with High Power Mode enabled and it ships with a 140-watt adapter. The M1 MacBook Air maxed out at 25 watts and shipped with a 30-watt adapter. You can find all of this at AnandTech if you feel the need to verify.
 
Apr 1, 2020
Apple are the kings of "If we make it, people will buy it". Remember:

(attached image: apple.jpeg)


But still, if it is able to deliver 35% more performance, that's a huge generational improvement that I don't think even AMD has ever matched (without looking it up); the only thing that comes close is the leap from Piledriver to Zen 1, which was a brand-new architecture.

And that's something AMD and Nvidia need to take note of so as not to become the next Samsung.
 

watzupken

Mar 16, 2020
To me, the M1 was the game changer, as it was the first good ARM-based chip that people weren't expecting Apple to pull off. I wasn't expecting the M2 to deliver that kind of shock to the industry, since it is clearly just an upgraded version of the M1. So I don't know what kind of "game changer" we are talking about/looking for here. Using the node as a gauge of what is a "game changer" is not accurate either, even though the writer said he wasn't expecting much since it's going from N5 to N5P. Intel is pretty much stuck in the same situation, yet we can observe marked improvement moving from Tiger Lake to Alder Lake due to architecture changes. I think most people can't tell, since Apple is always secretive about such things. Having said that, the performance bump mentioned sounds fairly healthy to me. Of course, these improvements need to be independently validated as well.
GPU-wise, I really don't think it matters. If you are getting a Mac-based system, gaming should be either a secondary or tertiary reason. Most games don't run on macOS, and the remaining ones that work mostly run through Rosetta, with very few running natively. I guess the GPU is mostly there to drive productivity work like video editing, and less for gaming. That 2-core bump in the GPU should ideally result in an improvement in software that uses the GPU, even if we assume no changes to the GPU architecture and clock speed.
Bottom line, the advertised performance improvement sounds reasonable, at least to me. You don't find "game changing" products often, though it really depends on the definition of "game changing." As transistors get smaller, we can already observe that it is increasingly slow and hard to shrink them further. These 5nm, 7nm, X nm labels are all meaningless measures because the actual transistor size does not match the naming. At least from my observations, 5nm doesn't seem like a big improvement over 7nm, even though it sounds great on paper. Most if not every time we see a performance improvement, it usually comes at the expense of higher power draw or is due to some change in the architecture. If we look at Intel's Tiger Lake and Alder Lake, the P-cores on Alder Lake have been proven to be substantially more power hungry than Tiger Lake's if we look at the % increase in power requirement. It performs exceptionally well in multithreaded situations over Tiger Lake only because of the "efficient" cores, which bump the number of physical cores up.
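To put rough numbers on that core bump: the headline claim being discussed is "up to 35% faster" GPU performance, and the split below is just arithmetic under an idealized linear-scaling assumption, not anything Apple has stated.

```python
# How much of a ~35% GPU gain the extra cores alone could explain,
# assuming performance scales linearly with core count (an idealization).
M1_GPU_CORES = 8
M2_GPU_CORES = 10
CLAIMED_GAIN = 1.35      # the headline "up to 35% faster GPU" claim

core_scaling = M2_GPU_CORES / M1_GPU_CORES   # 1.25x from core count alone
remaining = CLAIMED_GAIN / core_scaling      # what clocks/architecture must add

print(f"Core-count scaling: {core_scaling:.2f}x")
print(f"Implied clock/architecture gain: {remaining:.2f}x (~{(remaining - 1) * 100:.0f}%)")
```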
 

kaalus

Apr 23, 2008
You compare a 15W GPU to a 150W GPU and slam it for only being half as fast.

I'm not an Apple fanboy and only boot into MacOS when I have to compile some iOS code, but it's definitely gamechanging that Apple provides decent performance with no ridiculous fan whine of the so-called "gaming laptops" that I would not touch with a bargepole.
 
Jun 8, 2022
I think the key takeaway is that these are raw performance numbers. Even on the most optimized Windows PC, software is rarely optimized because it has to be compatible with a super long list of hardware; boilerplate, abstraction, and polymorphism can kill a huge amount of performance, especially in the OS and the drivers themselves. On the other hand, macOS, its drivers, and much of its software have the chance to optimize their code for just one type of hardware architecture, which is perfect for an ARM-type CPU. I do coding in the cloud, and I've found that developers coming from other platforms sometimes bring their overkill layers of abstraction and polymorphism onto a process-driven platform and slow business applications down by up to 500x without knowing it. If I simply optimize with just a process- and event-driven design, I can reduce the latency or increase performance by 5-10x. And that's just business applications; imagine the impact of a low-level language on specialized hardware. So I think if things are optimized on the Apple side, performance may be able to catch up to 80 or even 90% of, or in some scenarios go beyond, the equivalent on the PC side.
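A toy sketch of the kind of abstraction overhead being described; this only illustrates the general effect, and doesn't reproduce the 500x or 5-10x figures above.

```python
# Toy illustration of the overhead layered abstractions can add.
import time

class Value:
    """Polymorphic wrapper standing in for an over-abstracted data model."""
    def __init__(self, x):
        self.x = x
    def get(self):
        return self.x

def sum_abstracted(values):
    total = 0
    for v in values:
        total += v.get()        # method call + attribute lookup on every element
    return total

def sum_direct(raw):
    return sum(raw)             # tight built-in loop over plain numbers

raw = list(range(1_000_000))
wrapped = [Value(x) for x in raw]

t0 = time.perf_counter(); sum_abstracted(wrapped); t1 = time.perf_counter()
t2 = time.perf_counter(); sum_direct(raw); t3 = time.perf_counter()

print(f"abstracted: {t1 - t0:.3f}s, direct: {t3 - t2:.3f}s, "
      f"ratio: {(t1 - t0) / (t3 - t2):.1f}x")
```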
 
Jun 7, 2022
I never said I wouldn't compare the M2 with M1 Pro/Max/Ultra, just that Apple didn't make any such comparisons. For obvious reasons, since the M1 Pro/Max are substantially more potent than the base M1, though we can expect Apple will eventually create higher tier M2 products as well. I also compared the M2 GPU to AMD's Ryzen 7 6800U, which is also a 15W chip. But you're probably not here to reason with. LOL
On the contrary, I'm here for a reasonable discussion. Okay, I see that now, my bad. And I see a little more clearly what the intent of this article was now. You're a graphics guy, and making graphics comparisons is totally fair. But I think your bias is coming out in your article and rhetoric. I could be wrong. My bias was definitely in my comment. My point is that Apple markets these as power-efficient chips and it's unfair to make it more than that. If you look at the graphs Apple makes to demonstrate the performance of the M1 and M2 chips, they are designed to highlight the power consumption at similar performance to other chips. They don't always specify what they are comparing it to or don't always show the full processing power of the compared chip, but that's not the point. They are simply trying to demonstrate power efficiency, not raw power. However, because of the integrated nature of their own SoC combined with the rest of their hardware and software, people are getting real-world performance that exceeds expectations.
 

husker

Oct 2, 2009
You compare a 15W GPU to a 150W GPU and slam it for only being half as fast.

I'm not an Apple fanboy and only boot into MacOS when I have to compile some iOS code, but it's definitely gamechanging that Apple provides decent performance with no ridiculous fan whine of the so-called "gaming laptops" that I would not touch with a bargepole.

I don't think the article "slammed" the chip. In fact, the article had a lot of nice things to say about it. The author didn't choose to compare a 15W GPU to a 150W one out of spite, but was just managing expectations for potential buyers. I know there are plenty of tech-savvy Apple users, but there are many more who are not. I think the intent was to counter a lot of the marketing and/or word-of-mouth hype and warn anyone looking for a new gaming laptop to put things in the proper perspective.
 
Jun 7, 2022
....prior to M1, pretty much all of them were crappy:

overpriced for the performance you got (especially their trash can & cheese grater)
&
thermally throttled all the time (because they refuse to let people control fan speed properly, à la the Apple "my way or the highway" mentality).

And yes, we do compare crappy PCs to good PCs (so you know what the better price-to-performance gain/loss is, or their improvements over the last gen).

Apple arguably does "performance graphs" worse than Intel (who has a habit of making bad graphs), as they refuse to say what they are compared to. "Random [insert core count # here] core alternative" gives you nothing, as it could be anything, running any configuration, possibly stock stats, etc.
These are tired arguments against old machines. I could quote tired Mac fanboy responses to these criticisms, but what good would it do? Your reasons here for not liking Macs are totally legitimate. The reasons people like Macs are also totally legitimate. Programmers, creatives, and average consumers have used Macs for years, loved them, and accomplished great things with them. The same can be said of PCs. For people who like doing things Apple's way, they work great. If you don't, that's fine.

I guess what I meant is that crappy PCs are meant to do a limited number of things on a tight budget, so you wouldn't criticize one for not being an excellent PC. A Mac has a different "skillset" from a PC, so the comparisons here aren't quite 1:1.

As far as graphs go, the graphs Apple has been using since the M1 aren't performance graphs. At least not entirely. They measure performance and power consumption, and if you were watching the presentation, they consistently highlighted power efficiency at similar performance to other chips. Apple silicon is about efficiency, not raw power, and I personally thought they did a better job of highlighting that in this presentation than they have in the past. But some people are only interested in raw power, so that's what they look for in those graphs, and it's just not there. Asking what they are comparing the M2 to is fair, but also not entirely relevant, since not many other chips I know of are trying to carve out a niche in power efficiency. I could be wrong.
 
On the contrary, I'm here for a reasonable discussion. Okay, I see that now, my bad. And I see a little more clearly what the intent of this article was now. You're a graphics guy, and making graphics comparisons is totally fair. But I think your bias is coming out in your article and rhetoric. I could be wrong. My bias was definitely in my comment. My point is that Apple markets these as power-efficient chips and it's unfair to make it more than that. If you look at the graphs Apple makes to demonstrate the performance of the M1 and M2 chips, they are designed to highlight the power consumption at similar performance to other chips. They don't always specify what they are comparing it to or don't always show the full processing power of the compared chip, but that's not the point. They are simply trying to demonstrate power efficiency, not raw power. However, because of the integrated nature of their own SoC combined with the rest of their hardware and software, people are getting real-world performance that exceeds expectations.
Fair enough, but the big issue with Apple is that they set up straw men to then knock down.

"3.3 times faster GPU than the last Intel-based MacBook Pro!"

Yeah, but SO WHAT!? The previous Intel MacBook Pro had a totally anemic Intel GPU that can't even come close to competing with anything AMD and Nvidia make. As I've pointed out, if Apple were interested in a real showdown, it would be doing comparisons against the AMD Ryzen 7 6800U, which is already available and delivers good battery life, high graphics performance, and has eight Zen 3 CPU cores. But Apple isn't interested in "fair" comparisons, nor does it want to even acknowledge that the dominant PC hardware ecosystem exists.

M1 is a great chip for its intended purpose, and M2 builds on that, albeit in incremental ways. However, the benchmarks Apple uses often include doing video editing / transcoding where its latest additions (like ProRes acceleration) can make it run circles around the competition. It's like comparing performance with an AVX-512 accelerated application against the same task accomplished using standard FP64 instructions on an older CPU. "Look, we're 10X more efficient and faster!" (AMD and Intel have done basically exactly that in the past, though, so Apple isn't alone...)

There's also a big question about actual power use from the MacBooks with M1 and M2. Apple often uses a best-case efficiency scenario to highlight its advantages, just like AMD, Nvidia, and Intel. Nvidia said the Ampere architecture had twice the performance per watt of Turing, except that was only if you happened to compare a power-optimized chip (e.g. a mobile part) against a performance-focused desktop part. The actual retail products aren't really much better in terms of performance per watt. The RTX 3080 is about 25–30% faster than the RTX 2080 Ti, and uses 27% more power. LOL.
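For reference, running those rough retail numbers shows the efficiency gain all but disappearing (a 25–30% midpoint and the 27% power figure above are assumed round numbers, not benchmark data):

```python
# Quick check on the Ampere vs. Turing retail perf/W claim.
perf_gain = 1.275    # RTX 3080 ~25-30% faster than RTX 2080 Ti (midpoint)
power_gain = 1.27    # ~27% more power

perf_per_watt_gain = perf_gain / power_gain
print(f"Perf/W improvement: {(perf_per_watt_gain - 1) * 100:+.0f}%")
# ~0%: more performance at essentially the same efficiency, which is why the
# "2x perf/W" marketing needed a cherry-picked operating point.
```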

Anyway, I am a GPU guy so that was my focus, mostly because I was irritated by Apple's GPU comparisons with meaningless hardware. I really want to get an M2 MacBook in hand and do some serious GPU testing. The problem is that the options for comparing Windows to Mac performance for GPUs are far more limited, often requiring the use of completely synthetic benchmarks like GFXBench, 3DMark, Geekbench, etc. I don't use Macs (at all), but if I can borrow one for a bit I'll have to see what recent Steam games work on both Windows and Mac and try to put together some comparisons. I suspect it will end up being GPU performance about on par with the Ryzen 6800U.
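One way cross-platform numbers like that could be rolled up is to normalize each test against a reference system and take the geometric mean. A minimal sketch, with benchmark names and scores as placeholders rather than real results:

```python
# Roll up cross-platform GPU results by normalizing against a reference system
# and taking the geometric mean. All scores below are placeholders.
from math import prod

def geomean_relative(scores, reference):
    """Geometric mean of per-test ratios vs. the reference system."""
    ratios = [scores[test] / reference[test] for test in reference]
    return prod(ratios) ** (1 / len(ratios))

# Hypothetical normalized scores (higher is better) for illustration only.
ryzen_6800u = {"GFXBench Aztec Ruins": 100.0, "3DMark Wild Life Extreme": 100.0, "Geekbench Compute": 100.0}
apple_m2    = {"GFXBench Aztec Ruins": 105.0, "3DMark Wild Life Extreme":  95.0, "Geekbench Compute":  98.0}

print(f"M2 vs. 6800U (geomean): {geomean_relative(apple_m2, ryzen_6800u):.2f}x")
```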