News Apple's M1 SoC Shreds GeForce GTX 1050 Ti in New Graphics Benchmark

Wow. The only thing in the low-power mobile space that can come close to this is Tiger Lake-U, but it looks like this is a bit faster in every single way, and lower power to boot.
 
These benchmarks don't necessarily tell the whole story. Perhaps they can get these numbers in a synthetic benchmark like this when the CPU cores are sitting mostly idle, but what happens when those cores are active and fighting for the available 10 watts of TDP headroom? And we don't even know if these numbers were achieved at that TDP, or in some device working with higher TDP limits. Or what sort of hardware and software environment the 1050 Ti submission was operating in. If they didn't even have a 1650 submission, a card that's now been on the market for 19 months, then it's a bit hard to take this benchmark seriously.

However, if the in-house SoC lives up to the hype, casual gaming could be a reality on the upcoming M1-powered devices.
What kind of games are going to be able to run natively on this hardware? Angry Birds? It might be great at running ports of smartphone games, but something tells me that support for ports of PC and console games is going to get significantly worse on Macs than it already is, at least in the short term.
 
Wow. The only thing in the low-power mobile space that can come close to this is Tiger Lake-U, but it looks like this is a bit faster in every single way, and lower power to boot.

It's a fantastic result but trusting GFXBench and applying that to actual gaming is misleading at best.

- According to Notebookcheck, the GeForce 1050 Max-Q is only 10% faster in Aztec Ruins High and 50% faster in Normal compared to the Xe G7, the GPU in Tiger Lake
- In games, however, the GeForce 1050 Max-Q delivers 2.5-3x the performance of the same Xe G7
- I will also mention that the only Xe G7 result in GFXBench comes from the Asus ZenBook Flip S UX371, which is the poorest-performing Xe G7 system. In games, the top Xe G7 system, the Acer Swift, is 50% faster.

Interim summary: 1050 Max-Q is 10-50% faster than Xe G7 in GFXBench, but 2.5-3x as fast in games

- AnandTech notes that mobile systems render in FP16 while PCs use FP32, meaning the mobile systems have an effective 2x FLOP advantage. Instant 30-40% performance gain
- The GTX 1050 Ti is higher-end than the 1050 Max-Q, plus it's a desktop part, so it should be even faster!

Interim summary 2: You can take the Apple M1 result and divide it by 1.3-1.4 to account for the rendering differences, which instantly makes it slower than the 1050 Ti (rough arithmetic sketched at the end of this post).

Based on the actual in-game differences, the 1050 Ti is still going to end up at roughly 2x the performance of the Apple M1.

Conclusion: Take GFXBench results with a heap of salt. Apple's results are fantastic, considering it's at a 10W power envelope that includes the CPU.

Saying it's faster than a desktop 1050 Ti is misleading, because in actual games the 1050 Ti will be way, way ahead.
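To make that adjustment concrete, here's a rough back-of-the-envelope sketch in Python. The 1.35 correction factor is the middle of the 1.3-1.4 range above, and the example score is a placeholder, not a measured result:

```python
# Back-of-the-envelope version of the adjustment described above.
# GFXBench mobile submissions render in FP16 while the PC path uses FP32,
# which the post estimates inflates the mobile score by roughly 30-40%.

def fp32_equivalent(mobile_fps: float, fp16_inflation: float = 1.35) -> float:
    """Divide a mobile (FP16) GFXBench result by the estimated FP16 advantage."""
    return mobile_fps / fp16_inflation

# Illustrative placeholder number only, not a measured result:
m1_fps = 80.0
print(f"FP32-equivalent estimate: {fp32_equivalent(m1_fps):.0f} fps")  # ~59 fps

# On top of that, the Notebookcheck comparison suggests real games favour the
# GeForce part by another ~2-3x relative to its GFXBench showing.
```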
 
Comparing a product that's about to release to something from four years ago and calling it a flex is quite funny. Seriously, if Apple can't beat the upcoming 3060 (Ti) or some other modern piece of hardware, it's not really a flex.
 
ad... sorry article... paid for by apple. also, emu... and also... only a 1050 ti? lol. ok "whoaaaaa". weird flex.

I don't think the article's that bad, but they should do a better job of pointing out the shortcomings of this test. I will be interested to see more suitable/varied benchmarks and the real-world behavior of this chip. I have long said that if ARM-based CPUs were ever going to catch up to x86, it would be around the 5nm mark. And that statement goes back nearly 18 years, to when my mom was looking into ARM stock. She asked me if ARM-based CPUs had a realistic chance of catching Intel/AMD x86 chips. One of her techs at work was an ARM enthusiast who wrongly insisted ARM would beat x86 within 10 years and be powering everything from Linux to Windows on desktops/laptops/servers as the primary CPU architecture in use. The dude got some of it right and certainly had the right idea, even if the time frame was wrong. Point being, I told her that if I was correct the investment would not be a bad thing, and that even if ARM didn't beat x86 they would still have a stranglehold on the low-power/mobile market, so either way the investment would likely be solid. Looks like I was right for the most part...
 
Comparing a product that's about to release to something from four years ago and calling it a flex is quite funny. Seriously, if Apple can't beat the upcoming 3060 (Ti) or some other modern piece of hardware, it's not really a flex.
It's also an example of what is possible when you aren't competing against yourself. AMD, and going forward Intel, aren't going to make iGPUs as fast as they can, because that would prevent them from selling lower-end dGPUs. It will be many years before we see an AMD CPU with an iGPU as fast as what the new consoles have. iGPUs for PCs are basically only designed to be fast enough for business desktops and to speed up encoding/decoding tasks. Apple doesn't have to worry about cannibalizing its own sales and can just make things the best it can for the target price.
 
Are we supposed to be impressed? First you say it's faster than an 8th-gen i5, now you say it's faster graphically than a 1050 Ti. These are not big achievements. You really need to set the bar higher.
 
"1050ti"

Is that supposed to be shocking? That was a card that shouldn't have existed to begin with :/

That card served a purpose: bringing reasonable-framerate 1080p gaming to low-power systems that can't run a high-end video card. It was brilliant marketing, covering a hole in the low-price, prebuilt market. There are a lot of Core i7 Dell and HP machines out there with crappy power supplies that have enough CPU power for gaming but can only run a 75W video card, and the 1050 Ti annihilates AMD's entry in that market, the RX 560. The performance gap got even wider when Nvidia replaced the 1050 Ti with the 1650.
 
It will be many years before we see an AMD CPU with an iGPU as fast as what the new consoles have.
Sure, but an iGPU as fast as that probably wouldn't make much sense in today's systems. To start, those APUs can draw a couple-hundred watts under load, making them less practical for laptops and other compact devices, at least without massively increasing the graphics core count (and in turn price) to run them at lower clocks. And at that point, you are pretty much doing what dedicated GPUs in laptops already do.

And for desktops, you are giving up the ability to upgrade the CPU and GPU independently from one another, along with potentially the RAM, so if you replace one component, you would need to replace all the others along with it. Effectively, that would make upgrades largely obsolete, like in many of today's laptops, meaning if you want more performance, you would need to replace all of them, and might as well buy a new system, even if some components are still providing adequate performance.

You would also be placing all the heat output from both devices together in the CPU socket area, making cooling more difficult, at least without a move away from the ATX form factor. There's also the question of whether it would even reduce the cost of having two separate components by a substantial amount. Aside from being able to share a pool of fast memory, the benefits of soldering the CPU and GPU together seem questionable. Since the GPU portion would be the most expensive to produce, you would effectively be selling a GPU with a CPU attached, rather than the other way around.
 
Sure, but an iGPU as fast as that probably wouldn't make much sense in today's systems. To start, those APUs can draw a couple-hundred watts under load, making them less practical for laptops and other compact devices, at least without massively increasing the graphics core count (and in turn price) to run them at lower clocks. And at that point, you are pretty much doing what dedicated GPUs in laptops already do.

And for desktops, you are giving up the ability to upgrade the CPU and GPU independently from one another, along with potentially the RAM, so if you replace one component, you would need to replace all the others along with it. Effectively, that would make upgrades largely obsolete, like in many of today's laptops, meaning if you want more performance, you would need to replace all of them, and might as well buy a new system, even if some components are still providing adequate performance.

You would also be placing all the heat output from both devices together in the CPU socket area, making cooling more difficult, at least without a move away from the ATX form factor. There's also the question of whether it would even reduce the cost of having two separate components by a substantial amount. Aside from being able to share a pool of fast memory, the benefits of soldering the CPU and GPU together seem questionable. Since the GPU portion would be the most expensive to produce, you would effectively be selling a GPU with a CPU attached, rather than the other way around.
What's possible and what makes business sense aren't necessarily the same set of products. I already said in my original post that a high-powered iGPU doesn't make business sense for AMD or Intel, so why bother counter-arguing a point I already made?

Not sure why you would sweat the practicality of an APU that is being sold in products today. Sony (7.2L) and MS (6.86L) both made it work in what amounts to SFF cases. The entire PS5 console reportedly peaks at 200W. You can't put that in an ultrabook like a MacBook Air, but a sub-200W APU doesn't require prayers and magic to cool in a standard ATX form factor. A 10900K by itself can peak well above 200W, and it's possible to stuff a 3090 FE with its stock cooler and a 10900K into a sub-10L case and still have reasonable temperatures and no throttling.
 
It's also an example of what is possible when you aren't competing against yourself. AMD, and going forward Intel, aren't going to make iGPUs as fast as they can, because that would prevent them from selling lower-end dGPUs. It will be many years before we see an AMD CPU with an iGPU as fast as what the new consoles have. iGPUs for PCs are basically only designed to be fast enough for business desktops and to speed up encoding/decoding tasks. Apple doesn't have to worry about cannibalizing its own sales and can just make things the best it can for the target price.

When you look at discrete graphics cards, they are quite expensive, but most of that expense comes from all the components it takes to make a graphics card. Not only that, part of the margin also goes to the board partners, where AMD doesn't make any money at all. AMD is only interested in selling its silicon, so I don't think they'll ever neuter their APUs. If they're worried about profit margins, they can always charge more for their APUs and play with how many CUs they put into each chip to offer a spread of price points.
 
I don't believe the article mentioned that the total power of the ENTIRE M1 is only about 10W. The M1 includes the CPU and GPU (along with a bunch of other components on the SoC). The fact that the M1 gets anywhere close to these desktop GPUs is pretty insane given those thermal constraints. A better overall comparison might be with the GeForce MX250, since it has a TDP of 10W. Just looking at Aztec Ruins High Tier, the MX250 runs at up to about 55 fps while the M1 runs at 77 fps. That's amazing considering that the MX250 runs at the same power as the entire M1 SoC.
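For what it's worth, here's a quick perf-per-watt sketch using the numbers in this post; the 10W figures are nominal TDPs, not measured power draw:

```python
# Perf-per-watt using the figures quoted in this post; both parts treated as
# ~10W, and the M1 number covers the whole SoC (CPU + GPU), so if anything
# this understates the GPU-only efficiency gap.
results = {
    "GeForce MX250": {"fps": 55, "watts": 10},
    "Apple M1 (whole SoC)": {"fps": 77, "watts": 10},
}

for name, r in results.items():
    print(f"{name}: {r['fps'] / r['watts']:.1f} fps/W")
# -> 5.5 fps/W vs 7.7 fps/W, i.e. roughly a 40% efficiency advantage here.
```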
 
When you look at discrete graphics cards, they are quite expensive, but most of that expense comes from all the components it takes to make a graphics card. Not only that, part of the margin also goes to the board partners, where AMD doesn't make any money at all. AMD is only interested in selling its silicon, so I don't think they'll ever neuter their APUs. If they're worried about profit margins, they can always charge more for their APUs and play with how many CUs they put into each chip to offer a spread of price points.
Nvidia seems to have no problem making money selling GPUs to board partners, and AMD is certainly not selling GPUs to board partners at cost. It seems reasonable to assume that AMD has determined there is more money to be made by not including iGPUs in most of their Ryzen CPUs, which forces dGPU purchases, and AMD hopes to pick up as many of those as it can. That strategy is clear as day with the 5000-series CPUs and RDNA2, where they introduced SAM to incentivize people to bundle them together.
 
Just looking at Aztec Ruins High Tier, the MX250 runs at up to about 55 fps while the M1 runs at 77 fps. That's amazing considering that the MX250 runs at the same power as the entire M1 SoC.
The MX250 uses the over-four-year-old Pascal architecture on a 14nm node. The M1 is brand new and built on TSMC's new 5nm node. It would be more amazing if there wasn't a giant improvement in efficiency.
 
The MX250 uses the over-four-year-old Pascal architecture on a 14nm node. The M1 is brand new and built on TSMC's new 5nm node. It would be more amazing if there wasn't a giant improvement in efficiency.
The MX250 was the newest one with a 10W TDP. All of the laptop GTX and RTX parts are in the 30W+ range. It seemed kind of silly to compare a <10W GPU to even a 30W GPU.
 
The initial benchmarks (synthetic and gaming) seem to be bearing out the initial headlines. The M1 appears to be faster than any other integrated GPU and approaching or even surpassing low-end desktop GPUs (560X and 1050), even when the game is running x86 code. All of this with a total thermal budget in a Mac mini of ~20-25W. Not too shabby.
https://www.anandtech.com/show/16252/mac-mini-apple-m1-tested
 
The MX250 was the newest one with a 10W TDP. All of the laptop GTX and RTX parts are in the 30W+ range. It seemed kind of silly to compare a <10W GPU to even a 30W GPU.
And it's equally silly to compare a 5nm GPU to an over-four-year-old 14nm one. There were 10W versions of the MX330 and MX350 as well. You don't see 10W dGPUs anymore because they make no sense: the power limit is so low that they aren't going to be much faster than contemporary iGPUs, while adding a sizable cost for the OEM.
 
And it's equally silly to compare a 5nm GPU to an over-four-year-old 14nm one. There were 10W versions of the MX330 and MX350 as well. You don't see 10W dGPUs anymore because they make no sense: the power limit is so low that they aren't going to be much faster than contemporary iGPUs, while adding a sizable cost for the OEM.

I would agree with that statement, but the fact that an iGPU is even approaching a desktop-class dGPU (even an old one) is pretty impressive. I suppose we are all just used to Intel's iGPU crap sandwiches after all these years. Even AMD's R7 4700U is not as fast as the M1, though. I'm amazed that Nvidia has not worked with TSMC to shrink its GPUs even to 7nm. It's not like Apple and AMD have a monopoly on that technology.
 
I would agree with that statement, but the fact that an iGPU is even approaching a desktop-class dGPU (even an old one) is pretty impressive. I suppose we are all just used to Intel's iGPU crap sandwiches after all these years. Even AMD's R7 4700U is not as fast as the M1, though. I'm amazed that Nvidia has not worked with TSMC to shrink its GPUs even to 7nm. It's not like Apple and AMD have a monopoly on that technology.

No, it is not impressive. Their iGPU is using a 5nm process, so it is not a 1:1 comparison at all, and it is rated at only 2.x TFLOPS. If you want to talk about iGPUs, look at today's consoles running at 12 TFLOPS, and even the 2017 ones running at 6 TFLOPS.

They are happy with 2.x TFLOPS? On 5nm, on top of that?

Keep in mind that Apple is not even using GDDR6/5 for VRAM, so the chip's memory bandwidth is not good and very limited.

I am really sad that Apple, knowing its chip design maxes out at 16GB of memory, did not bother to use GDDR6 to speed things up to something like ~4 TFLOPS. Oh, and they offer 8GB and 16GB versions (LOL).

By the way, AMD could release a very speedy APU for the PC, but they are keeping it for the future, and their console chips prove it.
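For context, peak-TFLOPS figures like the ones above come from simple throughput arithmetic (ALUs x 2 FMA ops per clock x clock speed). A quick sketch, using the commonly reported, not Apple-confirmed, figure of 1024 ALUs at ~1.28 GHz for the M1 GPU:

```python
# Peak FP32 throughput is usually derived as:
#   ALUs x 2 ops/clock (one fused multiply-add) x clock speed (GHz) / 1000
def peak_fp32_tflops(alus: int, clock_ghz: float) -> float:
    return alus * 2 * clock_ghz / 1000

# Commonly reported (not Apple-confirmed) M1 GPU specs: 1024 ALUs at ~1.28 GHz.
print(f"Apple M1 GPU:  {peak_fp32_tflops(1024, 1.28):.1f} TFLOPS")   # ~2.6
# Xbox Series X: 3328 shaders at 1.825 GHz.
print(f"Xbox Series X: {peak_fp32_tflops(3328, 1.825):.1f} TFLOPS")  # ~12.1
```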
 