Asus Zenbook S 13 OLED Review: Ryzen 6800U Goes Thin

abufrejoval

Reputable
Jun 19, 2020
Too bad you didn't do gaming, but it reads like a very decent competitor in the segment.

My GF is in the market for these laptops, so she'll love to have an AMD option as she hates the HP Envy with the nVidia graphics, haha.

Regards.

The iGPU is the main differentiator of the 6800U vs the 5800U, so to leave it out is missing the USP: very disappointing!

I understand that you want to publish fast and that a proper GPU evaluation may be more involved, so I can only hope that will come very soon.

It's especially interesting because it will offer some hints on what will be possible in terms of graphics with the next generation APUs based on Zen4 but a similar iGPU design.

For me RAM upgradability is a major concern, because the ability to run various VMs is critical, so I don't mind spending a reasonable extra for 64GB. Unfortunately notebook vendors are rarely reasonable when it comes to the price for extra RAM, and of course it doesn't help that the form factor and LPDDR5 favor soldered RAM chips and nobody wants to stock high-capacity SKUs.

So I wonder if at least this time around a 32GB SKU is available?
 
Andrew was a bit under the gun to get the review up, but we've added some graphics tests after the fact. Unfortunately, we don't have comparable performance data from other laptops we've reviewed, as that hasn't been something deemed "important" on non-gaming laptops. We're planning to test the graphics performance of some of the other budget "non-gaming" laptops going forward, to build out our suite of results. Right now, it looks like the Radeon 680M is about 80% faster than Intel's Iris Xe 96 EU graphics solution, which is used in both Alder Lake and the older Tiger Lake. It's also a big step down from the sort of performance you'd get from even a GTX 1650 laptop solution.
 

deesider

Distinguished
Jun 15, 2017
For a relatively high-resolution screen like 2880 x 1800, would you consider doing a half-resolution test (1440 x 900), since I assume it would be less blurry than a rescaled 1600 x 900? Or does it make little difference?
 

abufrejoval

Reputable
Jun 19, 2020
Well, APUs aren't just about gaming; that's a message AMD has been trying to push out for a long time, even if HSA remains a sad and unfulfilled promise still.

And not even 3D is all about gaming; advanced visualization benefits from it as well.

Personally I just love looking at Google Maps in the "earth mode" with Globe View enabled and satellite data being rendered into 3D by some nifty AIs.

Unfortunately my favorite browser, Firefox, does much worse there than Chromium-based ones like Brave, but instant response to shifts, zooms and rotations on a 4K screen is possible with these beefier iGPUs. BTW it does much better than the original standalone Google Earth application, which evidently never got the really smart code. And it fully puts µsoft's Flight Simulator to shame, both in terms of map quality and 3D performance... but that's another story.

Intel has long suffered from the same attitude, thinking that iGPUs either don't need great 3D performance or systematically can't offer it, because the required RAM bandwidth simply isn't there.

And AMD then jumped ahead of Intel twice: a) by offering APUs with much better iGPU performance at similar prices (e.g. Kaveri), and b) by allocating iGPU silicon real estate to CPU cores instead, doubling core count vs. Intel at the same process node and price point (Zen 1/2/3).

It was only at the behest of Apple, which demanded better iGPU performance, that Intel started its souped-up Iris Plus/Pro parts with 48/96 EUs vs. the normal 24, putting 64/128MB of EDRAM on the die carrier to achieve the bandwidth required to make use of the extra GPU cores.

I have hardware from all these generations. E.g. a Kaveri A10-7850K with 512 stream processors that required optimal DDR3-2400 DIMMs to benefit, and even then did only marginally better than the 384-shader variant because it was starved for bandwidth.

It was pretty awesome to see how, at 90 Watts, it almost exactly matched the performance of my Iris Plus equipped Skylake i5-6267U (dual-core with HT, 48 EUs, 64MB EDRAM and DDR3-1333 RAM) at 28 Watts, in both CPU and GPU workloads.

My Tiger Lake NUC with 96 EUs and my Ryzen 5800U based notebook both manage without EDRAM and get only slightly better RAM bandwidth, but evidently extract a lot more GPU power from their extra shader units.

I don't know if it's the vastly bigger caches or software drivers that know how to use them, such that only the final render has to suffer from the relatively low bandwidth of their DDR4/5 frame buffers. Where the 48 EUs on a Gen8 NUC with the bigger Iris Plus offered only a 50% performance uplift vs. the 24 EU iGPU of a Gen10 NUC, the 96 EU iGPU on the Gen11 NUC reached 4x the performance of the 24 EU variant, i.e. linear uplift.

On GPU workloads or games Tiger Lake was around 20% faster than the 5800U, so the iGPU update on the 6800U should definitely help to bring overall leadership in the 15-28 Watt class of laptops without a dGPU.

And for the 7000 series APUs AMD is repeating the promise it used to make for Kaveri: a reasonable gaming experience, only this time at 1920x1080 when it was 1280x720 on Kaveri.

At 4K even my RTX 2080 Ti in combination with a Ryzen 9 5950X is hopelessly overtaxed most of the time, but it's really there only to run CUDA, not games.
 
The problem with tailoring testing to a specific laptop screen is that it would make it more difficult to compare results. This particular laptop should probably run games at 1440x900, as you suggest, but many laptops have 1080p displays and so that makes more sense as an apples-to-apples testing target. I'd say that if you drop the number of pixels rendered in games by 37.5%, you should expect performance to improve by 15–20%. We could try to test that, but it's more time spent and thus less likely to get done.
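The arithmetic behind those figures can be sketched quickly. A back-of-the-envelope check, assuming the 37.5% drop refers to going from 1080p to 1440x900 (the numbers and function are mine, not part of the review's methodology):

```python
# Pixel counts for the resolutions discussed above.
def pixels(width, height):
    return width * height

native = pixels(2880, 1800)  # Zenbook's native panel
fhd = pixels(1920, 1080)     # the apples-to-apples 1080p target
half = pixels(1440, 900)     # integer half of the native resolution

# 1440x900 renders exactly 37.5% fewer pixels than 1080p...
drop = 1 - half / fhd
print(f"Pixel drop from 1080p to 1440x900: {drop:.1%}")

# ...and it is also an exact quarter of the native panel, so the
# scaler only has to duplicate pixels 2x in each direction.
print(f"Native / half-res pixel ratio: {native / half:.0f}x")
```

The gap between the 37.5% pixel reduction and the expected 15–20% speedup reflects that games are rarely purely fill-rate bound, especially on an iGPU.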
 
I would say that maybe testing RSR (via mods or the driver) and FSR would make sense then?

You can still standardize the testing at a fixed resolution, I guess. Or just normalize the results as pixel count per frame time. It should be somewhat fair: this laptop takes x ms to deliver y megapixels on screen, and that becomes a normalized measure you can use for comparison. Not perfect, but it gives perspective in conjunction with the standardized test. You could give the normalization theory a go with regular setups and validate the variances so they're taken into account. Hey, this doesn't sound too dumb now that I think about it, heh.
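A minimal sketch of that megapixels-per-frame-time normalization, with made-up frame times purely for illustration (the function name and all numbers are hypothetical, not actual test results):

```python
# Normalize GPU results to megapixels rendered per millisecond of
# frame time, so laptops tested at different resolutions stay
# roughly comparable.
def mp_per_ms(width, height, frame_time_ms):
    return (width * height) / 1e6 / frame_time_ms

# Hypothetical results: two laptops, two resolutions.
zenbook = mp_per_ms(1440, 900, 22.0)  # ~45 fps at 1440x900
rival = mp_per_ms(1920, 1080, 30.0)   # ~33 fps at 1080p

print(f"Zenbook: {zenbook:.3f} MP/ms vs rival: {rival:.3f} MP/ms")
```

The caveat from the reply above still applies: performance doesn't scale linearly with pixel count, so this metric slightly flatters whichever machine ran at the lower resolution.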

Regards.
 

watzupken

Reputable
Mar 16, 2020
It is not as fast as a GTX 1650, but that is to be expected. After all, this is an APU meant to sustain a 15 to 28W TDP. It is really interesting to see how much of a difference people can expect in 3D applications from moving from the aged Vega to RDNA2. Catching up to the mobile GTX 1650 would require something like an H-series processor, due to its higher power limit and the better cooling solutions deployed.

As for gaming resolution, there is no miracle here. The TDP simply won't allow the iGPU and CPU to run at high clock speeds under sustained load. In fact, it is more likely the APU runs into a thermal limit, given the slim laptops these U-series processors are used in. So the best-case scenario is going to be 1080p at low graphics settings. I may be wrong, but even though the aspect ratio of the monitor is not 16:9, you can still apply a 1080p resolution. At most you just get black bars above and below. But that helps standardise testing for ease of comparison.