News AMD 4700S CPU Reviewed: Defective PS5 Chips Find New Life

I'm not sure what the intended use of this would be. You can't use it as a netbox, and there certainly aren't enough ports for a decent NAS.

POS Kiosk maybe?

But Ryzen 3000G $85
B450 motherboard $80
16GB DDR4-3600 memory $80

$245 + $60 PSU + $50 case = $355

I'm just not sure what the point is when a more capable system is available for much less.
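For what it's worth, the itemized math above tallies up like this (prices as quoted in the post; the variable names are just my own sketch):

```python
# Component prices as quoted above (USD).
parts = {
    "Ryzen 3000G": 85,
    "B450 motherboard": 80,
    "16GB DDR4-3600": 80,
}
core_total = sum(parts.values())    # CPU + board + RAM
build_total = core_total + 60 + 50  # add PSU and case
print(core_total, build_total)      # 245 355
```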
 
To be pedantic, it's not "custom", but "semi-custom." This implies AMD is only letting customers choose some things, but not everything, like making modifications to the microarchitecture which would be costly.
Are you 100% sure that's what it implies? I'm not so sure myself.
The benchmarks on the Tom's Hardware article? If it wasn't DirectX 12 compliant, Time Spy wouldn't run.
Depending on how different the API the PS5 uses is, porting any benchmark may not be terribly hard, so it doesn't surprise me that it can run. It's like saying Time Spy runs in Linux natively; you need Proton to bridge the environment gap, which could be done for the PS5 just as easily (since it's already been done for Linux).
In any case, if the GPU isn't DX12 compliant, then it's not DX11 compliant either. DX11 GPUs meet the minimum hardware requirements for DX12, as evidenced by DX12 supporting DX11 feature levels. So again, it makes zero sense to me why AMD would make a GPU that is in no way DX12 compliant, nor would it make sense for Sony to tell AMD to remove hardware features to the point where it wouldn't even be DX11 compliant. I'm also assuming the same for OpenGL 4.0 and Vulkan, since it wouldn't make sense for either of those to have a disparity in hardware requirements compared to their closest DirectX counterpart.
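To make the feature-level argument concrete: Direct3D 12 devices can be created at feature level 11_0, so DX11-class hardware already clears the DX12 hardware floor. A toy check (the constant mirrors D3D12's documented 11_0 minimum; the function name is my own, not a real API):

```python
# D3D12CreateDevice accepts a minimum feature level of 11_0, so any
# DX11-class GPU meets the hardware floor for a DX12 device.
D3D12_MIN_FEATURE_LEVEL = (11, 0)

def meets_dx12_hardware_floor(feature_level):
    """Hypothetical check: feature levels as (major, minor) tuples compare lexically."""
    return feature_level >= D3D12_MIN_FEATURE_LEVEL
```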
Hardware design aside, my mentioning DX was just because of how the hardware calls are made at the Windows (and Linux) driver end (tying into the above). That's what I was really implying here. Your point is well taken, and from a "feature spec" perspective, they may as well be compatible and developed to be on par across the board, with, maybe, additional things not exposed in the DX/Vulkan/OGL APIs. This doesn't mean AMD creating a driver for Windows or Linux would be a "trivial port". Plus, I'm sure it would expose more than Sony would like about their GPU "choices".
I'm pretty sure there were several game studios using either of the two consoles' CPU capabilities as they were intended to be used well before the EOL of the console.
I can't remember exactly, so I don't really mind. I just do remember their CPUs had capabilities that were not being used until late development on each; maybe they were GPU-side features? Anyway, not really important to discuss as it was just a side-comment.

Regards.
 
Are you 100% sure that's what it implies? I'm not so sure myself.
Well, let's take a look at what AMD says about their Semi-Custom Solutions:
We are not bound by convention and don’t subscribe to “one-size-fits-all” thinking. We develop customized SOCs leveraging AMD technology including industry-leading x86 and ARM® multi-core CPUs, world-class AMD Radeon® graphics, and multimedia accelerators. We offer the option to further customize each solution to include third-party and/or open market IP or customer designed IP. Discover how you can differentiate with AMD Semi-Custom solutions.
This doesn't tell me that they're letting other companies make changes to existing AMD IP; i.e., customers choose what the processor implements, not the actual implementation of the processor itself. And from AnandTech's analysis way back when:
The idea here is that AMD would then provide those OEMs with a semi-custom SoC, where they could choose their own IP blocks (video decode/encode, CPU cores, GPU, etc...), and help give them competitive parity or ideally even an advantage over the larger players. Counting on the bigger guys having higher overhead, being less agile and having to make larger profits should, at least on paper, give the smaller players a fighting chance.
Which to me sounds like all AMD is offering is "we can make you an SoC, you just tell us what you want from this list of things we can provide." It is not offering companies to modify the microarchitecture of their currently existing IP, because that requires a lot more work and resources than just building something from existing parts.

Depending on how different the API the PS5 uses is, porting any benchmark may not be terribly hard, so it doesn't surprise me that it can run. It's like saying Time Spy runs in Linux natively; you need Proton to bridge the environment gap, which could be done for the PS5 just as easily (since it's already been done for Linux).
If the argument you're making here is that the GPU inside the 4700S isn't DirectX 12 compliant because you're somehow convinced it's not an RDNA2 GPU, we don't need to port Time Spy over to the PS5 OS. The fact that it runs on Windows means it has the necessary hardware requirements and software support for DirectX 12 compliance. Granted, Tom's Hardware didn't seem to benchmark anything with ray tracing (which would for sure tell us it's RDNA2), but even if the drivers neuter that capability, that doesn't mean the RDNA2 upgrades aren't there.

Also, AMD proudly says that RDNA2 is used in the PS5. Going with the assumption that the 4700S chips are recycled PS5 APUs, if it isn't RDNA2, then AMD is lying to us.
 

TJ Hooker

I think the fact that they have a DirectX 12 benchmark proves that the iGPU is DirectX 12 compliant.
All the benchmarks in this article are run on a dGPU. The iGPU on the 4700S is completely disabled. The fact that they ran a DX12 benchmark just proves that the dGPU they paired it with supports DX12.

But I do agree that the PS5 GPU likely uses the same, or nearly the same, architecture as regular RDNA2 and would therefore support the same APIs (so long as the required drivers exist).
 
Well, let's take a look at what AMD says about their Semi-Custom Solutions:

This doesn't tell me that they're letting other companies make changes to existing AMD IP; i.e., customers choose what the processor implements, not the actual implementation of the processor itself. And from AnandTech's analysis way back when:

Which to me sounds like all AMD is offering is "we can make you an SoC, you just tell us what you want from this list of things we can provide." It is not offering companies to modify the microarchitecture of their currently existing IP, because that requires a lot more work and resources than just building something from existing parts.


If the argument you're making here is that the GPU inside the 4700S isn't DirectX 12 compliant because you're somehow convinced it's not an RDNA2 GPU, we don't need to port Time Spy over to the PS5 OS. The fact that it runs on Windows means it has the necessary hardware requirements and software support for DirectX 12 compliance. Granted, Tom's Hardware didn't seem to benchmark anything with ray tracing (which would for sure tell us it's RDNA2), but even if the drivers neuter that capability, that doesn't mean the RDNA2 upgrades aren't there.

Also, AMD proudly says that RDNA2 is used in the PS5. Going with the assumption that the 4700S chips are recycled PS5 APUs, if it isn't RDNA2, then AMD is lying to us.
Er... But the same quote you provided states: "or customer designed IP" xD

It can be based on RDNA2 principles and main building blocks, but that doesn't mean it's fully compatible with DX12 or Vulkan without a compatibility layer in the driver for each API. Or, to insist a bit more: it could be RDNA2 at the base, but Sony added things they wanted in the GPU portion that cannot be exposed easily, or due to legal/contractual restrictions. The latter seems more likely as to why the iGPUs are disabled, but I would not be surprised if it's just the cost of making them work with the main driver stack they have now.

EDIT: Forgot to mention that, if they went with just the RDNA2 GPU "as is", then I don't see any reason to disable the iGPU portion at all. That is what is bugging me here and makes me suspect it's not a "vanilla" RDNA2 design as what you'd see in the Navi family. Many options, all valid assessments as to why the iGPU was disabled.

Regards.
 
Er... But the same quote you provided states: "or customer designed IP" xD

It can be based on RDNA2 principles and main building blocks, but that doesn't mean it's fully compatible with DX12 or Vulkan without a compatibility layer in the driver for each API. Or, to insist a bit more: it could be RDNA2 at the base, but Sony added things they wanted in the GPU portion that cannot be exposed easily, or due to legal/contractual restrictions. The latter seems more likely as to why the iGPUs are disabled, but I would not be surprised if it's just the cost of making them work with the main driver stack they have now.

EDIT: Forgot to mention that, if they went with just the RDNA2 GPU "as is", then I don't see any reason to disable the iGPU portion at all. That is what is bugging me here and makes me suspect it's not a "vanilla" RDNA2 design as what you'd see in the Navi family. Many options, all valid assessments as to why the iGPU was disabled.

Regards.
Changing the architecture in any meaningful way would require a lot of effort, and there's little reason to expend that time and energy. AMD's semi-custom SoC approach generally means it will give you AMD GPU cores (CUs) and AMD CPU cores, and the main differentiator is that the customer specifies how many of each. Then it's all wrapped up with memory interfaces, other logic, etc. Theoretically, Sony could add some custom IP blocks that offload certain work, but by the very nature of an SoC, that work wouldn't be graphics or normal CPU stuff. Probably there's some extra audio hardware, and some extra storage stuff.

This is definitely nothing like the Cell architecture days. That proved that even when something had a large theoretical level of performance, if it was difficult to extract the performance, it wasn't worth doing. PS4 totally reversed course for Sony and was basically straight up PC hardware, tied together as a "semi-custom" chip. That's because the Xbox 360 was generally better received than the PS3, at least on some level among developers. Even after nearly a year, no one has come out with any clear explanation (or demonstration) of the "custom" stuff Sony added to the PS5, other than for storage.
 
Er... But the same quote you provided states: "or customer designed IP" xD
Customer designed IP means just that: IP blocks that the customer designed. I'm pretty sure Sony doesn't have access to any of AMD's GPU microarchitecture and thus can't make a GPU for an AMD SoC with a modified RDNA2 core.

It can be based on RDNA2 principles and main building blocks, but that doesn't mean it's fully compatible with DX12 or Vulkan without a compatibility layer in the driver for each API. Or, to insist a bit more: it could be RDNA2 at the base, but Sony added things they wanted in the GPU portion that cannot be exposed easily, or due to legal/contractual restrictions. The latter seems more likely as to why the iGPUs are disabled, but I would not be surprised if it's just the cost of making them work with the main driver stack they have now.
Adding stuff to RDNA 2 doesn't make it incompatible with Vulkan or DirectX 12. And anything that Sony added can be burnt off with a laser so that AMD doesn't get into hot water with Sony. Does adding tensor cores to NVIDIA's GPUs make them non-DX12-compliant because they're non-standard?

EDIT: Forgot to mention that, if they went with just the RDNA2 GPU "as is", then I don't see any reason to disable the iGPU portion at all. That is what is bugging me here and makes me suspect it's not a "vanilla" RDNA2 design as what you'd see in the Navi family. Many options, all valid assessments as to why the iGPU was disabled.
And I'll admit something here, I missed the part where the iGPU was made inaccessible. So I do take back my comments about at least the DirectX 12 benchmarks applying to the iGPU.

However, at the end of the day, Sony asking AMD to make a microarchitecture specific to them doesn't make sense. It takes a lot of work to make a new microarchitecture, and considering that there's nothing in the PS5, graphically speaking, that gives it an edge over the XBSX, that's a lot of money wasted on a customization like that.
 

watzupken

A $450 modest GPU + weak CPU locked down rig where you can only upgrade the storage isn't really going to kill off AMD's APU market. It would have been cool to be able to test this chip with the GPU still enabled, in Windows, but then we'd have all sorts of driver stuff to deal with, and very likely the GPU wasn't working properly in the first place. Of course, I doubt the entire GPU was bad, so AMD could have potentially harvested a 24-30 CU chip. But that may not be allowed based on Sony contracts. Most likely, the PS5 requires 36 CUs and anything that can't reach that level has to be sold with the GPU disabled. But there's no reason the GPU wouldn't be DX12 compliant with the right drivers, as it's still just RDNA2.
I agree. If AMD could salvage the iGPU by disabling some cores, it would have made this system interesting and worth considering as an alternative to current APU setups. Without it, it is technically no different from buying a Zen 2 processor with no GPU. Worse still, there is not much you can upgrade on this board (fixed RAM, proprietary cooling mounts, etc.), so it just doesn't seem worth paying the high price. Performance is also lacklustre in most cases.
 
The 16GB of GDDR6 memory is interesting; if nothing else, it might be a decent budget PC compared to other pre-builts, if the drivers and BIOS aren't wonky. The video card whitelist is very limited, but the RX 590 should be decent for 1080p.
 

zodiacfml

As I have said, they're obviously competing with devices that have powerful CPUs and integrated graphics, like the APU boxes or an Apple Mac Mini. I don't understand why there's so much testing with a powerful discrete card, considering its severely crippled PCIe slot, small PSU, etc. It would make more sense to benchmark the product as it comes out of the box and compare it with other products within its price range.

I'm not sure what you're talking about here -- did you post in the wrong thread?
 
As I have said, they're obviously competing with devices that have powerful CPUs and integrated graphics, like the APU boxes or an Apple Mac Mini. I don't understand why there's so much testing with a powerful discrete card, considering its severely crippled PCIe slot, small PSU, etc. It would make more sense to benchmark the product as it comes out of the box and compare it with other products within its price range.
That's the problem, though. This is nothing at all like a Mac Mini or Apple TV. Those use hardware that consumes maybe 10W, highly optimized for overall efficiency. The M1 is competitive with Intel's 15W ultra low voltage CPUs in some ways, but slower in others. Compared to an 8-core Intel or AMD CPU, though, it's far slower — and uses 1/10 the power. Paul could have tested with a crappy low-end GPU, and then the CPU would have been less of a bottleneck in games. But we already know that's the case.

An RX 560 generally outperforms even the fastest GPU in AMD's APU lineup (not including the PS5 or XBSX processors). The point isn't to be able to outperform some arbitrary low-end level of performance. The point is to figure out where the bottlenecks lie. If you look at the charts, you can see that maximum theoretical gaming performance of the 4700S falls below even the 3400G. That's pretty awful. It's basically at the level of the Core i7-6700K in gaming performance, a chip from nearly six years ago. The high latency of GDDR6 isn't a problem for graphics so much, but it's brutal on many CPU workloads.
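The latency point is easy to illustrate: many CPU workloads are chains of dependent loads, where each access must complete before the next address is even known, so extra memory latency is paid on every step of the chain. A pointer-chasing sketch (pure Python, demonstrating the access pattern only; real latency timing would be done in C with buffers larger than the caches):

```python
import random

def make_chain(n, seed=0):
    """Build a random single-cycle permutation; nxt[i] gives the next index to load."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    nxt = [0] * n
    for i in range(n):
        nxt[order[i]] = order[(i + 1) % n]
    return nxt

def chase(nxt, start=0):
    """Each load depends on the previous one, so memory latency can't be hidden."""
    i, steps = nxt[start], 1
    while i != start:
        i = nxt[i]
        steps += 1
    return steps
```

A GPU hides that latency by running thousands of independent threads; a CPU chasing one chain cannot, which is why GDDR6's higher latency barely matters for graphics but hurts CPU-bound work.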

In short, if you want a compact and efficient system, the Mac Mini has plenty to offer, just not so much in the way of PC gaming. If you want an all-in-one small gaming box, there are many better options than anything built around the 4700S. It's probably still worth about $400, just because it does have an 8-core/16-thread CPU that's not terrible, plus 16GB of memory. But there are so many ways to build a better PC for the cost of the 4700S systems. Just use a Ryzen 5 5600G and you'll get a much better overall setup, one that can be upgraded and configured with far better components.
 

zodiacfml

I guess I overlooked the 3400G in the charts, because I know from memory that the best iGPUs/APUs are roughly equal to or slightly inferior to low-end cards such as the RX 550 or 560.
While I don't think low PCIe bandwidth should restrict the performance of a weak discrete card that much, I want to be proven wrong, because neither AMD nor MaiBenBen would knowingly release a product this bad. It should be able to match or beat the Minisforum 8-core products or 8-core Ryzen laptops.

