News PlayStation 5 Pro specifications thoroughly explained, 'FLOPflation' debunked by PS5 system architect Mark Cerny

It may not be the same for you, but to me it says Sony has moved forward with the PlayStation 6, so I see them as one and the same. Any benefits the PS5 Pro offers gamers are a side thought. There's no reason for me to re-up and pay $700 plus the disc drive plus the stand. I've seen this playbook play out with the PS4 Pro.
You thinking blue and red are the same color doesn't make it true.
 
Nintendo is known for squeezing blood from a turnip. They have pulled off some incredible stuff on hardware that was probably less powerful than the PlayStation 3 and Xbox 360.
Yes, their games are impressive for the hardware they've got, but there have been some N64 hacks where fans have managed to squeeze way more out of that machine than even Nintendo did! Of course, N64 was from an era when they were still building high-horsepower hardware.

I think the real distinction is whether you're content to make cartoony games vs. achieving some degree of realism. If pseudo-realism is the goal, then incremental improvements require exponentially more horsepower.

I'm still very surprised, if true, that they're doubling down on a design similar to the Switch. They usually do something totally different each generation.
Maybe they tried to do something with AR, but eventually gave up on it? That would tie in with Switch being portable and the whole Pokemon Go craze.
 
It isn't the same thing. Sony designates lifespan stages solely by sales volume. It does not give any indication of support cycle. For Sony, the latter stage of its life cycle means that sales of the console have begun to decline. That doesn't mean the console is about to get replaced.
Just to add to the discussion (I think?), the video starts out with Mark Cerny referencing their typical 7-year cadence and the Pro's place within it. He doesn't go so far as to confirm the PS6 will hold to this cadence, but his emphasis on the precedent being continued by the PS5 Pro seems like a tacit confirmation that they're still on a roughly 7-year product cycle.

The absolute soonest we would see a PS6 would be RDNA5.
As I mentioned in my longer post above, the video really throws doubt on the idea that Sony will continue down the path of simply tweaking the latest AMD GPU. Not only did the PS5 Pro make heavy modifications to the base PS5's RDNA2-derived GPU, but he also pointed out that moving to RDNA3 would have broken compatibility with all PS5 games. RDNA2 has backwards compatibility with GCN (via Wave64 mode), which is how PS4 games still work.

Furthermore, the video ends with Mark giving a look forward. He points out that raster performance has been optimized to the point that further improvements are only possible via more shaders and/or memory bandwidth. He's more hopeful about ray-tracing, where he thinks we should see some substantial optimizations in the coming decade. He's clearly most excited about AI, where he seems to think the greatest potential for improvement lies.

So, my takeaway is that it's not a given Sony won't just keep using the PS5's fork of RDNA2. The only reason to think otherwise is Project Amethyst, where they're collaborating with AMD on a deeper rethink of the best hardware architectures for the sorts of AI Sony thinks their PlayStations need. Interestingly, this collaboration is bidirectional, meaning it could even influence upcoming AMD GPUs.

BTW, Micron recently released a slide with their roadmap of memory technologies and data rates through 2028:

[Image: Micron roadmap slide of memory technologies and data rates through 2028]

It could give us an idea of what sort of memory bandwidth the PS6 will have when it launches. For cost reasons, I'd be a little surprised if Sony goes beyond a 256-bit bus. By 2027, they could use 24 Gbps GDDR7, which would take them from 448 GB/s (base PS5) to 768 GB/s (or maybe higher, since the slide says 24+ Gbps).

I don't think it'll be cost-effective for them to get enough bandwidth from on-package LPDDR6, since they'd need to implement at least 6 stacks of it. So, my bet is they'll still be looking at GDDR7, with maybe a slow-path DDR5 pool (like the PS5 Pro added), mainly to free up a little more capacity and bandwidth in the GDDR7. While the secondary pool sounds expensive, it's something they could eliminate in cost-optimized respins, as GDDR7 of higher capacity and bandwidth becomes available.
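To put the bandwidth math in one place: peak bandwidth is just bus width times per-pin data rate. Here's a minimal sketch of that arithmetic; the 256-bit bus and the 24 Gbps GDDR7 rate are the assumptions from above, not confirmed specs:

```cpp
#include <cstdio>

// Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps
// (using the decimal GB/s convention memory vendors quote).
static double peak_gbps(int bus_width_bits, double pin_rate_gbps) {
    return (bus_width_bits / 8.0) * pin_rate_gbps;
}

int main() {
    // Base PS5 is a known configuration; the 24 Gbps GDDR7 case is speculation.
    std::printf("256-bit GDDR6 @ 14 Gbps (base PS5): %.0f GB/s\n", peak_gbps(256, 14.0)); // 448
    std::printf("256-bit GDDR7 @ 24 Gbps (guess):    %.0f GB/s\n", peak_gbps(256, 24.0)); // 768
    return 0;
}
```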
 
I'd be a little surprised if Sony goes beyond a 256-bit bus. By 2027, they could use 24 Gbps GDDR7, which would take them from 448 GB/s (base PS5) to 768 GB/s (or maybe higher, since the slide says 24+ Gbps).
I'd be shocked if they went beyond 256-bit, quite frankly. Enough of the rumors regarding the RTX 50 series have said they'll have at least 28 Gbps GDDR7 for that to be believable, which means 32 Gbps (Samsung's original press release cited 32 Gbps) is probably sampling. I think anything in that range is on the table, which, as you say, means a significant bandwidth increase that could be as high as double the base PS5.
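Just to put numbers on "as high as double": with the same assumed 256-bit bus, the rumored 28 and 32 Gbps rates work out like this (back-of-the-envelope only, not specs):

```cpp
#include <cstdio>

int main() {
    const double base_ps5_gbs = 448.0;         // 256-bit GDDR6 @ 14 Gbps
    const double rates_gbps[] = {28.0, 32.0};  // rumored GDDR7 speeds from the discussion above
    for (double rate : rates_gbps) {
        double bw = (256 / 8.0) * rate;        // 256-bit bus assumed
        std::printf("%.0f Gbps -> %.0f GB/s (%.2fx base PS5)\n", rate, bw, bw / base_ps5_gbs);
    }
    return 0;  // prints 896 GB/s (2.00x) and 1024 GB/s (2.29x)
}
```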
So, my takeaway is that it's not a given Sony won't just keep using the PS5's fork of RDNA2. The only reason to think otherwise is Project Amethyst, where they're collaborating with AMD on a deeper rethink of the best hardware architectures for the sorts of AI Sony thinks their PlayStations need. Interestingly, this collaboration is bidirectional, meaning it could even influence upcoming AMD GPUs.
I think whatever Microsoft does with their next-generation hardware will likely give us some insight (rumors all point towards 2026). It's possible that their software stack is more robust, given that the current Xbox OS seems to basically be a Windows fork, which may give them more leeway hardware-wise. I just don't really think the two will be markedly different, though a focus on RT and AI could certainly shape how the overall APUs look.

With the way AAA graphics have been going, my thoughts on next-gen consoles have somewhat shifted. I wouldn't be surprised if there was an emphasis on RT over baked lighting. At this point I also expect Microsoft to follow in Sony's footsteps with their own hardware-supported upscaling technology (I would love for them to add upscaling and frame generation to DX12 so these could be standardized across hardware). It just seems like the software solutions will cost less than throwing in more silicon to brute-force the problem.
 
I think whatever Microsoft does with their next-generation hardware will likely give us some insight (rumors all point towards 2026). It's possible that their software stack is more robust, given that the current Xbox OS seems to basically be a Windows fork, which may give them more leeway hardware-wise. I just don't really think the two will be markedly different, though a focus on RT and AI could certainly shape how the overall APUs look.
If Microsoft has held a strict line on game developers using D3D and HLSL for GPU programming, then they won't face the same binary-compatibility issues Sony does.

Sony doesn't use a standard API for their console GPUs, as far as I know. It's custom, low-level, and low-abstraction. And apparently, they let games include pre-compiled shaders. Even if they support Vulkan, that still doesn't address the question of shaders, since Vulkan has no built-in shading language. From what I've read, Sony does have an HLSL-like shading language, which they call PSSL, but either it's statically compiled or a significant number of games don't use it.
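To make the contrast concrete, here's a minimal sketch of the model Microsoft enforces on PC: the game ships HLSL source (or portable bytecode), and the driver does the final lowering to whatever ISA the GPU actually uses, so an architecture change gets absorbed driver-side instead of breaking shipped games. This is just the stock Windows D3DCompile path, nothing Xbox-specific, and the shader itself is a throwaway example:

```cpp
// Minimal sketch (Windows; link with d3dcompiler.lib): compile HLSL at runtime
// into portable bytecode. The game never ships GPU-native binaries, so a new
// GPU architecture only needs a new driver backend -- the game is untouched.
#include <windows.h>
#include <d3dcompiler.h>
#include <cstdio>
#include <cstring>

int main() {
    const char* hlsl =
        "float4 main() : SV_Target { return float4(1.0, 0.5, 0.25, 1.0); }";

    ID3DBlob* bytecode = nullptr;
    ID3DBlob* errors = nullptr;
    HRESULT hr = D3DCompile(hlsl, std::strlen(hlsl), "example.hlsl",
                            nullptr, nullptr,   // no macros, no include handler
                            "main", "ps_5_0",   // entry point, shader model target
                            0, 0, &bytecode, &errors);
    if (FAILED(hr)) {
        if (errors) {
            std::printf("%s\n", static_cast<const char*>(errors->GetBufferPointer()));
            errors->Release();
        }
        return 1;
    }
    std::printf("Compiled %zu bytes of portable shader bytecode\n",
                bytecode->GetBufferSize());
    bytecode->Release();
    return 0;
}
```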

I also expect Microsoft to follow in Sony's footsteps with their own hardware-supported upscaling technology (I would love for them to add upscaling and frame generation to DX12 so these could be standardized across hardware).
They took the same basic approach as Direct3D: introducing an abstraction layer above the vendor-specific solutions for games to use.

Also, this: https://www.tomshardware.com/video-...-upscaling-api-for-dx12-games-gets-an-upgrade
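For anyone who hasn't looked at DirectSR, the general shape of such an abstraction layer is easy to sketch. To be clear, this is not the actual DirectSR API; every name below is made up purely to illustrate the "one interface, many vendor backends" idea:

```cpp
// Illustrative only -- hypothetical names, not the real DirectSR interfaces.
#include <cstdio>
#include <memory>

struct UpscaleParams {
    int input_width, input_height;    // render resolution
    int output_width, output_height;  // display resolution
};

// The game codes against this interface once...
class IUpscaler {
public:
    virtual ~IUpscaler() = default;
    virtual void Upscale(const UpscaleParams& p) = 0;
};

// ...and the runtime/driver supplies whichever backend the GPU supports.
class DlssBackend : public IUpscaler {
public:
    void Upscale(const UpscaleParams& p) override {
        std::printf("DLSS: %dx%d -> %dx%d\n", p.input_width, p.input_height,
                    p.output_width, p.output_height);
    }
};

class FsrBackend : public IUpscaler {
public:
    void Upscale(const UpscaleParams& p) override {
        std::printf("FSR: %dx%d -> %dx%d\n", p.input_width, p.input_height,
                    p.output_width, p.output_height);
    }
};

std::unique_ptr<IUpscaler> CreateUpscalerForCurrentGpu(bool nvidia_gpu) {
    // A real abstraction layer would pick the backend via the driver,
    // not a bool -- this just keeps the sketch self-contained.
    if (nvidia_gpu) return std::make_unique<DlssBackend>();
    return std::make_unique<FsrBackend>();
}

int main() {
    auto upscaler = CreateUpscalerForCurrentGpu(/*nvidia_gpu=*/true);
    upscaler->Upscale({1920, 1080, 3840, 2160});
    return 0;
}
```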
 
They took the same basic approach as Direct3D: introducing an abstraction layer above the vendor-specific solutions for games to use.

Also, this: https://www.tomshardware.com/video-...-upscaling-api-for-dx12-games-gets-an-upgrade
This is a half measure, though, and doesn't really solve the problem from an end-user standpoint. It does make things simpler from a development standpoint, but you still end up with varying quality across the solutions.

(I do understand Microsoft not wanting to pick a winner or anything of that variety though)
 
This is a half measure, though, and doesn't really solve the problem from an end-user standpoint. It does make things simpler from a development standpoint, but you still end up with varying quality across the solutions.
So, you want the same quality everywhere, with the only variable being performance? I think that's not a given. I think we've been moving away from this model for a little while now, with technologies like VRS (Variable Rate Shading) and even the differing texture compression schemes supported on different GPUs. Ray tracing optimization techniques are a further area in which implementations can vary.

Graphics has always been about approximation. The pace of brute-force performance enhancements is limited, while there has been ripe fruit to pick in the realm of algorithmic optimizations. That cat is already out of the bag. Which isn't to say you're entirely wrong about Microsoft trying to impose a standard upscaling model on everyone, but each vendor's solution does seem closely tied to their hardware, and DirectSR looks like an acknowledgement of that fact, if not a resignation to that reality.

BTW, the main reason CAD programs have long favored OpenGL is that its output is well-specified. Direct3D always allowed for some variation, in the interest of performance.
 
So, you want the same quality everywhere, with the only variable being performance? I think that's not a given.
More like the same/similar methods (assuming proprietary tech isn't needed, and from what I understand it shouldn't be), because that's where the real problem comes in. Frame generation from Nvidia requires optical flow, AMD does it similarly to FSR, and Intel uses AI with a systolic array. Intel and Nvidia both generally have quality advantages, but as with everything FSR upscaling, it depends, as some implementations are much better than others.

My thoughts are mostly that if there were a standard driving it, game developers could just implement the technology how they see fit and run it on anything. I do understand this would probably require a combination of Microsoft spending money and Intel/NV/AMD giving up control, so there's no chance.

Obviously this is more from an end-user standpoint than a company one, as I'm sure they have all developed them this way for a competitive advantage. After all, DLSS FG is limited to the 40 series and newer, which is only a requirement because of how they developed it. Microsoft doing what they have is realistically the best it's going to get, but proprietary hardware solutions for technology that's only going to get more important feels like a backslide.
 
Intel uses AI with a systolic array.
That's a hardware implementation detail, not an algorithm.

My thoughts are mostly that if there were a standard driving it, game developers could just implement the technology how they see fit and run it on anything.
If the upscaling algorithm were left up to game developers, the quality would be all over the place and generally much worse than if the hardware developers or Microsoft and Sony did it. It requires a lot of expertise and quite substantial investments to do well.

proprietary hardware solutions for technology that's only going to get more important feels like a backslide.
It's an implementation detail, kind of like ray tracing. You don't want every game developer to have to implement their own ray tracing engine, do you?

It's kind of like tile-based rendering. There's an awful lot that GPUs do under the hood, in order to optimize performance as much as possible. The game developers shouldn't usually need to care - and they usually don't - so long as the API is well-designed. The only thing that's really different about upscaling and frame interpolation is how the implementation can affect output quality. However, so long as users have the flexibility to switch it off or control how much interpolation it does, they have a means of dealing with situations it handles poorly.

I can see how the quality dimension complicates GPU comparison & selection, but again we've been here before with things like texture compression and VRS.
 
That's a hardware implementation detail, not an algorithm.
It's far more generic hardware, though, as that's the only capability required for it to operate. It's just not possible to do it with what's publicly available for AMD/NV.
It's an implementation detail, kind of like ray tracing. You don't want every game developer to have to implement their own ray tracing engine, do you?
The reason they don't have to now is that DXR exists; otherwise, they'd be using vendor-specific implementations.

(I'm not well versed enough in the behind the scenes to know whether or not the same would be possible for upscaling/FG)
 
(I'm not well versed enough in the behind the scenes to know whether or not the same would be possible for upscaling/FG)
Towards the end, Cerny's presentation goes into some detail about how deep Sony went into the hardware in order to implement the upscaling technology they had chosen. It required substantial design changes to the GPU shader cores themselves.

I'd love to see a quality comparison between DLSS (without framegen) and PSSR. Sony had to imbue the PS5 Pro with AI inferencing performance equivalent to an RTX 4070 Ti (non-sparse). Yet the raster and FP32 compute performance of its GPU is still in the ballpark of an RTX 4060 or 4060 Ti.
 
I guess I'll have to watch it then, because everything you've said already seemed interesting and this is like the cherry on top.
I clearly felt it was worth the time, and I rarely ever watch anything on YouTube. It's like 99% a tech talk, with virtually no marketing spin at all.

I did watch it on my phone, so that I could do chores while listening and only glanced at the screen when it sounded like there might be something to see. From what I recall, the slides generally weren't too informative. You could probably just listen to the audio and miss nothing.

P.S. Cerny first made his name by developing the game Marble Madness, back in the 1980s. At the time, I think its graphics were pretty ground-breaking. It's actually pretty cool to read about the underlying tech and all the tricks needed to pull it off!

[Image: Marble Madness screenshot]

If you're interested in playing a modern take on the genre, there's an open-source game called Trackballs that's included with most Linux distros out there. I recommend using either a real trackball or the arrow keys; it doesn't work well with a mouse.
 