PlayStation 4 Announced: PC-based AMD Hardware Inside

[citation][nom]Onihikage[/nom]Something to think about - if both the Nextbox and PS4 will be running on all AMD hardware, does that mean multiplatform games might run better on AMD-powered PCs vs Intel/Nvidia? I know Intel generally has superior performance to AMD in the CPU biz, and Nvidia pulls ahead slightly in the GPU field, but I'm a curious guy.[/citation]

In raw gaming performance this generation, AMD managed to edge out Nvidia, with the exception of the extremely expensive Titan.

Games were already getting more forgiving of AMD's many-core designs even without consoles. Consoles may speed it along for once, but it's happening either way.
 
[citation][nom]TheBigTroll[/nom]for about 2teraflops on the GPU, id say we are getting 7850-7870 performance in terms of speed. pretty good[/citation]I am quite happy to see the GPU as long as it is anything above a 7770. But the problem we should worry about is the 8-core tablet-class CPU at only 1.6GHz; it could be about the speed of a 1.1GHz 8-core Trinity.
 
Sweet, that really IS a headphone jack on the controller! When I first saw the leaked pics I thought it might have been a prototype controller with a digital 3.5mm jack for devs to use for data input/output.
 
[citation][nom]bluestar2k11[/nom]10x the performance?Someone has high hopes.If only 10x was possible.Maybe 2x the performance, at best, but not even close to 10x, otherwise the PS3 would run circles around any desktop and the xbox.[/citation]

In this article from 2011, AMD agrees that a 10x performance advantage is possible depending on the situation, partly because of the handicap that the DirectX layer creates. Certain functions sent straight to the metal, which isn't practical on PCs due to hardware variation, can be 10-100x faster than going through DirectX/OpenGL:

http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/2

"If you want each of your draw calls to be a bit different, then you can't get over about 2-3,000 draw calls typically - and certainly a maximum amount of 5,000. Games developers definitely have a need for that. Console games often use 10-20,000 draw calls per frame, and that's an easier way to let the artist's vision shine through."


For the mathematically challenged: 20,000 is 10x 2,000.
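
To put rough numbers on that, here's a quick back-of-envelope sketch of how per-draw-call overhead eats into a 60 fps frame budget. The per-call costs are purely illustrative assumptions, not measured DirectX or console figures:

[code]
// Back-of-envelope only: the per-call costs below are illustrative assumptions,
// not measured DirectX or console numbers.
#include <cstdio>

int main() {
    const double frame_budget_ms = 1000.0 / 60.0;  // 16.7 ms per frame at 60 fps
    const double api_cost_ms     = 0.005;          // assumed ~5 us per draw call through a thick API
    const double metal_cost_ms   = 0.0005;         // assumed ~0.5 us per call closer to the metal

    const int counts[] = {2000, 5000, 20000};
    for (int calls : counts) {
        double api_ms   = calls * api_cost_ms;
        double metal_ms = calls * metal_cost_ms;
        std::printf("%6d calls: %5.1f ms via API (%3.0f%% of frame budget), %4.1f ms direct\n",
                    calls, api_ms, 100.0 * api_ms / frame_budget_ms, metal_ms);
    }
    return 0;
}
[/code]

With those assumed costs, a few thousand API-bound calls already eat most of the frame budget, while 20,000 direct calls still fit - which is the gist of the quote above.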
 
It seems like you're looking at an absolute best-case scenario, considering the numbers in your link. You can go from 2,000 to 20,000, but going from, say, 3,000 to 10,000 would be the worst case by those numbers, and that's only a little over three times more. Furthermore, your link says it can be up to around 5,000 on PCs, making the difference between consoles and PCs in that situation much, much smaller than a huge 10x difference. I also don't know where you got that 10x figure extending up to 100x, since I can't find any info on that number in your link.
 
Stop acting like you can't read, unless you actually can't. Nowhere did I say or even imply that the 10x performance gap was universal under every condition. As for the 10x-100x comment: Google is your friend.

This article was also from 2011, so it was comparing a five-year-old console to the best the PC had to offer. Lord knows the beating, by these same measures, that the PS4 is going to put on the best a PC can offer at the end of the year, which will likely be the $1,000 Titan.
 
[citation][nom]TheBigTroll[/nom]for about 2teraflops on the GPU, id say we are getting 7850-7870 performance in terms of speed. pretty good[/citation]
But can it run Crysis 3 on ultimate settings? Nope. You'll need at least a GTX 680 for that.
 
[citation][nom]JOSHSKORN[/nom]But can it run Crysis 3 on ultimate settings? Nope. You'll need at least a GTX 680 for that.[/citation]

Considering it'd be console-optimized, it probably could run Crysis 3 about as well as or maybe even better than a GTX 680.
 
[citation][nom]tomfreak[/nom]I am quite happy to see the GPU as long as it is anything above 7770. But the problem we should worry is the 8 core tablet CPU @ only 1.6GHz = it could be about the speed of 1.1GHz 8 core trinity.[/citation]
The real difference between a "real" CPU and a tablet/phone CPU is the focus on absolute lowest electrical power possible at the expense of raw processing power. AMD's cores here likely have much better IPC and overall throughput than ARM CPUs. Though Intel's latest mobile x86 CPUs are blurring the line by outperforming most popular ARM chips on both throughput and power efficiency.

Since Sony is going with a custom AMD APU, the main reason for the slow CPU clock is likely the GPU-oriented (low clock but massively parallel) lithography process. Intel's Xeon Phi also trades lower clock speeds for massive parallelism.

There is no miracle work-around: mass-produced, massively parallel chips require processes tuned for huge transistor counts, and those usually come with lower clocks.
 
Should go on MAXPC and read the console nerds talking smack about how glad they are that PC gaming is dying and that an 8-core CPU is the newest thing out, as if PCs don't have them - it made me laugh to tears. They can keep enjoying their 8-24 player maps; I'm going to hop back onto my 64-player maps. Also, why is there an LED on the damn controller? I'm going to be staring at the screen, not the controller... that's as bad as the Kinect racing games telling you to look to the side while you're driving.
 
[citation][nom]InvalidError[/nom]The real difference between a "real" CPU and a tablet/phone CPU is the focus on absolute lowest electrical power possible at the expense of raw processing power. AMD's cores here likely have much better IPC and overall throughput than ARM CPUs. Though Intel's latest mobile x86 CPUs are blurring the line by outperforming most popular ARM chips on both throughput and power efficiency.Since Sony is going with a custom AMD APU, the main reason for the slow CPU clock likely is the GPU-oriented (low clock but massively parallel) lithography process. Intel's Xeon Phi also trades lower clock speeds for massive parallelism.There is no miracle work-around, mass-produced massively parallel chips require processes tuned for massive transistor count and those usually come with lower clocks.[/citation]The IPC of the Jaguar core is said to be only about 15% better than Bobcat's. So you are looking at performance roughly 15% above a 1.6GHz quad-core Bobcat, which is about a 1.1-1.3GHz quad-core desktop CPU like a Core 2 Quad.
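
Spelling out the crude clock-times-IPC arithmetic behind that guess (the Core 2 vs. Bobcat IPC ratio here is just my assumption, not a benchmark):

[code]
// Crude clock x IPC estimate in the spirit of the discussion above.
// The ratios are assumptions, not measurements.
#include <cstdio>

int main() {
    const double jaguar_clock_ghz     = 1.6;
    const double jaguar_ipc_vs_bobcat = 1.15;  // the "~15% better than Bobcat" claim
    const double core2_ipc_vs_bobcat  = 1.6;   // assumed: Core 2 does noticeably more per clock

    // Core 2 clock that would match one Jaguar core on this very crude model.
    double equivalent_core2_ghz = jaguar_clock_ghz * jaguar_ipc_vs_bobcat / core2_ipc_vs_bobcat;
    std::printf("One 1.6 GHz Jaguar core ~ a Core 2 core at %.2f GHz (crude model)\n",
                equivalent_core2_ghz);
    return 0;
}
[/code]

That lands around 1.15 GHz per core, in the same ballpark as the 1.1-1.3GHz figure above.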
 
I'm no technical wizard by any stretch, but what I understood from that is basically that it's a trade-off at any given point in a console's life span. It has significant power, but that power comes from maturity and developer experience rather than by default.

In the early years, developers use API calls to build a game faster than coding from scratch while they learn the system they're working with, in which case it isn't much faster than the average gaming PC, because it goes through the same abstraction layers.

Later in the life cycle, performance picks up as developers learn to code direct to the metal rather than through the API, increasing speed as you said.

The problem I see is that while it may indeed deliver the performance you describe, in the same time frame it takes developers to master direct-to-metal coding on the console, PC hardware also gains quite a bit of processing power, and mastering that process takes a few years.

My understanding is that the Xbox was more graphically powerful than the PC of its day because of the low overhead and fewer software layers, but after a few years it fell behind because it can't be upgraded the way a PC can, and PCs rapidly gained CPU and GPU power, along with memory bandwidth, during that time frame, surpassing the system. As the console matured to the point of greater graphics output, the PC matured just as much, but the console eventually hit a wall that the PC did not.

Even though those performance gains may be real under ideal programming conditions, they aren't enough to produce a flat 10x improvement across the board. And even when they do materialize, hardware limitations restrict how far you can go: it may issue work 10x faster, but once you hit the ceiling of the GPU's output it doesn't matter anymore, because every chip can only do so much per second. That's why consoles may render efficiently yet can't push past 720p except in less demanding titles, and my understanding is they can't use MSAA either.

It does have advantages, though. Perhaps someday soon we'll see the end of heavy APIs on Windows, Linux, etc., and tremendous gains will be made on PC as he stated. I'll be curious to see what my PC is capable of when that comes true.
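
To illustrate the ceiling point with a toy model (all the millisecond figures below are made up for the example):

[code]
// Toy model of the "GPU ceiling": a frame can't finish faster than its slowest
// side, so shrinking CPU/API time only helps until the GPU becomes the bottleneck.
#include <algorithm>
#include <cstdio>

int main() {
    const double gpu_ms = 14.0;  // assumed fixed GPU workload per frame

    const double cpu_cases[] = {30.0, 14.0, 3.0};  // API-heavy, balanced, direct-to-metal
    for (double cpu_ms : cpu_cases) {
        double frame_ms = std::max(cpu_ms, gpu_ms);  // simplistic: whichever side is slower wins
        std::printf("CPU %5.1f ms, GPU %4.1f ms -> frame %5.1f ms (%.0f fps)\n",
                    cpu_ms, gpu_ms, frame_ms, 1000.0 / frame_ms);
    }
    return 0;
}
[/code]

Cutting CPU-side time from 30 ms to 3 ms roughly doubles the frame rate here, but going any further changes nothing because the GPU is already the limit.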

[citation][nom]kinggremlin[/nom]In this article from 2011, AMD agrees the 10x performance advantage is possible depending on the situation partly because of the handicap layer that DirectX creates. [/citation]
 
[citation][nom]tomfreak[/nom]The IPC of the jaguar core are said to be 15% better than bobcat only. So u are looking at performance close to ~15% more than a 1.6GHz quad core bobcat ~ 1.1-1.3GHz quad core desktop CPU like core 2 quad.[/citation]

If that's true, that's horrible.
 
And I'm also not really impressed by the specs. Only 1.84 TFLOPS for the GPU?

Yes, it will be optimized, but we've already seen how far that can go - hint: it's not far. PCs could still run at higher resolutions, higher FPS, and higher texture detail than consoles could.

The 660 Ti already has nearly 1 TFLOPS higher performance. And that is a midrange card from this generation of GPUs, never mind the next ones.
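
For reference, those peak numbers come from the usual shader-count x 2 FLOPs (multiply-add) x clock formula. The PS4 figures below follow the widely reported 1152 shaders at 0.8 GHz, which line up with the official 1.84 TFLOPS; treat them as assumptions:

[code]
// Peak single-precision throughput: shaders x 2 FLOPs per clock (FMA) x clock.
// PS4 shader count/clock are the widely reported figures rather than officially
// itemized specs; the 660 Ti line uses its base clock.
#include <cstdio>

double peak_tflops(int shaders, double clock_ghz) {
    return shaders * 2.0 * clock_ghz / 1000.0;  // convert GFLOPS to TFLOPS
}

int main() {
    std::printf("PS4 GPU    : %.2f TFLOPS\n", peak_tflops(1152, 0.800));
    std::printf("GTX 660 Ti : %.2f TFLOPS\n", peak_tflops(1344, 0.915));
    return 0;
}
[/code]

Peak numbers only, of course - they say nothing about how much of that either side actually sustains in a real game.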

It seems the only way to have consoles not hold back gaming as a whole is for them to just die out.

Hopefully stupid decisions like streaming-only "backwards compatibility" will do that.
 
[citation][nom]aznjoka[/nom]I feel bad for nvidia both xbox and ps3 with AMD GPU's = AMD optimized Games. So it pretty much concludes that most games will run best with AMD.[/citation]
Not really. The consoles won't use DirectX; they will use direct hardware access. The PC is limited because it has to use DirectX so it can run on more than one type of video card, not to mention different models and generations of each brand of graphics card. So it doesn't follow that games will run better on AMD hardware at all.
 
Jaguar is more of an efficiency boost than an IPC boost over Bobcat; however, let's bear in mind that Bobcat was "90% of K8 performance" and Jaguar should exceed that handily, even before taking into account all the extra ISA extensions it supports over Bobcat (AES, SSE4.x, AVX, etc.) and the significantly improved FPU. This implementation will also feature far better memory controllers and a huge bandwidth advantage over any other implementation of Jaguar.

The console itself has a stupidly powerful (for a console) GPU, and I can't imagine developers holding off from pushing as much work to the GPU as possible... as it should be. This is what AMD has been arguing for.

There's no word on whether this Jaguar MCM has a turbo implementation, but to access the shared L2, all cores have to run at the same speed. That sounds like an issue; however, as each core can be individually put to sleep, what's to stop the remaining core(s) from being ramped up? Confirmation of the CPU's finer details cannot come soon enough - are there any ARM Cortex-A5 cores thrown in for security purposes, for example?
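
As an aside, here's a minimal sketch of how cross-platform x86 code typically probes for some of those ISA extensions at runtime, using the GCC/Clang __builtin_cpu_supports helper; nothing console-specific, just the extensions mentioned above:

[code]
// Runtime ISA-extension check on x86 with GCC/Clang builtins (minimal sketch).
#include <cstdio>

int main() {
    __builtin_cpu_init();  // populate the CPU feature table before querying it
    std::printf("SSE4.1: %s\n", __builtin_cpu_supports("sse4.1") ? "yes" : "no");
    std::printf("SSE4.2: %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
    std::printf("AVX   : %s\n", __builtin_cpu_supports("avx")    ? "yes" : "no");
    return 0;
}
[/code]

On a fixed console target, of course, none of that probing is needed - the extensions are simply guaranteed to be there.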
 
Please tell me it supports above 1080p. It won't be long until we have 4K TVs in our houses, and then it will look crappy again.
 
Hmm, I smell a HACK coming for the PS4 - it's screaming for it!

Full x86. Oh, let the mods begin!
 
[citation][nom]Onihikage[/nom]The point of a console. You have missed it. Developing games with full access to a single known hardware configuration grants more than 10x the performance than said hardware would run a PC game with. PC games & applications only interact on a software level in order to have compatibility with hundreds of potential hardware configurations.[/citation]

So you're claiming, for instance, that the HAL (hardware abstraction layer) eats 90% of the underlying performance (hint - it's less than 10% if you know how to code). So a hint: get better weed, mate, so the assumptions don't come from the land of the unicorns!

You do have a point that a fixed system setup has advantages for developers in terms of optimization and a set hardware target when the software development cycle begins. But saying the performance gain is 10x is so far off the chart that believing in little green men on Mars is on the same level of sanity.
 