Sony: Developers Requested x86 Architecture for PS4

@catfishtx I could be wrong, but I *think* that's how every generation of "new, powerful" consoles gets started... lots of PC ports, because the PC has long since eclipsed the previous console generation, and they need filler titles for the new console to show off at least some of its new power.
 
[citation][nom]palladin9479[/nom]Amiga was WAY ahead of its time, though its price prevented it from ever catching on. The Amiga had custom chips for nearly every function (CPU, video, audio and I/O) along with multiple memory buses. It was an amazing system for its era, and its design allowed it to do things nobody else could. Of course, all that customization made it hard to code for and too expensive to really be competitive with the IBM clones that started to surface. You can go out and get Amiga Forever and play around with it; kinda amazing for its time.
Now, to the commenter mentioning "x40 slower the gpu": no, they're not. GPUs are just vector processors, meaning they're designed to compute vector math. CPUs are integer processors (called scalar), meaning they're designed to compute single integer / logic instructions. Through internal SIMD coprocessors, a generic x86 CPU can compute SIMD vector instructions (SSE/AVX/FMA), which also act to process the old 80-bit FPU instructions. Vector processors are really, REALLY good at doing large amounts of simultaneous array math; they absolutely suck at doing single integer calculations or logic comparisons (IF / OR / XOR / AND). Vector CPUs have been around since the '70s, and they are used for different things than scalar CPUs are. What we've found is that both types of processors work best when paired with each other. x86, for as clunky as it is, has proven a very efficient architecture for desktop and commodity computing. And by efficient I mean cheap: its performance per USD is higher than most other design types, which is why it's so prominent in applications where hardware costs dominate.https://en.wikipedia.org/wiki/Vector_processor[/citation]
Nice. Very Nice.
I just imagined you slowly walking away from a warehouse explosion as that explanation was narrated by Morgan Freeman... most likely I imagined this because I also just finished reading the latest release of Beelzebub. 😉
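To make the scalar-vs-vector point in the quoted explanation concrete, here's a minimal C sketch (purely illustrative; the function names are made up, and it assumes a compiler with SSE intrinsics such as GCC or Clang on x86). The same array addition is written once as a plain scalar loop and once with 128-bit SSE intrinsics that process four floats per instruction, roughly what the SIMD units inside an x86 CPU do:

[code]
/* Scalar vs. SIMD (SSE) array addition: illustrative sketch only.
   Build with something like: gcc -O2 -msse demo.c */
#include <xmmintrin.h>  /* SSE intrinsics */
#include <stdio.h>

#define N 8

/* Scalar version: one addition per loop iteration. */
static void add_scalar(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* Vector version: four additions per instruction via 128-bit SSE registers. */
static void add_sse(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);            /* load 4 floats */
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&out[i], _mm_add_ps(va, vb)); /* 4 adds at once */
    }
}

int main(void)
{
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
    float s[N], v[N];

    add_scalar(a, b, s, N);
    add_sse(a, b, v, N);

    for (int i = 0; i < N; i++)
        printf("%g %g\n", s[i], v[i]);  /* both columns print 9 */
    return 0;
}
[/code]

The same idea, scaled up to thousands of lanes, is what makes a GPU great at bulk array math and clumsy at the single-result integer and branch work a scalar CPU handles.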
 
That ability has been the primary reason the x86 ISA has kept up with the others.

I disagree... I think the primary reason is that, given Intel's massive manufacturing and design advantage over just about everyone else, they can advance whatever product they choose to support to the front of the pack. Does anybody besides them even have the ability to produce 22nm CPUs yet? And I've had my Ivy Bridge since last year, LOL. If Itanium hadn't been such an EPIC failure (ha ha, pun intended) with x86 legacy apps, and had AMD not created x86-64, I have no doubt we'd all be on Itanium chips right now.

It was the golden age of AMD CPUs, though: Intel created the flawed Itanium and NetBurst CPUs while AMD was dishing out cheap and fast Athlon XPs and innovative Athlon 64s/Opterons. And yet, because of Intel's manufacturing, they were able to keep up with the A64s and XPs for the most part by pushing clocks into the stratosphere to offset the poor IPC of the P4.
 
I hate to say it, but: thank you, Microsoft, for creating a developer-friendly platform that pushed this issue.
Don't get me wrong, I am a die-hard PC gamer, but I'm not sure IF MS is in our (PC gamers') corner. Win8 seems to be pushing toward a closed platform, and I do NOT like that.
 
I'm happy for the switch to x86 and everything, but if I'm not mistaken, the PS2 was the best-selling console of all time, and the PS3 has outsold the Xbox 360 worldwide. I'm not saying there isn't or shouldn't be sufficient pressure for a more unified, if not more efficient, architecture, but I wouldn't place the reasoning squarely on sales; that's not what drove it. It comes down to the reality that enough developers complained about the cost of development that everyone got together and said, "let's make this all simpler."
 
[citation][nom]blppt[/nom]Don't know why you were downvoted, but you were absolutely right about the PS2/3. It was kinda funny: Sony had massive success with the PSX because it was incredibly easy to make games for, with a single efficient CPU and a single feature-laden, powerful graphics chip. AND they had a good developer toolset. So then what did Sony do? Completely ignore one of the most important reasons they dominated the Saturn (which was a hodgepodge of oddly slapped-together silicon and difficult developer tools) and put out the PS2, which made nothing easy for developers with its odd Graphics Synthesizer + Emotion Engine combo. When utilized to their full potential, the EE + GS could be quite potent; it was just very difficult and time-consuming to do so. Same with the PS3: a new, odd type of CPU that made its competition (360) look easy to optimize by comparison. At least they've apparently FINALLY learned from their mistakes.[/citation]

Agreed.
Before the first Xbox, Sony's PSX did have the console market tied up. It makes me wonder if Sony thought that by making the next platforms so labor-intensive to develop for, maybe devs wouldn't bother rewriting for a competing platform. In any case, MS took advantage of that decision regardless of the motives behind it.
 
[citation][nom]digiex[/nom]Rumors: Nintendo going ARM+Nvidia way... jk.[/citation]

Might not be a joke. Nintendo might not be able to stay in the mix unless the Wii U hits one out of the park. My son has the system, and so far I'm not impressed, but I have only played a few games on it. Maybe the mobile platform is their best option (the future is mobile anyway), but the current battlefield is in the living room, and I just do not see a reason to buy a Wii U over our Wii.
I play PC, so the Wii is used by my wife more than anyone.
 
Sony actually co-developed the PlayStation 4's APU with AMD, so expect some platform-specific surprises. Of course, a major reason would be to modify its DRM... it's Sony, after all...
 
I already posted this info on another forum, but someone asked "why consoles?"

Well, for one big reason: the lack of DRM software (looking at you, Uplay, Origin, etc.). Consoles will just work and won't need crappy third-party software installed just to run a game.

The lack of high-end graphics sucks, but if I never have to install Origin or Uplay, then I'll be happy. So I will continue to play on both, and when a game comes out that needs third-party installs? I'll just buy the console version instead.
 
[citation][nom]sanityvoid[/nom]The lack of DRM software (looking at you, Uplay, Origin, etc.). Consoles will just work and won't need crappy third-party software installed just to run a game.[/citation]

MS is planning on using account-locking DRM on the next Xbox, which could essentially end used-game sales. No, you don't have to personally install any DRM software, but the scary thing is that Microsoft could already be updating its DRM transparently on the next Xbox, since you have very little control over what is installed if you want to sign on to Xbox Live (which, according to rumors, the next Xbox will mandate for playing games).

As for Steam, it's such a small, unobtrusive program that it doesn't bother me to have it running for a given game. Even the much-maligned Origin doesn't appear to be much to worry about (I installed it for Mass Effect 3). Now, if these stores keep cropping up with exclusive titles you can't get on any other digital store, then it will start getting annoying having 40 different digital store clients for your game library.
 
[citation][nom]palladin9479[/nom]Itanium (Intel64, EMT64) had its purposes, but it absolutely sucked for general computing. It relied heavily on the compiler doing the branch prediction at compile time to factor in the additional instructions needed for its VLIW architecture. Compilers cannot predict the future, so many of the compiled operations ended up being useless and are discarded during execution. On the other hand, if the code you're compiling doesn't contain variable jumps, then you can get a lot of performance out of a VLIW setup. It's why VLIW was a good architecture for GPUs.[/citation]

EMT64 was also not Itanium; it was one of Intel's early 64-bit extensions to x86, created in reaction to AMD's Athlon 64/FX CPUs. EMT64 was common in many of the later Pentium 4 and Pentium D CPUs. As I recall, it may have even been Intel's first 64-bit extension of x86, or at least nearly the first, to be implemented widely.
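A rough way to picture the VLIW point in the quote above (a hypothetical C sketch, not tied to any actual Itanium code; the function names are invented): the first loop has no branches in its body, so a compiler can statically pack its independent multiply-adds into wide instruction words, while the second loop's data-dependent branch can only be guessed at compile time, and work scheduled for the wrong side is thrown away at run time.

[code]
/* Why statically scheduled (VLIW/EPIC-style) hardware likes straight-line
   math but struggles with data-dependent branches. Sketch only. */
#include <stddef.h>

/* Easy case: no branch inside the loop body, so the compiler can schedule
   every operation ahead of time. */
void saxpy(float *y, const float *x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Hard case: which side of the branch runs depends on the data, which the
   compiler cannot know when it lays out the instruction schedule. */
float sum_positive(const float *x, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++) {
        if (x[i] > 0.0f)          /* data-dependent jump */
            sum += x[i];
        else
            sum -= 0.5f * x[i];
    }
    return sum;
}
[/code]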
 
[citation][nom]mikenygmail[/nom]The way the article stated it made it sound like AMD and SONY were both 50% responsible for the APU, which is ridiculous because AMD alone makes APUs. AMD made the APU and customized it, with some input and minor tech from SONY, for the PS4.[/citation]
Oh that way. I just interpreted co-developed as "helped" in the article. :lol:

[citation][nom]blppt[/nom]I disagree... I think the primary reason is that, given Intel's massive manufacturing and design advantage over just about everyone else, they can advance whatever product they choose to support to the front of the pack. Does anybody besides them even have the ability to produce 22nm CPUs yet? And I've had my Ivy Bridge since last year, LOL. If Itanium hadn't been such an EPIC failure (ha ha, pun intended) with x86 legacy apps, and had AMD not created x86-64, I have no doubt we'd all be on Itanium chips right now. It was the golden age of AMD CPUs, though: Intel created the flawed Itanium and NetBurst CPUs while AMD was dishing out cheap and fast Athlon XPs and innovative Athlon 64s/Opterons. And yet, because of Intel's manufacturing, they were able to keep up with the A64s and XPs for the most part by pushing clocks into the stratosphere to offset the poor IPC of the P4.[/citation]
Though I think Intel's real turnaround was the tick-tock cycle. They weathered the Athlon 64 storm because of, as you said, pushing clocks, and also because of dirty tricks :|

@palladin9479: Thanks! Excellent info :)
 
What exactly the PS4 will use for an operating system is what I'm wondering, and how long until it gets cracked to work on regular PCs, like OS X was.
 
[citation][nom]Darkk[/nom]There is nothing wrong with using the x86 platform, as it's been enhanced over the years with additional instruction sets. But the biggest difference here is that the APU/GPU is the one driving the game, not just the CPU.[/citation]
Agreed, there is nothing wrong with x86. In fact, if what you want is a lot of computing power and you can afford a large die, then it IS the instruction set to be on; nothing can rival it today. This is because large dies and lots of transistors have made CISC the dominant architecture, and x86 is the last surviving CISC architecture. When computing demands increase on smartphones and tablets, it's going to wipe out ARM there too. News update: x86 is more powerful per watt than ARM; ARM is about low total power and small dies. Every compiler-centric type of architecture (RISC, EPIC, VLIW, Cell) loses to CISC when the scale goes up far enough, not to mention they have a ton of other drawbacks. x86 will rule the world (if it doesn't already).
 
[citation][nom]ojas[/nom]Ah, interesting. Didn't know that (too young back then). And the forums sort of suck; I posted something that didn't post and now I'm in no mood to type it over again. But in short, I think that the fact that even Intel couldn't come up with something viable to replace its own ISA probably says more about x86 than about the replacement. x86 is like QWERTY: it has its flaws, but it's probably the best we have, and hard to replace.[/citation]

x86 is the ultimate, as long as we're interested in more computing power (and we are). It's not that there couldn't have been competition. There was another growable CISC around for a time: the 68k. This was Motorola's family, which powered the Amiga, Mac, Atari... basically everything but the PC. Motorola's board eventually concluded, wrongly, that the future belonged to RISC, and committed CPU suicide by going with IBM's Power instead.
There are a number of things going for CISC:
The ISA is formal, which means the inner workings of the CPU can differ a lot without the software needing to take it into account. That means a lot of generations, tiers, technologies and manufacturers can co-exist on the same software binaries, creating a mass-market advantage for development efforts on both CPUs and software. It also means the software is reliable, and that any crazy way of doing things internally is OK: the software doesn't care, the compiler doesn't care, the CPU is a black box. This is the way CISC beat RISC. All the supercomputing technologies could, and did, move into the x86 CPU, and then some: prefetch, pipelining, superscalar execution, out-of-order execution, instruction fission, vector processing, instruction fusion. And in all this, there came a day when the CPU grew so powerful that one of the main bottlenecks was moving work in and out of the CPU, and this is where all the compiler-centric architectures were truly F**ed. (A small sketch of the "same binaries across many CPUs" idea follows below.)
But there is another reason, and that is an insight about complex systems: the most efficient way, and sometimes the only workable way, of handling complexity is to do it at the lowest possible level, i.e. inside the CPU, NOT in the compiler. Of course, EPIC's magnificent failure had already proved this principle in practice.
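One small, concrete illustration of the "same binaries across many implementations" point above (a hedged sketch that assumes GCC or Clang, which provide the __builtin_cpu_supports() extension): the exact same x86 executable can probe at run time which vector hardware the particular chip happens to have and pick a code path accordingly, without ever being recompiled for a new generation.

[code]
/* One binary, many x86 implementations: probe CPU features at run time.
   Sketch assuming GCC or Clang, which provide __builtin_cpu_supports(). */
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();  /* initialize the feature-detection support */

    printf("SSE2: %s\n", __builtin_cpu_supports("sse2") ? "yes" : "no");
    printf("AVX:  %s\n", __builtin_cpu_supports("avx")  ? "yes" : "no");
    printf("AVX2: %s\n", __builtin_cpu_supports("avx2") ? "yes" : "no");

    /* A real program would use these answers to dispatch to an SSE, AVX,
       or plain scalar code path, so one executable spans a decade of CPUs. */
    return 0;
}
[/code]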
 