AMD CPU speculation... and expert conjecture

I guess having a much larger pool of memory, accessed many times faster while also needing less of it, points to something.
With the consoles and HSA offering a larger memory pool, more cores, and better graphics, certain facets of today's PCs will suffer within 3 years, and the low end will be raised.
When each side is aware of what the other ISA is doing, with better, faster communication using far fewer resources while having far more of them, the OS will be kept busy.
Some GPUs already use pointers to a degree; it will be interesting not only to use/have them, but also to bring in the CPU and its pointers as well, with both sides aware of each other.
Can't wait.
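
For a taste of what shared CPU/GPU pointers could look like in code, here's a minimal sketch using OpenCL 2.0's shared virtual memory, which is roughly the model hUMA/HSA is aiming at; the kernel, sizes, and lack of error handling are just placeholders for illustration, not how a real engine would do it:

```cpp
// Minimal OpenCL 2.0 SVM sketch: CPU and GPU touch the same pointer.
// Kernel source and sizes are placeholders; error checking is stripped out.
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>
#include <cstdio>

static const char* kSrc =
    "__kernel void add_one(__global int* data) {"
    "    size_t i = get_global_id(0);"
    "    data[i] += 1;"
    "}";

int main() {
    cl_platform_id platform; cl_device_id device; cl_int err;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, device, nullptr, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, &err);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "add_one", &err);

    const size_t n = 1024;
    // One allocation, one pointer, visible to both the CPU and the GPU.
    int* data = static_cast<int*>(
        clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(int), 0));

    // CPU writes through the pointer (map/unmap needed for coarse-grained SVM).
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, data, n * sizeof(int), 0, nullptr, nullptr);
    for (size_t i = 0; i < n; ++i) data[i] = static_cast<int>(i);
    clEnqueueSVMUnmap(q, data, 0, nullptr, nullptr);

    // GPU reads and writes the very same pointer -- no copy, no separate buffer object.
    clSetKernelArgSVMPointer(k, 0, data);
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clFinish(q);

    // CPU reads the GPU's result back through the same pointer.
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_READ, data, n * sizeof(int), 0, nullptr, nullptr);
    printf("data[10] = %d\n", data[10]);  // expect 11
    clEnqueueSVMUnmap(q, data, 0, nullptr, nullptr);

    clSVMFree(ctx, data);
    return 0;
}
```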
 


Wait, we agree again?!?!? :D

As a general rule for sure, but on console they may resort to some manual tuning to get the best performance.

Normally yes. But given how the arch is now x86 AND grossly overpowered, I suspect you won't see devs as willing to go down to that level. For the PS3 you had to (mainly because keeping the SPEs busy was such a chore), and the older single-core consoles REALLY benefited from making sure threads got assigned in some set order. But for PCs, you don't care, because the OS does it for you.

In short, I'm arguing that by making the specs TOO good, you are going to get much less efficient code, and thus lower performance than expected. I'm worried you are going to see PC-style code for console games, and a general loss of performance as a result.

And BTW, if anyone thinks they are smarter than the Windows scheduler (I can guess two posters who probably think so), I challenge you to make a non-trivial, multithreaded app and run it once without manual thread management and once with, and show the numbers for both (assuming it doesn't crash in the second case, which I suspect it would).
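
To make the challenge concrete, here's a rough, Windows-only sketch of what I mean; the workload size and the core mapping are arbitrary placeholders, not a serious benchmark:

```cpp
// Toy comparison: OS-scheduled threads vs manually pinned threads (Windows).
// Workload and core mapping are arbitrary; this is a sketch, not a real benchmark.
#include <windows.h>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

static void work(std::vector<double>& v) {
    double acc = 0.0;
    for (int pass = 0; pass < 200; ++pass)
        for (double x : v) acc += x * 1.0000001;
    volatile double sink = acc; (void)sink;   // keep the optimizer honest
}

static double run(bool pin_threads) {
    const unsigned n_threads = std::thread::hardware_concurrency();
    std::vector<std::vector<double>> data(n_threads, std::vector<double>(1 << 20, 1.0));

    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> threads;
    for (unsigned i = 0; i < n_threads; ++i) {
        threads.emplace_back([&, i] {
            if (pin_threads)   // "manual thread management": force thread i onto core i
                SetThreadAffinityMask(GetCurrentThread(), DWORD_PTR(1) << i);
            work(data[i]);
        });
    }
    for (auto& t : threads) t.join();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main() {
    printf("OS-scheduled: %.3f s\n", run(false));
    printf("Pinned:       %.3f s\n", run(true));
}
```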


OK, and seriously, as an aside? I despise the way quotes are handled. Really, it's horrible when trying to extract a particular quote from a multi-quote block. Seriously, go back to not having quotes quoted by default, please.
 

8350rocks



I think what you're both missing is that consoles can be coded completely differently from PCs, down almost to the bare metal. Those designers/coders can design a program to utilize specific cores for specific tasks wherever it helps the objective of the program.

If you have 8 cores available to you, you know this, and they are dedicated to the purpose of your program only...why wouldn't you code your program in such a way that it maximizes the dedicated resources? For starters, it makes your program more efficient if you can run it highly threaded, and the resources are there for it to use. So, I could see your argument in a PC environment, where you never know what the end user will possess in terms of hardware. When it comes to consoles, however, I don't see your argument at all.

The guaranteed denominator is 8 for the XBOX720 and PS4...so why on earth wouldn't you code for 8 cores to be utilized effectively? Also, between AMD 8-core chips and Intel quads with HTT, you could basically guarantee that a good portion of your PC ports would be played on equivalent (enough) resources as well. The systems with quad cores will just run your program less effectively, and the systems with dual cores will be able to run minimum settings with no AA, as long as they have a GPU that can compensate for their lack of a CPU with the same resources.

That sounds a whole lot like high-end games today, doesn't it? I wonder why that is...
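
To illustrate what coding for a known 8 cores looks like versus a PC port that has to ask the OS, here's a minimal sketch; the task sizes and names are made up for illustration:

```cpp
// Sketch: sizing a worker pool for a known core count (console) vs whatever
// the end user has (PC port). Item counts are made up for illustration.
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

// On a console you can hard-code this: the hardware never changes.
constexpr unsigned kConsoleCores = 8;

void process_slice(unsigned worker, unsigned n_workers, unsigned n_items) {
    // Each worker takes a contiguous slice of the frame's work items.
    unsigned begin = worker * n_items / n_workers;
    unsigned end   = (worker + 1) * n_items / n_workers;
    for (unsigned i = begin; i < end; ++i) { /* simulate per-item work */ }
    printf("worker %u handled items [%u, %u)\n", worker, begin, end);
}

void run_frame(unsigned n_workers, unsigned n_items) {
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < n_workers; ++w)
        pool.emplace_back(process_slice, w, n_workers, n_items);
    for (auto& t : pool) t.join();
}

int main() {
    // Console build: always 8 workers, tuned once, guaranteed by the hardware.
    run_frame(kConsoleCores, 4096);

    // PC build: core count is unknown ahead of time, so query and clamp it.
    unsigned pc_cores = std::max(1u, std::thread::hardware_concurrency());
    run_frame(pc_cores, 4096);
}
```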
 

8350rocks



Honestly, of all the arguments presented, your argument about laziness in PC-style ports to consoles makes some sense. I doubt the console ports will experience similar tribulations, though, because console developers are typically used to coding that close to the hardware.

In the PC industry though, your argument holds some water...companies that don't design mostly console games could fall into that rut.
 

8350rocks



You do realize Sony, specifically, has stated they have a separate dedicated CPU for running the OS, correct? If not, the posts and links with confirmation are just above you.
 

Cazalan



I think they didn't have a choice. They had to go to more cores because of the power requirements. The GPU was allocated much more power than the CPU side, which forced them to use slower-clocked, low-end cores.

To make up for that they're leveraging HSA, which is cool, but it does take away rendering power.
 

Cazalan



I've read all those links and I still don't see what makes you think that. This secondary CPU is a weak coprocessor with some power-management capability. A coprocessor doesn't run your main OS. The APU is what has the dedicated high-bandwidth interface to the GDDR5 memory and GPU.

It wouldn't make any sense to have a coprocessor handling requests for memory allocation/de-allocation and all other kinds of kernel operations. The latency would just kill the performance and break the hUMA model they're using.

http://en.wikipedia.org/wiki/Kernel_%28computing%29

 

jdwii





I honestly did not see it, and when I Google it I can't seem to find a post stating it from Sony themselves. Then again, I could be wrong, but this is still unknown for the 720 (or whatever horrible name they come up with for that always-online console).
 

jdwii



Actually, I'm afraid that will go away because of how close the 720/PS4 really are to PC design.


 

Probably because the background tasks they want to run are intensive and they don't want games to have extremely variable frame rates. Continuously capturing video and compressing it, plus whatever else is happening for the OS, seems like a lot of work even if there is separate silicon for compression.
 

Cazalan



Well, they've gotta try and do something to "revive the PC industry". If sales keep stagnating, they'll really slow down on development. A 10%/year improvement is bad enough as it is.

The tablet craze needs to die a painful death.
 

I never have bought, and still don't buy, Intel's boasting about 2x/3x iGPU performance improvements. However, this time they're comparing mobile vs. mobile, with the HD 4000 as a baseline, instead of comparing to the desktop HD 2000/2500 iGPUs (as in past hype cycles). I think they finally designed a scalable iGPU with GT3/GT3e.
How they perform on their own, or measure up to AMD's current (and maybe future) iGPUs, is a whole other discussion and should only be addressed after they come out and get benchmarked with good drivers (drivers make a significant difference).

In a year or two, the tablet 'craze' will stabilize, with ODMs saturating every price point and market with ARM-based tablets and slowly becoming incapable of providing a big enough reason to buy a new one. They'll resort to content deals and distribution... basically much more emphasis on media consumption... kinda like Amazon's. x86 will have a good chance to fight for market share at that time. The current Windows 8 Atoms are inadequate, and MS won't be able to undercut Android because their core business is selling an OS, and nothing can undercut 'free' or 'low price'. Unfortunately for Intel, it looks like the x86 tablet war will be led by Jaguar, not Atom.
 
Normally yes. But given how the arch is now x86 AND grossly overpowered, I suspect you won't see devs as willing to go down to that level. For the PS3 you had to (mainly because keeping the SPEs busy was such a chore), and the older single-core consoles REALLY benefited from making sure threads got assigned in some set order. But for PCs, you don't care, because the OS does it for you.

In the PC / commodity computing world this is true. In the console / extremely integrated world this is not true. Consoles derive their performance advantage by removing many layers of abstraction between the hardware and the application wanting to use that hardware. Developers can then leverage their direct access to non-abstracted hardware to eke out more performance and optimize their code for that particular profile.

Does nobody learn ASM anymore?
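
For anyone who's never touched this level, here's a tiny sketch of the kind of hand tuning I mean: the same loop written plainly and again with SSE intrinsics. The values are arbitrary, and on a console you'd know exactly which vector units you have:

```cpp
// Sketch of close-to-the-metal tuning: the same loop written plainly and
// with SSE intrinsics (arbitrary data, just to show the shape of the code).
#include <immintrin.h>
#include <cstdio>

// Plain C++: the compiler may or may not vectorize this for you.
void scale_add_scalar(const float* a, const float* b, float* out, int n, float s) {
    for (int i = 0; i < n; ++i)
        out[i] = a[i] * s + b[i];
}

// Hand-written SSE: explicitly process 4 floats per instruction.
void scale_add_sse(const float* a, const float* b, float* out, int n, float s) {
    __m128 vs = _mm_set1_ps(s);
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(_mm_mul_ps(va, vs), vb));
    }
    for (; i < n; ++i)            // scalar tail for leftover elements
        out[i] = a[i] * s + b[i];
}

int main() {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float out[8];
    scale_add_sse(a, b, out, 8, 2.0f);
    for (float v : out) printf("%.1f ", v);   // expect 10 11 12 13 14 15 16 17
    printf("\n");
}
```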
 


I don't quite get the baseline those slides were referring to; they failed to give scores, which is odd in its own right, but despite that at least Intel has an iGPU solution, albeit only at the Iris level. It's nearly a year now since Trinity, and Intel have been targeting it as the criterion to beat; AMD have themselves stated that it's not a very high benchmark, as they term Trinity an infant in iGPU performance. The real question with Iris, given the rumors I am hearing on pricing, is that a buyer of any GT3 part will need to be very certain that integrated is the way they are going, because the pricing is not very cheap; compared to the 4770K it's a bit of a gut check.


While impressive, GT3 gets its biggest improvements from clock speed: 800MHz bumped to 1300MHz is a significantly higher clock than any AMD part is expected to feature for a while, whereas Richland achieved 22-40% on a 44MHz bump and Kaveri is rumored to only clock out at 900MHz with around 600 shaders on the top part. If it is clock speed and dedicated caching doing the work, these will add to power draw and ultimately are not sustainable going forward. With rumors of Broadwell being scrapped, Haswell may have to last until mid-2015 and may look very outdated by the end of 2013, if not against Richland.

 
So, is this the real reason AMD's stock is up suddenly?
"Intel Could Buy AMD as x86 Microprocessors Lose Share to ARM – Analysts.
AMD’s Shares Are Up on Intel Buyout Rumour"
http://www.xbitlabs.com/news/cpu/display/20130501223936_Intel_Could_Buy_AMD_as_x86_Microprocessors_Lose_Share_to_ARM_Analysts.html
I was wondering whom RR was polishing AMD up for.... :ange: :sol: ;) :pt1cable:

AMD Establishes Semi-Custom Business Unit to Create Tailored Products with Customer-Specific IP.
http://www.xbitlabs.com/news/other/display/20130501231534_AMD_Establishes_Semi_Custom_Business_Unit_to_Create_Tailored_Products_with_Customer_Specific_IP.html

*picks up a bowl of popcorn and waits for AMD fanboys to implode*

Edit: personally, I think Intel should buy Nvidia and appoint Nvidia's CEO as some kind of CTO over its graphics (consumer, server, and professional) division. But I guess AMD is much cheaper, and they don't have to pay GloFo (the timing is near-perfect).... :lol:
 


The FTC wouldn't let it happen. As it stands, Intel is dangerously close to being forcibly broken up, given how large they've become market-wise. Them attempting to consolidate the entire consumer and commodity server industry into one company pretty much screams *monopoly*.
 

mayankleoboy1

If AMD and Intel merge, Nvidia will be forced to close shop, or merge with some BIGGIE, like Sammy or QC.
Though I don't think Intel can legally buy AMD without facing antitrust action.
 
It will not happen; AMD's position and stock are related to brand stability and a product line. The APU represents their most sophisticated piece of technology, and it's evolving well from the Llano days. AMD stopped the bleeding through 2012, and the outlook was that AMD was going to move back toward normality. If the product line stays intact and delivers, then the stock will rise more. That's how it works with a brand: deliver, and consumer and investor interest grows.
 