AMD CPU speculation... and expert conjecture


noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860

The thing about that is you have to go down to medium settings just to see the advantage of HTT.

http://www.sweclockers.com/artikel/14650-prestandaanalys-battlefield-3/5



The thing is, there have been a lot of updates since this review, both in the game and in Windows scheduling. I'm regularly pushing 80+ fps, minimum ~60, with the 8120 @ 4.7 GHz with identical settings: 4x AA, 16x AF, 1920x1200, ultra. The biggest thing to note is that medium settings can't even save a dual-core CPU.

Here are some interesting tests of the "GPU bottleneck" of Metro 2033.
[screenshots: CPU usage graphs for the two configurations below]


Radeon 6970: 33.36 fps, 1 core heavily loaded, 3 cores ~50%.
Radeon 6970 CF: 65.32 fps, 4 cores heavily loaded, 2 cores ~50%.

So Metro scales with GPU power, correct?

Just for kicks, let's see how a dual-core system would work, non-shared BD modules.

[screenshots: CPU usage graphs for the dual-core configurations below]


Radeon 6970 single card: 27.10 fps
Radeon 6970 CF: 37.89 fps

Conclusion: Metro 2033 is not GPU bound in all situations, but can utilize whatever it's given. You couldn't make me run a dual-core system with high-end graphics.

So, what is it about the way Metro 2033 is written that allows it to scale well past 2 cores? The multiplayer explanation doesn't answer this.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780
Everyone who is looking at CPU usage per core is looking at things backwards.

Most of the games we get are console ports on PC. Xbox 360 has 3 cores. Why would any game developer design a game for console first that uses more cores than the hardware has available?

And we've all seen how low-effort a lot of PC ports can be. If companies don't even change their menus from being controller-optimized, and leave out in-game graphics options, why do you think they would go back and rewrite code to be multi-threaded for PC?

We don't see games use a lot of threads because Xbox 360 can't use a lot of threads.

http://www.gtaforums.com/index.php?showtopic=424226

GTA4 was constantly called a shitty PC port by PC gamers, and look at its CPU usage. Scaling stops after 3 cores. Xbox 360 has 3 cores. Do you think that is coincidence?

If PS4 and the new Xbox have 8 cores, I can guarantee you we will see game engines start to use 8 threads.

Why do you think there are all these crazy rumors about next gen consoles having ridiculous system bandwidth and integrated GDDR5 main and GPU RAM?

Because the consoles are going to be designed such that the cores can communicate very efficiently over shared memory in RAM.

It would make sense that this is happening, as it's very HSA-like in nature. AMD has said multiple times before that their goal is building blocks which can be used like Legos. If AMD wants HSA where they can make chips with all sorts of different pieces (i.e. 2 GPU blocks with 2 CPU blocks), they need to make sure *everything* has access to data via the most bandwidth it can get.

Also, as for the 20% to 40% increase, I wouldn't be surprised if that increase *only* came from clock increase and the support for 2133 memory.

AMD can't get people to put good RAM in their APU systems. Imagine how good Richland will look when they compare Trinity with 1333 MHz RAM to Richland with 2133 MHz RAM. It will look like AMD came up with graphics improvements out of nowhere that blow away the old system, when all it is is a RAM upgrade, because OEMs weren't doing it right in the first place.

AMD is starting to execute properly. See their new gaming bundles for reference.
 
Oh, right!

Another important point we didn't bring to the table, gamerk.

The true 64-bit exes and libs. I'm sure we're still being constrained by the 2GB barrier imposed on executables. I'm sure if we go 64-bit once and for all, we'll start seeing more threading being done. And why? Because the registers actually get used for 64-bit ops, plus the amount of data you can chunk at a time goes up.

Cheers!

EDIT: Typo.
 

noob2222


The answer is staring you in the face. You can't upgrade the graphics past what's already there, and 3 cores can max it out no problem. Testing an Xbox 360 + GTX 690... not going to happen.

More graphics requires more CPU, and that's only available on the PC. Xbox 360 = limited graphics settings, which in turn limits the CPU performance required.

It's much easier to code down; you just eliminate high-end settings. So was it coded for the Xbox, or for the PC and dumbed down to work on the Xbox? The door goes both ways, and you can usually tell which is which.

Skyrim, Dead Space, etc. are pretty much console ports to the PC, limiting the performance of the game itself.

BF3, Metro 2033, etc. are likely (see below) PC games dumbed down for the console.

The door goes both ways; why indeed would a game developer ignore the PC market and the advantages it offers?

As most of you knew before watching the PS3 premiere, EA DICE had already stated that the lead platform of the game will be the PC version.
http://www.product-reviews.net/2011/06/17/battlefield-3-pc-vs-ps3-graphics-comparison-easy-winner/

Software devs are going to love porting from x86 consoles to x86 PCs. It will be largely the same code, and deployment will be much simpler.
 
If people remember, games at one time were CPU limited, and became much less so after C2D.
The consoles were done before then, with much lesser CPUs.
Look at the progress of CPUs since C2D.
Look at the progress of GPUs.
The equalizer is resolution and BW.

Having good BW with lots of GDDR on board brings this back to even.
Knowing there's much growth still to come in GPUs, and knowing the last huge jump on CPUs was C2D, very likely the last such jump, means only one thing: more cores.
It won't be easy, but what else can we do?
 
We can write more efficient software. So much commercial software still has legacy baggage that drags it down for no reason. Also, many people are still using old software; people still cling to Windows XP.
 

I read another rumor suggesting two 4-core Jaguar CPUs in an MCM package, sort of like dual-CPU Opterons. :sweat: :sol:
 


For the last bloody time: WE DO NOT CODE SOFTWARE THAT WAY! Core loading is left entirely up to the scheduler on PCs. We create threads and leave it up to the scheduler to figure out which core to put each thread on. We do not use hardcoded thread logic that assigns specific threads to specific cores, because:

a: You cannot guarantee that some other process-heavy program isn't running, and thus cannot make any assumptions about the availability of CPU resources.
b: Decades of experience have shown that developers are not smarter than the Windows scheduler.

And we've all seen how low effort a lot of PC ports can be. If companies do not change their menus from being controller optimized and are leaving out in game graphics options, why do you think they would go back and re-write code to be multi-threaded for PC?

Games are already using upwards of 80 threads in some cases, and I haven't seen a game using fewer than 40 in years (based on my game library, anyway); you can confirm this easily enough via Task Manager [just add the "Threads" column]. So the basis of this argument is wrong.

We don't see games use a lot of threads because Xbox 360 can't use a lot of threads.

The 360 can process more threads at a time than an i5 2500K: 3 cores with 2-way SMT support, for a total of 6 threads. Same with the PS3 [6 SPEs available for developer use, though memory usage is the main limiting factor on the PS3]. So again, your entire argument is wrong.

http://www.gtaforums.com/index.php?showtopic=424226

GTA4 was constantly called a shitty PC port by PC gamers, and look at its CPU usage. Scaling stops after 3 cores. Xbox 360 has 3 cores. Do you think that is coincidence?

Yep, for reasons I've already explained, several times, for the past 5 years.

And I note that EVERY Rockstar title in existence has had a shitty PC port. GTA IV was hardly the exception to that rule.

If PS4 and the new Xbox have 8 cores, I can guarantee you we will see game engines start to use 8 threads.

They already use a good couple dozen. And even if you mean at the same time, you would still be wrong, again, for reasons I've explained.

Why do you think there are all these crazy rumors about next gen consoles having ridiculous system bandwidth and integrated GDDR5 main and GPU RAM?

Because the consoles are going to be designed such that the cores can communicate very efficiently over shared memory in RAM.

Could be a few reasons, actually, the most obvious being to ensure that memory constraints aren't going to be a major bottleneck this time around [the PS3 especially]. The iGPU the PS4 is rumored to have could also be why, since AMD's iGPUs tend to scale with memory performance.

I'm waiting on the final specs before I make a real read of the console design; we only got confirmation of x86 about a month ago...

Software devs are going to love porting from x86 consoles to x86 PCs. It will be largely the same code, and deployment will be much simpler.

Well, it will be easier, but there will still be a LOT of console-specific stuff in there. For example, you'll still probably have to rip out the fine-threading logic, because remember, on PCs you are not the only one running. [That's the one big advantage of fixed hardware: at any given time, you know EXACTLY which resources are in use and which are free. On PCs, you cannot assume you have access to any computing resources.] You'll also likely have to rip out the entire memory-management system [though it's possible memory could be addressed in a similar way as on PCs, I suspect most devs will still handle allocation manually on consoles, regardless of how much RAM is put in, out of habit].

But yes, not having to rip out every PPC opcode will make porting a LOT easier overall; there are still going to be sections of code that need rewrites, though.
 


Not really. While moving to 64-bit and getting free of the 2GB address space will likely help with the SCALE of things you can do [more than, say, a dozen enemies at a time, for instance], it's not going to have any impact on the amount of threading done.

I also note that while 2GB may be addressed to the application fairly quickly, the actual amount of RAM in use at any one time is quite small. If anything, moving to 64-bit would greatly reduce memory as a performance limiter, but I'm not sure you'll see any performance benefit per se.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860

Maybe one of these days Windows will come up with core locking: heavy-workload threads get locked to a core so that smaller workloads in the same program, or other programs altogether, aren't put there to share resources.

Programmers would still have to tell the OS that a particular thread is to be locked alone; Windows then puts it on an unlocked core X, and nothing else can use that core until it's unlocked.

Should be easy to implement; it's basically just the opposite of core "parking", where nothing is allowed to use the parked core (usually HTT or CMT cores are the ones parked).

Of course, though, this would be disastrous on dual-core processors, as there aren't enough cores to lock. Windows could be told not to allow locks on more than 50% of the cores or something.

Aside from that, how do games like Skyrim limit core usage to a maximum of 4, if software is given free range by the OS and the OS has 8, 12, or more cores (3970X) at its disposal?
 


But here's the issue with that: what happens if the thread stalls, possibly for an extended period of time (say, a few ms)? The core isn't doing any work now, so is it free to use? If so, what happens when the thread resumes? Likewise, if you lock the core, taking a quad as an example, every time the thread is doing nothing (say, if it's blocked on a software lock, or waiting for data to move from the HDD to RAM), you've just eliminated 25% of your total maximum processing capability for that duration.

Not quite as simple as it seems, is it? It's a perfect example of a "kinda works, in nominal situations" solution, or what I call "doesn't work, even if it gives the appearance of working sometimes" software.

Aside from that, how do games like Skyrim limit core usage to a maximum of 4, if software is given free range by the OS and the OS has 8, 12, or more cores (3970X) at its disposal?

They don't. If you looked at a program that showed which cores the threads are running on, you'd see that all of them are being used, just not at the same time, or not very much. In theory, you *could* use SetProcessAffinityMask to limit the number of cores for the entire process, but I can't think of a reason why you would want to do this...

[I'd REALLY be interested to see review sites start to incorporate something like GPUView into their reviews, just so we can see how many threads are actually doing any real amount of work, and how often they jump cores.]
 

noob2222


So you're saying programmers won't know which threads to schedule on a locked core? Could be the main problem right here if programmers aren't taught what their threads actually do.

Obviously you wouldn't lock light workload threads.
 


More than "knowing" or not, it's a philosophical thing: you're supposed to "trust" the OS to know how to properly schedule things. And you're still depending on the OS API to actually access resources in a low-level manner. You can read about "user space" for more info on that.

And gamerk, don't dismiss 64-bit that easily for multi-threading. You'll have a bigger address space in which to actually allocate resources for threading. On the calculation side, you'll be able to have more variables in memory at a given time to consume. You have to look at it from that POV. I know it won't be a huge deal, but it will be like the transition from 16-bit to 32-bit. All in all, it's not a game changer, but it adds to the party, hahaha.

Cheers!
 


I haven't used Windows on my own machines for a while, but isn't pretty much everything in Windows land 64-bit nowadays, as even businesses have finally moved from 12-year-old XP to Windows 7, and pretty much all hardware has been 64-bit capable for the better part of a decade?
 


Nope.

It's the same crappy deal we had with Win95 until version C, I think. Maybe it was 98... I don't remember well.

Most libs and code are still 32-bit. Just a handful of libs and other code are native 64-bit. Same as the libc and kernel deal in GNU/Linux. There are wrappers for some things.

It's a necessary evil, but oh well. It's just such a slow transition!

Cheers!
 

BeastLeeX

Distinguished
Dec 13, 2011
431
0
18,810


Where do you see this? It talks about Kaveri, which uses the Steamroller core, but it is an APU. I would love to hear news about Steamroller; got a source?
 
AMD already confirmed it's coming out in 2014 at the earliest; which month remains to be seen.

Anybody who says otherwise is full of baloney, considering that's an official statement from AMD. They can always release it sooner, but officially it's 2014.

 