AMD CPU speculation... and expert conjecture


mayankleoboy1

Distinguished
Aug 11, 2010
HD 7770, eh? Now I know which graphics card will replace my GTX 580.

With so much multitasking and stuff, the Xbox has a modern OS. Meaning a high-level API. Meaning it should be easier to port games, or create a 'third party' emulator. :D
Any chance MS will provide a software layer in Win8/Windows Blue that can run Xbox stuff?
 


Fine, then check the individual .dlls. No reason you can't do that [since a .dll is basically an .exe, minus the "main()" method].
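For anyone curious, here's a minimal sketch of what that looks like on the Windows side (my own illustration; the exported function is a made-up example). Instead of main(), the loader calls DllMain, and everything else is ordinary compiled code, same as in an .exe:

Code:
// Minimal Win32 DLL skeleton: DllMain replaces main() as the entry point.
#include <windows.h>

BOOL APIENTRY DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    // Called by the loader on process/thread attach and detach.
    return TRUE;
}

// A hypothetical exported function, just to show what you'd "check"
// when inspecting an individual .dll.
extern "C" __declspec(dllexport) int AddNumbers(int a, int b)
{
    return a + b;
}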

I'm going to say it one last time: 95% of all developers use some form of MSVC, almost exclusively. If you need cross-platform, then we typically use GCC for the final release (though we still tend to use MSVC during development, due to its much stronger debug capabilities. Might change if LLVM/Clang advances a bit more...)
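(For what it's worth, juggling both compilers in one codebase usually comes down to little preprocessor shims like the sketch below; _MSC_VER and __GNUC__ are the standard predefined macros, the rest is just an illustration.)

Code:
// Picking compiler-specific keywords when the same code builds under MSVC and GCC.
#if defined(_MSC_VER)
    #define FORCE_INLINE __forceinline
#elif defined(__GNUC__)
    #define FORCE_INLINE inline __attribute__((always_inline))
#else
    #define FORCE_INLINE inline
#endif

FORCE_INLINE int Square(int x) { return x * x; }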
 

More or less in line with my expectations on the GPU side.

The 8GB of RAM is more or less explained now: task switching, e.g. switching seamlessly between games. So I'd wager there would be support either for 2 games at a time (4GB of RAM available per game) or 4 (2GB per game), minus some overhead for the OS (256MB or so, assuming the Windows kernel is somewhat streamlined).

While significantly more powerful than the 360's GPU (a 1950XT with some 2000-series elements), I'm worried games will be coded similar to how they are for PCs, at a MUCH higher level, leading to significantly less optimized games, leading to the GPU going obsolete a lot faster.

And for those interested, the PS4 seems to have more or less the same specs (slightly more powerful, slightly easier to code for, by all reports).
 

truegenius

Distinguished
BANNED
8-core CPU!

So does anyone think the old six-core Thuban can beat the dual-core i3 series in gaming (Tom's gaming CPU chart) and make it to at least one level above the i3 (currently they lie at the same level)?

Also, does it mean that newer games will use more cores effectively/efficiently?
 

BeastLeeX

Distinguished
Dec 13, 2011



Your second question is always a heated argument, so only time will tell. I linked a Hardware Canucks review where, in Deus Ex, the more cores the better the game ran: the lower-clocked 1100T was beating the higher-clocked 980, and the FX-8350 was up there with the i5-3570K.
 
Four cores utilized effectively across all titles, for both manufacturers, should be the modus operandi. Obviously that's a perfect-world scenario and it doesn't happen: very few games utilize four cores well, and then you have one game that is better optimized for it while another isn't. Universalizing performance should be the status quo. Not saying that processors game the same, but that they should maximise core usage with little wastage.

That requires the line to be toed on both sides; perhaps it's just developers that need to play catch-up.
 
Well, from the other standpoint, life is not just about gaming. The more lucrative markets are those where the end consumer benefits from more cores, and such a CPU is not designed with gaming in mind, so it's about the fine lines.

All things said, CPUs are probably not being utilized properly at lower thread counts; as above, everyone needs to toe the line.
 


It is. And this has been known for about 30 years now. There are simply too many processing steps which have to be done, in order, to make games easy to thread in a manner that results in a performance benefit. The parts of the game that are parallel (rendering and advanced physics) are already offloaded to the GPU, leaving the CPU with little more to do except run the main game engine. Hence why most games start to see less scaling after the second core, and almost no scaling after 4: one thread handles the main game engine, one handles rasterization. Any remaining threads carry a SIGNIFICANTLY smaller workload, so they don't register big numbers in Task Manager. [In a 50-thread game, I'd expect 2-3 threads, at most, to account for 99% of the workload.]

Secondly, going out of your way to balance core usage is silly and performance-wasting, so to some extent you are always going to see some degree of uneven core usage, even in tasks that do scale well.

Third, Task Manager has a maximum resolution of 1 second per sample. Which raises a theoretical question: if you have 1 thread that runs for 1 second total over that second, but jumps cores four times over that period, you get a Task Manager usage graph that could look like this (assuming no other intensive threads from other processes are running):

Core0: ~40%
Core1: ~35%
Core2: ~20%
Core3: ~5%

Oh wait, I've described just about every game released in the past 5 years or so... Hence why I am always skeptical of Task Manager numbers, as resolution is a significant issue. [Same basic argument I have against FPS as a gaming benchmark.] Unless you know how many threads are actually run, which ones do a significant amount of work, and what cores they are dispatched to, Task Manager usage graphs tell you NOTHING.
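To make that concrete, here's a tiny Windows-only sketch (my own, not from any game) of one busy thread reporting every time the scheduler moves it to a different core. Each migration is exactly what smears a single thread's load across several bars in Task Manager:

Code:
// One busy thread, printing whenever the OS moves it to a different core.
#include <windows.h>
#include <cstdio>

int main()
{
    DWORD lastCore = GetCurrentProcessorNumber();
    std::printf("started on core %lu\n", lastCore);

    for (long long i = 0; i < 500000000LL; ++i)   // a few seconds of busy work
    {
        DWORD core = GetCurrentProcessorNumber();
        if (core != lastCore)                     // the scheduler migrated us
        {
            std::printf("moved: core %lu -> core %lu\n", lastCore, core);
            lastCore = core;
        }
    }
    return 0;
}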

Tools like GPUView, which show you this information, allow you to see, at least, how many process-heavy threads are being run, how often they run, how much work they do, and what cores they get dispatched to. Very few people have used this in any real way for games yet, though if I can get the SDK to properly install, I might do some investigation in this area in the future... But just a few quick examples (since noob won't let the SC2 argument die a quiet death):

http://graphics.stanford.edu/~mdfisher/GPUView.html

[Screenshot: GPUViewLargeRenderTime.png]


Note SC2 does the majority of its work on two threads (light-workload threads aren't displayed by default). But notice how many times the thread jumps cores (different color = different core), which is to be expected to some extent (the OS can come and kick out a user thread anytime it wants to). And note the second thread doesn't begin until later in the processing cycle (probably the render thread, based on the fact the render entry is d3d9.dll). I would imagine Task Manager would look pretty close to what I described above: two threads loading three cores to some extent. Hence why I don't view Task Manager as a really good indication of threading within an application.

BTW, this is WoW, which is a LOT worse in this regard:

[Screenshot: GPUViewWoWAllThreads.png]


Probably due to a CPU bottleneck, but still, one thread doing 99% of the work... [But again, without knowing the HW, it's kinda hard to know what's REALLY going on.]

---------------------------------

My point is basically this: Our way of evaluating games (FPS, Task Manager graphs, etc) is hopelessly flawed, and I'm glad sites are starting to move in the right direction in regards to testing (looking at frame-by-frame latencies, GPUView usage statistics, etc).

And yes, I went overboard on this one. I know. (REALLY slow day at work today.) I'll do some testing if I can get GPUView and the Windows SDK to install over the weekend; anyone have any games they want me to look at? (I'm probably going to do L4D2 as a test, to see how many threads are REALLY doing work. Task Manager would indicate four or more; I'm less convinced...)

If it's so parallel-threading happy, y u no offload to the GPU? Will save thread-sync headaches for the programmer.

Three reasons:

1) The GPU is already burdened with Rendering (and in some cases, physics). Giving the GPU more work would slow the entire process via a GPU bottleneck.
2) GPU shaders are relatively weak compared to a CPU core, so for non-scaling tasks, performance is SIGNIFICANTLY weaker.
3) APIs that give visibility to the GPU resources are not widely used: CUDA is NVIDIA-specific (even if it IS a powerful API, it won't be used for commercial software if another path needs to be made for non-NVIDIA GPUs), OpenCL is not widely adopted, etc.
 

truegenius

Distinguished
BANNED
To monitor CPU usage you can try HWiNFO32/64.

In HWiNFO32/64 we can change the scan interval to as low as 100 ms, and it records many entries like voltages, RAM usage, VRAM usage, individual core usage, etc. It also supports graphs and logging for each entry, and has a clean and user-friendly interface.
And it's freeware ;)

Really awesome monitoring tool.
(It is a sensor-monitoring tool only; it does not show threads.)
 

m32

Honorable
Apr 15, 2012



You're going to go crazy explaining how games work/are threaded/ported/made/etc. The only thing we want to see is our gaming machines using all cores, so if a game can only use 8 cores decently, then let it use my 16 cores poorly. It doesn't matter if all cores aren't getting pushed to the max..... MOAR CORES = BETTER EXPERIENCE!!!!!!!!! (JK, but you know we all think it) :pt1cable:
 
No, until games are optimized for more than 4 cores, your computer only gets the game performance of 4 cores if the game is only fully optimized for 4. It's all about coding games for multi-threaded/multi-core usage.

I got an 8350 so I can multitask like a crazy man, but that's me.

 


Yeah, lowering the interval would help, but seeing where the threads are dispatched is the be-all and end-all. No way to do that without hooking into the Windows SDK though...



I know that, of course. Won't stop me from arguing the point though...

No, until games are optimized for more than 4 cores, your computer only gets the game performance of 4 cores if the game is only fully optimized for 4. It's all about coding games for multi-threaded/multi-core usage.

Again, developers don't "optimize" for ANY number of cores; we create threads, and let Windows schedule them to cores. That's it. If Windows thinks it gets the most performance by putting 100 threads on the same core, guess what? That's exactly what's going to happen.
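For illustration, this is roughly all that "threading a game" means from the developer's side (a bare-bones sketch with made-up function names, standard C++11 threads). Note there's no core affinity being set anywhere, so where these threads land is entirely up to the OS scheduler:

Code:
// Create the threads, then let Windows decide which cores they run on.
#include <thread>
#include <chrono>
#include <atomic>

std::atomic<bool> running{true};

void GameLogicLoop() { while (running) { /* update the simulation */ } }
void RenderLoop()    { while (running) { /* build and submit draw calls */ } }

int main()
{
    std::thread logic(GameLogicLoop);
    std::thread render(RenderLoop);

    std::this_thread::sleep_for(std::chrono::seconds(5));  // "play" for a bit

    running = false;
    logic.join();
    render.join();
    return 0;
}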
 

lilcinw

Distinguished
Jan 25, 2011
Actually that says that a reported tablet controller was a hoax. The 8-core rumor came from multiple different sources apart from the one claiming to know about the 'X-Surface'.
 
If it's impossible to implement threads spread across cores, how come quad cores are better for gaming than dual cores now, when that wasn't the case 4 years ago?

And why is it that some games get half the FPS when running on a dual core compared to a quad core? Isn't it supposedly impossible to thread the game for more cores?
 
Honestly, multiple-core gaming systems are pointless. I grew up and stopped gaming, so most of the workloads my CPU goes through are designed to operate on the principle that more cores are better, which is why all my high-threaded chips perform these functions extremely well.

With consoles performing their jobs well, and with the new generation almost mind-blowing, the purpose of a gaming PC is becoming less pertinent. AMD and Intel are not interested in making gaming-priority CPUs; the focus is efficient general purpose. If you want a game to scale well at 120Hz and look impressive, get an Xbox 720 and a 76" home theatre system.
 
Got to see an A10-6800K today; just wow. It may seem a cosmetic performance improvement, but the better IMC prioritization and tweaked cache latency make a world of difference from just modest additions. It's a worthwhile update to a low-cost platform.


I was thinking about this after yesterday's article on Tom's with the gaming and that FUD, but AMD needs to drop the FX 4000 SKUs, which really have no place. The APUs practically deliver better general-purpose performance, gaming on discrete is about the same, and the FX-6300 is like $5 more than the FX-4300, which is just not a good processor and represents shocking value for money. AMD, please remove this generation's SKU; there is no reason for it.
 

jdwii

Splendid



That was quite possibly the worst and most ridiculous thing said on this forum, and coming from you it's just not right. Gaming PCs are and always will be the best, and an Xbox 720 is not going to be able to play any game at 4K 120Hz without everything on especially low settings. Movies maybe, but not games.
 

m32

Honorable
Apr 15, 2012



So if AMD is making an FX 8000-series chip and it fails as an 8000- or 6000-series part, what do you want AMD to do? I would prefer they sell it if it meets 4000-series specs. I think it should be at a lesser price so the A10 and FX 4000 series are different in people's minds. I think more people would upgrade to 32nm tech also (yes, your Phenom is old), but I don't know exactly how much it costs to produce the chips.... still, it would be better than just throwing them away/recycling them.
 


It requires a completely different thought process during the design phase. Humans tend to think serially, so when we map out logic algorithms (the part of code that does the heavy lifting) we tend to do them in a serial fashion: question C cannot be answered until both questions A and B are answered, and so on and so forth. We unintentionally create bottlenecks this way by our very nature. In order to really multithread code you need to start at the design phase, before you've pounded a single key, and ensure that every step and every interaction is done in a way that could run in parallel with other methods. It takes a lot of work to consciously avoid creating bottlenecks where all your work is being done by a single thread, or where you have a multitude of threads all stalled waiting for a single thread to respond. It's not lazy coders at all; it's just really difficult for people to bend their brains that way.
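As a toy illustration of that "C needs both A and B" point (my own sketch, with made-up function names): if A and B really are independent, the design lets them run in parallel, and C becomes the one unavoidable join point.

Code:
// A and B have no dependency on each other, so they can run concurrently;
// C cannot start until both have finished.
#include <future>

int AnswerA()             { return 40; }
int AnswerB()             { return 2; }
int AnswerC(int a, int b) { return a + b; }

int main()
{
    auto a = std::async(std::launch::async, AnswerA);
    auto b = std::async(std::launch::async, AnswerB);

    int c = AnswerC(a.get(), b.get());  // the join point
    return c == 42 ? 0 : 1;
}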
 