AMD CPU speculation... and expert conjecture

As to who gets paid what, it's as speculative as this speculative thread, but while some see downsizing, the more accurate term is restructuring.

The future of AMD certainly looks a whole lot better than it did 16 months ago, and now we have the joy of speculating on the great old Steamroller mystery. If I have to take a flyer, it may still be slower than a new Intel part; by how much, I will just wait until something more than fodder comes out.
 

jdwii

Splendid



It's about time AMD learned its lesson. I already knew GlobalFoundries would mess up.
 

butremor

Honorable
Oct 23, 2012

Oh, I didn't know the 6300 supports 1866 right away.
But why a 990 mobo and not a 970?
 

anxiousinfusion

Distinguished
Jul 1, 2011


Educate me if I'm wrong, but I thought that the 8350 was incapable of using anything higher than 1866MHz? Is there some kind of workaround needed to achieve this?
 

griptwister

Distinguished
Oct 7, 2012


A motherboard and RAM that are capable. Whoever told you that you can't go higher was incorrect. *edit* Oh, and with two sticks of RAM the mobos tend to recognize the 1866 memory automatically, but once you max out all 4 slots you'll get 1600MHz and, to get past the automatic settings, you'll have to OC it yourself.
 



You can very well go with a 970, but at mid level you look for features that complement the price bracket, and a lower-cost 990FX board offers better features and connectivity to complement the chip.

Educate me if I'm wrong, but I thought that the 8350 was incapable of using anything higher than 1866MHz? Is there some kind of workaround needed to achieve this?

This is incorrect; the chip's native speed is 1866, but that doesn't limit you to it. Apart from the Crosshair V and UD7, none of the boards really support speeds over 2133/2200, so one would have to overclock beyond that point. On the 990FX, 2133MHz is the most you will get out of native RAM SPDs, and the difference over 1866 is a few FPS or some time shaved off for very little extra cost, so it's worthwhile.

 
That wouldn't be a good investment, particularly from scratch. I'd take nothing lower than an 890GTD, and that's pushing it. A decent 970 costs $100 without rebate, so why cheap out on a mid-level build when a 990FX GD80 or UD5 costs around $140? If it were pure budgeting then fair enough, but I included a budget, mid-level and extreme platform for those very reasons; obviously it is flexible enough to work into a budget suited to each individual.
 

truegenius

Distinguished
BANNED
Since we are talking about games/benches, here goes my experiment. Many games are well threaded but do not use more than 4 cores.

Experiment:
I downclocked my 1090T to 1GHz and then ran some games, and I found in the graphs that CPU usage was near 60% even in CPU-bound games like Call of Duty: Black Ops (it loads 3 cores to full at 3.6GHz but still does not use more than 4 cores). So I compared the CPU time used by games when I ran my CPU at 1GHz on 6 cores versus 1GHz on 4 cores only, and I found that the CPU time was almost the same (tested a few games only).

Voilà, a new thing to add to game reviews :D
"multi-core efficiency" :p
 


Which is exactly what I would expect. Because none of the cores are reaching 100%, you shouldn't be running into any situations where the CPU is bottlenecking you*. In short, as "bad" as the scaling is, you still have PLENTY of headroom to work with.

*Note that the 1-second sampling in Task Manager isn't particularly accurate, but it's "good enough" for the purposes of this discussion. I'm sure there are a few ms where the CPU is overworked, but you'd be hard-pressed to find it via benchmarking.

So the point being: the game scales "well enough" for your CPU, where it's not reaching a CPU bottleneck and where further threading would not lead to any more performance gains.
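(On the sampling point, a rough sketch like the one below polls GetProcessTimes for a given PID every 100 ms instead of relying on Task Manager's roughly one-second refresh, so short bursts where the CPU is actually maxed out show up instead of being averaged away. The PID argument and the 100 ms interval are just assumptions for illustration.)

Code:
// Hypothetical sketch: sample a process's CPU time every ~100 ms to catch
// sub-second spikes that a 1-second Task Manager refresh averages away.
#include <windows.h>
#include <cstdio>
#include <cstdlib>

static ULONGLONG ToTicks(const FILETIME& ft)  // FILETIME ticks are 100 ns
{
    return (ULONGLONG(ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
}

int main(int argc, char** argv)
{
    if (argc < 2) { std::printf("usage: cpuspike <pid>\n"); return 1; }
    HANDLE proc = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE,
                              static_cast<DWORD>(std::strtoul(argv[1], nullptr, 10)));
    if (!proc) { std::printf("OpenProcess failed (%lu)\n", GetLastError()); return 1; }

    SYSTEM_INFO si;
    GetSystemInfo(&si);                      // logical core count
    const double interval_ms = 100.0;        // Sleep() is approximate; fine for a sketch

    FILETIME c, e, k0, u0, k1, u1;
    GetProcessTimes(proc, &c, &e, &k0, &u0);
    for (;;) {
        Sleep(static_cast<DWORD>(interval_ms));
        GetProcessTimes(proc, &c, &e, &k1, &u1);
        // CPU time burned in this window (ms), summed across all cores.
        double used_ms = (ToTicks(k1) + ToTicks(u1) - ToTicks(k0) - ToTicks(u0)) / 1e4;
        double pct = 100.0 * used_ms / (interval_ms * si.dwNumberOfProcessors);
        std::printf("%5.1f%% of total CPU in the last %.0f ms\n", pct, interval_ms);
        k0 = k1; u0 = u1;
    }
}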
 

cgner

Honorable
Aug 26, 2012
^That. Porting/base programming is often to blame, not the CPUs. We all remember GTA 4, the first Crysis, and Shogun 2. They all wanted juice from CPUs but only used 1-2 cores :/
 

3ogdy

Distinguished


I'm not sure what you mean by "We all remember GTA 4... They all wanted juice from CPUs but only used 1-2 cores". I currently own a Phenom II X4 965, and CPU usage is 95% on Core 0 and between 85-90% on THE REST of the cores, which means it actually makes use of all 4 cores this CPU has.
The recommended system requirements actually state you'd need a quad (Q6600, if I'm not wrong) in order to achieve the best gameplay experience.
I constantly monitor Core usage and it's clear enough for me that GTA IV uses all 4 cores on my 965.
 


I've yet to run a game that runs significantly faster with any number of affinity settings. Granted, it may be preferable, if a game doesn't scale beyond a few cores, to disable the mask for CPUs 0 and 1, as those two cores get the majority of the normal OS workload (and steal cycles from the application in question). But I can count on one hand the number of times toying with the affinity has led to a noticeable performance gain under a low-workload desktop condition.

I wish there were easily available debug tools out there so I could show, in an easy to read format, the number of threads games use, and the number that do any meaningful amount of work.
 
I should also note that most scheduling algorithms are concerned with maximum throughput, not load balancing. As a result, once a thread gets assigned to a particular core, it is very unlikely it will ever be moved to a different core (that said, assuming it never moves is incorrect). The reason is that the core-local cache (typically the L2) will likely still hold some resources the thread needs, so you reduce the time needed to do its operations.

As a result, the first two cores typically end up doing the bulk of the work. Some programmers then try to "outsmart" the scheduler by doing something akin to:
Thread1:=Core0
Thread2:=Core1
because Thread1 and Thread2 are reasonably independent. Never mind how many cores are actually in the system, or that any thread spawned by Thread1 or Thread2 will now have its core preference set to the two most overworked cores in the system. [If the programmer was REALLY stupid, they also set the ProcessorAffinityMask to run on just Core0 and Core1, which is even stupider.]

In that case, yes, setting the processor affinity could help if two cores get overloaded.

[For those interested: when I did have two reasonably independent, high-workload threads that I wanted to ensure go on different cores, I saved the result of GetCurrentProcessorNumber (Vista/Server 2003 or higher only) for the first, and set the mask for the second to disable JUST that one core, so it can run on any core EXCEPT the one the first thread is running on. I then set the first thread's ideal processor (via SetThreadIdealProcessor) to the core it was last running on, to try to keep it running on that core if at all possible. Not ideal, but it gets the job done.

That being said, I never do this until I benchmark the problem and see that going through all this is actually needed, performance-wise.]
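(For the curious, a rough sketch of that approach might look like the code below. The worker names and overall structure are mine for illustration, not lifted from any real codebase, and as said above you'd only bother with this after profiling shows the two heavy threads actually end up fighting over the same core.)

Code:
// Sketch: keep two heavy, independent threads off the same core without
// hard-pinning either of them (Win32, Vista/Server 2003 or later).
#include <windows.h>
#include <atomic>

// Slot where the first worker reports which core it started on.
static std::atomic<DWORD> g_workerACore{MAXDWORD};

DWORD WINAPI HeavyWorkerA(LPVOID)
{
    // GetCurrentProcessorNumber reports the core the *calling* thread is on,
    // so the worker records it itself.
    g_workerACore.store(GetCurrentProcessorNumber());
    // ... long-running, high-workload task A ...
    return 0;
}

DWORD WINAPI HeavyWorkerB(LPVOID)
{
    // ... reasonably independent, high-workload task B ...
    return 0;
}

void LaunchIndependentWorkers()
{
    HANDLE a = CreateThread(nullptr, 0, HeavyWorkerA, nullptr, 0, nullptr);

    // Wait until worker A has reported its core.
    while (g_workerACore.load() == MAXDWORD)
        Sleep(1);
    const DWORD coreA = g_workerACore.load();

    // Hint the scheduler to keep A where it already is (cache still warm),
    // without forbidding a move if that core gets swamped.
    SetThreadIdealProcessor(a, coreA);

    // Start B with an affinity mask allowing every core EXCEPT coreA.
    DWORD_PTR processMask = 0, systemMask = 0;
    GetProcessAffinityMask(GetCurrentProcess(), &processMask, &systemMask);
    const DWORD_PTR maskB = processMask & ~(DWORD_PTR(1) << coreA);

    HANDLE b = CreateThread(nullptr, 0, HeavyWorkerB, nullptr, CREATE_SUSPENDED, nullptr);
    SetThreadAffinityMask(b, maskB);
    ResumeThread(b);

    WaitForSingleObject(a, INFINITE);
    WaitForSingleObject(b, INFINITE);
    CloseHandle(a);
    CloseHandle(b);
}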
 
http://benchmark3d.com/microstutter-case-study-skyrim-wtf-edition
How about that? All good now with just a few clicks. Remember that the above test was done at Ultra settings, 1920×1080. The microstutter pattern is all gone while still enjoying the highest possible graphics quality. Worth mentioning is that the performance measured in FPS was exactly the same between the first test and this one… but the gameplay smoothness is greatly improved.

 

jdwii

Splendid



I've tested GTA4 many times and it uses 3 cores, not 4; what you are seeing is something called core jumping. Download Fraps, go to Task Manager and enable only 3 cores for GTA4, then watch your FPS; then enable all 4 and you will notice that the FPS stays the same.
 

3ogdy

Distinguished


I'm not sure about that but I will definitely give it a try.
I don't know whether playing in Eyefinity would make it throw more load on the CPU, as is obviously the case with the GPU.
Still, I get around 55FPS in Eyefinity with the 6950 in charge of the graphics, but I'm still thinking of upgrading the CPU (to another AMD, obviously; the Sabertooth R2 still has a lot of life left in it - mine is not even half a year old, after all).
 

noob2222

Distinguished
Nov 19, 2007

That's crazy, just tried it: FPS stayed the same but it looks a little more fluid. I had some graphical glitches though; items would disappear right in front of me on the screen. I could still loot them, but they weren't visible.
 


Today's continuation: http://benchmark3d.com/microstutter-case-study-skyrim-diving-deeper

And regarding the build:

Intel Pentium G860: USD$75
MSI H61M-P31 (G3): USD$50
G.SKILL NS Series 4GB 1333MHz x2: USD$42 ($21 each)
SAPPHIRE Radeon HD 7770: USD$115

From Newegg today. I have to say I assembled that little thing for a niece of mine this Christmas; it's quite the competent little machine.

Cheers!

EDIT: Forgot the RAM price.
 


I had a feeling you were going to go to the really cannon-fodder end, which I tried to avoid; I was looking at a budget with some bells and whistles to go with it, hence not cheaping out too much, though I could probably shave off a lot if needed. I do agree it is a good enough setup for a family machine, for kids or mum, but it lacks a bit of the Midas touch when it comes to feeling like a gaming rig.

G860 vs A10

Intel does hold the x86 performance advantage, but the gap is closing. In gaming terms the A10 has shown, on discrete cards, performance between an i3 and a lower-end i5 on a miserly budget, with enthusiast features to go with it; it makes up a lot of the drawback in single-threaded x86 metrics by advancing multi-core performance and HSA, along with a much-improved IMC.

My feeling here is that the G860 is cheap and will win in some instances, but it is about $50 less than an A10 and $30 less than an A8, so it's hard for me to say which is preferable. I do think that if you are pushing the ultimate budget, i.e. no discrete card, the A10/A8 is light years ahead on value for money.

A85 vs H61

Well, this is too comprehensive a beat-down to even bother comparing. Bear in mind I took an Extreme6 at $100 when I could easily have taken a $70 Extreme4-M with just a few legacy PCI slots removed and the less flashy old-school SATA ports, but otherwise the same motherboard: 4 DIMMs capable of 3200MHz overclocks, 2x PCIe 2.0 slots at 16/8 or 8/8, 7 SATA 6Gb/s, 6 USB 3.0, 3 USB 2.0, HDMI, mini HDMI, DVI and VGA capable of Eyefinity 3, 8+2 phase VRMs, passive VRM/MOSFET and chipset cooling, Lucid MVP, and diagnostic tools.

I think this one is a dead heat on cost only; the A85 at a similar price decimates any Intel chipset that isn't a Z77 or a higher-end Z77.

RAM

Here is where the APU costs you: faster RAM equals massive iGPU performance, so it's worthwhile making an effort to buy a DDR3-2400 kit.

APU + faster RAM = significant performance difference; AnandTech showed that this is not the case with Intel, where only a modest difference is made.

GPU

The APU is very competent with a discrete GPU, either in dual graphics or with a higher-end card. Tests have shown it in some cases matching the i3, while others show it competing between i3 and i5 performance. I have my APU with a 7850 running BF3 multiplayer maxed out at around 50FPS, which is ironically what a 965BE and an i7 920 scored, so it's very impressive for something not deemed impressive.

BUDGET OF ALL BUDGETS

A8-5600k $100
ASRock A85 Extreme 4 M $80
G.Skill TridentX 2400 $70

or

A10-5800k $120

A core gaming system minus drives, power and chassis for $250 or less, which annihilates anything Intel has at the same specs and price bracket; not even an overclocked-to-hell i7 3770K can compete iGPU-wise, not to mention the serious costs involved.

As king of budget, AMD is undisputed. This is not like Real Steel, where a lightweight robot beats a mighty robot for feel-good factor; in this regard AMD is Zeus and beat Atom's head off from the get-go, and there is nothing to challenge it. We will still be comparing Haswell GT to Llano and Trinity and still no champion will be found - watch this space.


 


Can't view it at work; it could be an issue with the threads hopping cores too often, which really should NOT be happening. It could also be some weird lockup within the game engine itself that is not affecting the GPU (basically, the GPU is rendering the same frame multiple times, so the FPS count remains intact, but the game runs horridly). What CPU was used for this test?

And BTW, this goes back to my entire discussion of FPS being a HORRID benchmarking tool; I'm glad noob eventually came around to my point of view on that topic. :D
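(To put a number on the "FPS is a horrid benchmark" point: if you log per-frame times as one milliseconds value per line - you'd have to massage whatever your capture tool writes into that form, so the input format here is an assumption - a tiny tool like the sketch below shows how two runs with the same average FPS can have very different 99th-percentile frame times, which is what you actually feel as stutter.)

Code:
// Sketch: read per-frame times (ms, one per line) and compare average FPS
// against the 99th-percentile frame time, which is what stutter looks like.
#include <algorithm>
#include <cstdio>
#include <fstream>
#include <vector>

int main(int argc, char** argv)
{
    if (argc < 2) { std::printf("usage: frametimes <file>\n"); return 1; }

    std::ifstream in(argv[1]);
    std::vector<double> ms;
    for (double t; in >> t; ) ms.push_back(t);
    if (ms.empty()) { std::printf("no samples read\n"); return 1; }

    double total = 0;
    for (double t : ms) total += t;
    const double avg_fps = 1000.0 * ms.size() / total;

    std::sort(ms.begin(), ms.end());
    const double p99 = ms[static_cast<size_t>(0.99 * (ms.size() - 1))];

    // Two runs with identical avg_fps but very different p99 feel nothing alike.
    std::printf("avg FPS: %.1f   99th percentile frame time: %.1f ms (%.1f FPS equivalent)\n",
                avg_fps, p99, 1000.0 / p99);
    return 0;
}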
 