AMD CPU speculation... and expert conjecture

Status
Not open for further replies.

mayankleoboy1

Distinguished
Aug 11, 2010
2,497
0
19,810


Noob terminology for this is "minimum FPS at one-frame intervals", never mind the extensive API, code, and methods used to tap into the actual rendering pipeline.
But it is all BS because it shows Intel procs are better than AMD procs :whistle:
 


Very odd then. Might be a thread bouncing issue. Really hard to tell without access to a decent set of debug tools...

That being said, I can be reasonably sure that case is the exception, not the rule. If a 2500K/6950 had massive lagginess in Skyrim (or any other GOTY candidate), I'm sure we'd all be hearing about it.
 

mayankleoboy1

Distinguished
Aug 11, 2010
2,497
0
19,810
But the general consensus is that Skyrim has an older engine that scales poorly to multiple cores.
In fact, before the 1.3 (or 1.5?) patch, it was borderline unplayable on procs below a 4GHz clock. If anyone remembers, there was the SkyBoost mod, which improved performance on pre-1.3 (or 1.5?) patched Skyrim.
IIRC, it hardcoded some addresses and functions in a file (Gamerk316 can prolly understand it better) which the modified exe could read. The author of the mod said Bethesda devs "did not do even the most basic of optimisations".
 
^^ Yeah, specifically, they didn't compile the application as LAA (Large Address Aware), meaning it was incapable of using more than 2GB of address space, even on Win64. For a game the size of Skyrim, that's a no-no.
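For anyone curious, the LAA flag is just one bit in the COFF header of the executable, so you can check it yourself. A minimal sketch (assuming a well-formed PE file; the offsets follow the documented PE/COFF layout, and `IMAGE_FILE_LARGE_ADDRESS_AWARE` is the standard 0x0020 Characteristics bit):

```python
import struct

IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020  # COFF Characteristics flag

def is_large_address_aware(path):
    """Check whether a Windows PE executable has the LAA flag set.

    Without this flag, a 32-bit process is limited to 2GB of address
    space even on 64-bit Windows."""
    with open(path, "rb") as f:
        data = f.read(4096)
    # e_lfanew at offset 0x3C points to the "PE\0\0" signature
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_offset:pe_offset + 4] != b"PE\0\0":
        raise ValueError("not a PE file")
    # Characteristics sits 22 bytes past the signature (4-byte sig + 18
    # bytes of the other COFF header fields)
    characteristics = struct.unpack_from("<H", data, pe_offset + 22)[0]
    return bool(characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)
```

This is also roughly what tools that "patch" a game to LAA do: they flip that one bit (and fix up the checksum).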

Given Bethesda's issues getting Skyrim (heck, even Fallout) running on the PS3, though, you get the sense the engine is VERY unoptimized. Not to mention all the bugs it has. [To this day, on three separate machines, two different OSes, and two versions of the game, I STILL can't get Fallout to run without crashing at the "walk to daddy" scene.]
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860

still stuck on that one eh?

But rather than average those out by each second of time, what if you looked at each frame individually?
http://www.pcper.com/reviews/Graph [...] nce-Metric

Metro 2033 also benches in one-frame intervals. Instead of posting the SPF, aka "frame time", it's posted in FPS.
Time (in seconds) / frames = SPF
Frames / time (in seconds) = FPS

Time is not a constant; you can change the value. Frames is not a constant either.

The bottom line is FPS doesn't have to be measured at exactly 1-second intervals; you can measure it over 1 minute, 1 hour, 1 day, or 1 frame.
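The SPF/FPS relationship above is easy to sanity-check in a few lines. A sketch (the frame times are made-up numbers, chosen so one stutter frame hides inside an otherwise smooth run):

```python
def frame_times_to_fps(frame_times_ms):
    """Convert per-frame render times (milliseconds) to instantaneous
    FPS: FPS over a one-frame interval is just 1 / SPF, so a 16.7 ms
    frame reads as ~60 FPS and a 100 ms stutter reads as 10 FPS."""
    return [1000.0 / t for t in frame_times_ms]

def average_fps(frame_times_ms):
    """Average FPS over the whole run: total frames / total time."""
    total_seconds = sum(frame_times_ms) / 1000.0
    return len(frame_times_ms) / total_seconds

times = [16.7, 16.7, 100.0, 16.7]     # one stutter frame in the run
per_frame = frame_times_to_fps(times)
# The whole-run average smooths over the stutter that the per-frame
# numbers expose: min(per_frame) is 10 FPS, the average is ~26.7 FPS.
print(min(per_frame), average_fps(times))
```

That gap between the per-frame minimum and the interval average is exactly why frame-time benchmarking catches stutter that plain average-FPS charts miss.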

As for the Skyrim discussion, why does setting affinity to one core cause graphics dropouts? That makes it an unacceptable "fix".

Interesting, however, that when you set affinity to anything over 1 core, even if it's just 2, it still loads atiumdag.dll 4 times.
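The affinity experiments in this thread were presumably done through Windows Task Manager, but the same test can be scripted. A minimal sketch using only the standard library (note the assumption: `os.sched_setaffinity` exists on Linux, not Windows, so this is the Linux-side equivalent of the experiment):

```python
import os

def pin_to_one_core(pid=0, core=None):
    """Pin a process to a single core (Linux stdlib only; pid=0 means
    the calling process). Defaults to the lowest-numbered core the
    process is currently allowed to run on."""
    allowed = os.sched_getaffinity(pid)
    core = core if core is not None else min(allowed)
    os.sched_setaffinity(pid, {core})
    return os.sched_getaffinity(pid)
```

Pinning a game this way and watching whether FPS (or driver DLL behaviour) changes is a cheap way to probe how many cores it actually exploits.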
 

cgner

Honorable
Aug 26, 2012
461
0
10,810



I kept telling people that Skyrim used 1 core and needed high clocks, but they did not believe me.... :na:
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
Just because a program can be forced to run on 1 core doesn't mean it only uses 1 core; see the notes about the addition of a CPU bottleneck on 1 core on the i5.

1 core = graphics dropout, so how is that "uses 1 core"?

 
If you could get all the performance from 1 core, that would be great. We can't physically do it.

Why did we even make dual-core systems, going by some people's logic? We should have stuck with the P4's approach; we could be hitting 5GHz on 1 core about now. Obviously nobody thinks 2 cores drop efficiency too far to bother.
 


Do remember one point though: if you have a theoretical core that runs at 5GHz and an otherwise identical dual core that runs at 2.5GHz per core, the single core will be slightly faster, simply due to load management and cache management. More cores are simply a way around the fact that CPU performance (IPC and clock speed) isn't increasing as fast as it used to.
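The 5GHz-single-core vs 2x2.5GHz point is just Amdahl's law: the serial fraction of the workload never benefits from the second core. A quick sketch (the parallel fractions are illustrative, not measured from any game):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup from `cores` cores when only
    `parallel_fraction` of the work can actually run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A 5GHz single core is always a clean 2x over one 2.5GHz core. Two
# 2.5GHz cores only match that if the workload is 100% parallel,
# which games aren't:
for p in (0.5, 0.9, 1.0):
    print(p, amdahl_speedup(p, 2))   # 1.33x, 1.82x, 2.0x
```

At a 90% parallel fraction the dual core only reaches ~1.82x, so the hypothetical 5GHz single core wins, which matches the point above.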
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
That's showing a clear GPU bottleneck; lower the GPU power to the point where the CPU does start to show. Look at the 7970: that's 16% over the lowly A10-5800K.

Add more and more GPU power, and the bottleneck moves further and further toward the CPU. The thing is, Metro 2033 is very well coded, but that doesn't automatically mean a "GPU bottleneck" in all systems, even ones with quad-SLI GTX 680 video cards.

http://uk.hardware.info/reviews/2661/9/nvidia-geforce-gtx-680-quad-sli-reviewed-metro-2033

Put in the E6400 and watch it stop at 28-30 FPS. Is that a GPU bottleneck, considering the CrossFireX 7970s pushed 64?

The whole GPU/CPU bottleneck question is only ever looked at with some randomly picked card X. Or, in this case, they pair the video cards with the fastest CPU available so that it doesn't bottleneck, even though at low resolutions the CPU appears to bottleneck the quad setups pretty badly, with quad 7970 actually slower than tri 7970.

Why not show WHERE a CPU bottleneck occurs? For the E6400 and Metro 2033, it appears to start at the GTX 580. Start shutting off cores or lowering clock speed on the 3960X and see when performance starts to drop.

The thing is, bottlenecking is a balancing act between CPU and GPU power. You can't test game Y with low-end video card X and say it's a GPU-bound game no matter what CPU you have; that's just blatantly false for anyone who has a GPU more powerful than what was tested with.

Another reason simple "FPS" testing isn't that accurate. If you're testing CPUs, put in quad GPUs and see where they rank.
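The "find WHERE the bottleneck starts" method suggested above amounts to sweeping GPU power and looking for the FPS plateau. A sketch of that test (the E6400 numbers below are hypothetical, just in the spirit of the 28-30 FPS example; the function and tolerance are mine, not from any review):

```python
def find_cpu_bottleneck(gpu_tier_fps, tolerance=0.05):
    """Given average FPS for one CPU paired with increasingly fast GPUs,
    return the index of the first GPU after which FPS stops scaling
    (within `tolerance`), i.e. where the CPU becomes the bottleneck.
    Returns None if FPS is still climbing at the top tier."""
    for i in range(len(gpu_tier_fps) - 1):
        current, nxt = gpu_tier_fps[i], gpu_tier_fps[i + 1]
        if nxt <= current * (1.0 + tolerance):
            return i
    return None

# Illustrative only: a slow CPU plateaus around 28-30 FPS no matter
# how much GPU power you throw at it.
e6400 = [22, 28, 29, 29, 30]
print(find_cpu_bottleneck(e6400))   # bottleneck appears at tier 1
```

With a fast CPU the same sweep keeps climbing and returns None, which is the "pair it with the fastest CPU so nothing bottlenecks" case the post complains about.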
 


Uhm... but you can't compare the 7770 to the 7650G (I think) inside the APUs =/

I do agree that a "dirt cheap" computer would just rely on its iGPU, but you really won't do gaming on it at the same level as you would with a 7770. And I also said that the FX line had nothing to compete in the "dirt cheap" scenarios.

Now... the Llano A8 really falls flat on its face when compared against any discrete solution out there that costs a little more than USD $60. I can't swap out the A8 I have to take a look at Trinity ones, but I'd say the story is not far from the Llano APUs when testing them against any video card around USD $60 plus an Intel chip. You could use a G6xx as well, since they only lack DDR3-1333 support, and save a couple of greens. They're very near in price points for "cheap gaming econoboxes".

-----

You guys didn't read the second part of the article, huh... jeez. Long story short, it looks like the AMD drivers had something to do with it in the end.

Cheers!
 

jdwii

Splendid
Is it really only about desktops? Those APUs kick some serious butt in games on the cheap, and even though this site is probably still in denial about people playing games on a $400-600 laptop, it's extremely possible with an A8 or an A10. I was at Walmart today; my laptop is only $470, and that comes with an A8-3520M (if you went Intel, you would have to get a Pentium or a very low-end i3), and I can play many games. I'm playing Age of Empires 3 at 1080p on my TV just fine, and I'm playing Deus Ex on this thing, as well as Dead Island at 720p maxed. Plus I can play Resident Evil 5 and 4 fine. Not to mention the A8 is really easy to overclock; I have mine at 1.9GHz on all cores with the voltage at 1.100V. Now if only I could overclock the iGPU.


These APUs are not really aimed at desktops. Sure, OEMs use them, but that's because they're dirt cheap to put into a desktop; I noticed they only use the A4 or A6 APUs and charge $350-400 for a desktop. As for custom builders, which is an extremely small market, you can still build an A10 desktop for around $350-400, or upgrade your old machine with a board + APU + RAM and get a new PC with pretty good performance for around $220, with Phenom II X4-level performance plus a 9800 GT-level video card. I also think the FM2 boards will be compatible with the newer APUs. Not to mention you can overclock this APU to around 4.6GHz fairly easily.

I'm kinda wondering what Intel's Haswell is going to bring to the iGPU market, if the rumors are true.

Personally, if I was going to go with Intel for gaming, it would be an i5 at the very least, which is $220 on Newegg (not bad, really). To save money I would get a 6300 for $140 and put that extra $80 toward a better video card ($80 can make the difference between a 7750 and a 7850).
 

3ogdy

Distinguished


Yes, Steamroller is going to be on AM3+. I know I read that AMD was going to release more CPUs on the AM3+ platform after Piledriver, and I really doubt Steamroller would be the last AM3+ CPU they make.

http://hexus.net/tech/news/mainboard/45889-amd-looks-standardise-sockets-am3-fm2/

http://forums.pureoverclock.com/amd/18812-amd-sticks-socket-am3-steamroller.html

Long live AMD.
 

amdfangirl

Expert
Ambassador
Steamroller will likely be on AM3+, mainly because of die space/yield concerns.

The Piledriver die is large enough already, and larger dies mean lower yields. It wouldn't be too bad if AMD made a killing off each CPU sold, but they aren't. AMD needs to reach at least Sandy Bridge's level of performance per unit of die area to justify putting in a GPU.
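The "larger dies = lower yields" point can be made concrete with the classic Poisson yield model. A sketch (the defect density and die areas below are purely illustrative, not real AMD or GlobalFoundries figures):

```python
import math

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Poisson yield model: fraction of good dies ~ e^(-D * A), so
    yield falls off exponentially as die area grows."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

def dies_per_wafer(wafer_area_mm2, die_area_mm2):
    """Crude upper bound on candidate dies, ignoring edge loss."""
    return int(wafer_area_mm2 // die_area_mm2)

wafer = math.pi * (300 / 2) ** 2        # 300 mm wafer, ~70,686 mm^2
for area in (216, 315):                 # smaller vs larger die, made up
    good = dies_per_wafer(wafer, area) * poisson_yield(area, 0.002)
    print(area, round(good))
```

The larger die loses twice: fewer candidates fit on the wafer, and each one is more likely to catch a defect, which is exactly why bolting a GPU onto an already-big die is only worth it if each chip sells at a good margin.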
 


That is true, and yes, at the bottom line the iGPU is the criterion. But I was running an A10, which I replaced with an A8 5600K; running it with an HD 7870 GE I'm getting around 72 FPS in BF3 multiplayer on a 64-man map, which is ironically more than the i3 achieves with the same GPU, albeit marginally. So for gaming, discrete or integrated, the A-Series represents the best value right now at budget, not only on performance but on connectivity and features.

The cheapest FX build is around the following; just estimating now, will double-check.

FX 4170 $120
ASUS 890 GTD or 970 D3 $70
DDR3 1600 $30 (cheapest kit at 8GB)
HD7770 $130

$350, give or take. That said, with the Phenom II X6 1090T at $100, that is feasible too.
 
Steamroller will be on AM3+, and likely even Excavator; maybe there will be a chipset refresh, but the socket will be the same.

It seems like the Tom's CPU of the Month article has undergone a major shake-up, and at least it looks reasonable now. Though the FX-83xx parts easily game around the same level as the i5s and should occupy the lower top tier, while the FX-6300, 4300, and 1100T should game at the top two spots of the second tier. In my experience, a Trinity (and likely its Athlon II equivalents) will game around an i3/Pentium, or quite easily around a higher-end Core 2 Quad and Phenom II.
 
They interviewed a head spokesman from AMD late last year in the build-up to Vishera's release, and socket stability remains AMD's top agenda. We have already seen mobo vendor ASUS adapt the Crosshair V Formula with the Z update, which features faster memory speeds, an updated NB and chipset, updated Sound Blaster audio, and faster and better PCIe slots.
 
I am sure in time it will come, but the major necessity for motherboards comes down to RAM speeds, along with tidying up a few interfaces. The ASUS Crosshair V Formula-Z shows enough performance yield over the Crosshair V Formula to suggest that anything more than a refresh is not really needed at this juncture. I am sure AMD will at some point refresh its sockets, but I am assuming that may only be with Excavator or after, depending on which rumours one goes by.

I am sure the NB will eventually go, but I don't think it was the limiting factor in Vishera. After all, Vishera almost doubled its IMC throughput over Zambezi, which was itself considerably faster than Thuban/Deneb. The improvements saw memory throughput similar to X58 setups, which is no small feat considering it is essentially a Zambezi rehash.
 