AMD CPU speculation... and expert conjecture


GOM3RPLY3R

Honorable
Mar 16, 2013
658
0
11,010


I just want to say, keep in mind that the architecture of console games is MUCH lower end, so the power they need is minimal. For example, you can take a PS3 game and a PC game and compare them to each other. There is a PS3 emulator out there where you can use a PS3 Blu-ray drive and software that makes it possible to play the game on your system's hardware. Let's take BF3 for instance. If you compared the quality of the two versions, with the PC version on the low preset, or even the lowest settings possible, and ran them full screen at 720p, the PC would still look much better. Yeah, the console version will get a lot more frames, but only because its architecture is very simple.

Now yes, you will spend about $1000+ for a system that can run the PC version on at least high settings at 30+ frames, but the overall picture quality would be magnificent, plus it's at 1080p instead of 720p.

Just as a reference, here is a video from about a year ago from NCIX comparing the Xbox 360 and a PC with a GTX 670 and an i7-3770K, both at stock clocks.
 
Just remember, guys: TDP does not mean actual power usage, only the engineered maximum that an OEM should design for. That's why the "K" chips have a higher TDP than the non-K chips; they don't natively run hotter, they just have a much higher engineered thermal ceiling. This is only important for things like auto-OC / boosting.
 
Gamer, you only demonstrated that their executable launcher was built with MSVC, which makes complete sense considering it's the #1 developer tool of choice. You know the actual code lives in the various libraries that come packaged with a program; those are what need to be checked. Finally, MSVC can be configured to pass code through an optional compiler: you still get a final executable stamped with MSVC, but it wasn't the MS compiler that actually generated the machine code. When I get home I'll pull up a bunch of signatures and demonstrate.
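For anyone who wants to try this kind of check themselves, here is a rough sketch of one way to do it in Python, assuming the third-party pefile module; the path is a placeholder. It reads the linker version stamped into the PE header and lists the DLLs the binary imports, which is the sort of evidence being discussed.

```python
# Rough sketch: inspect which toolchain a Windows binary was linked with.
# Assumes the third-party "pefile" module (pip install pefile); the path is a placeholder.
import pefile

def inspect_binary(path):
    pe = pefile.PE(path)
    opt = pe.OPTIONAL_HEADER
    # The Microsoft linker stamps its version here (e.g. 9.x = VS2008, 10.x = VS2010).
    print("Linker version: %d.%d" % (opt.MajorLinkerVersion, opt.MinorLinkerVersion))
    # The import table shows which runtime DLLs are linked in;
    # MSVCR*.dll is a strong hint of the Microsoft C runtime.
    if hasattr(pe, "DIRECTORY_ENTRY_IMPORT"):
        for entry in pe.DIRECTORY_ENTRY_IMPORT:
            print("Imports:", entry.dll.decode(errors="replace"))

inspect_binary(r"C:\Games\SomeGame\game.exe")  # placeholder path
```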
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780



But of course, most are DirectX based, which is an MSFT standard; no game is ICC based. I could have told you that without any analysis.

And the problem is not the compiler; I can cheat as much as I want using exactly the same compiler... it's a question of flags, of which there can be dozens, but games are a different breed altogether.

But I think that's enough; everybody sees what they want to see, including devs with their flags, or they don't care about the most crucial thing of all... it's not my fault, and it's perfectly fine with me.

Side note: there is only one thing that can make me call BS... and that is MD5 signatures on those blobs. And this is very illustrative: the discussion is getting aggressive over pieces of code that carry no guarantee of anything, not even of origin (where is the MD5?).

Isn't it ironic?? lol... insane even, brrrr...
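For what it's worth, publishing a checksum takes one short script. A minimal Python sketch (the folder path is a placeholder) that prints the MD5 of every DLL in a directory, so two people can confirm they are looking at identical binaries:

```python
# Minimal sketch: print an MD5 fingerprint for every DLL in a folder, so two
# people can confirm they are inspecting identical binaries. Folder is a placeholder.
import hashlib
from pathlib import Path

def md5_of(path, chunk_size=1 << 20):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for dll in sorted(Path(r"C:\Games\SomeGame").glob("*.dll")):  # placeholder folder
    print(md5_of(dll), dll.name)
```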

EDIT:
People like you continue to propagate the myth that there's some systematic bias against AMD, and that if only it were to go away, AMD would outperform Intel by double in all situations. Stop it.

I will be delighted; I don't want to ignore anyone, but see compilers in action:
http://www.phoronix.com/scan.php?page=article&item=llvm_clang33_3way&num=4

What is that with the Postgres pgbench result!?... an FX-8350 almost 7x (seven times) faster than an i7-3960X?

Quite possible; it happens all the time. The compiler has LOTS of influence, and the result doesn't represent much (the same as everyone else's benches)... but some only see biasing, this time clearly against Intel.
 

McTash

Honorable
Jun 8, 2013
37
0
10,530


DirectX is an abstraction layer over the system and drivers, is it not (like an expanded OpenGL)? What are the system drivers compiled with? I'd assume that AMD would not use the Intel compiler for their drivers.

EDIT: Quick research indicates that trying to build Windows drivers with anything other than the Microsoft compiler can be problematic.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


I have done my own research by taking open-source programs, running benchmarks in Windows, and then comparing the results to Gentoo with optimized flags.

So far I have only tested Firefox, x264, Blender, and LAME; however:

1. Firefox shows little gain in Kraken; I think I am doing something wrong with my CFLAGS and USE flags. I didn't run other browser benchmarks.
2. x264 is a benchmark where the FX 8350 is very strong; it showed about a 10% increase in performance.
3. Blender is a benchmark where the FX 8350 does poorly, but not overly poorly. I saw a doubling in performance by using Gentoo.
4. LAME saw a 60% increase, which would actually make a stock FX 8350 faster than a stock 3770K in LAME, as per Tom's Hardware's FX 8350 review.

So, overall, I'm pretty much seeing anywhere between a 3% gain and a more than 100% gain just by playing with the same compiler. It is rather foolish to take CPU benchmarks at face value when you have no idea how they were compiled, let alone with what. But even knowing with what can mean little; in the end it depends on the choices the developers made, and that more than likely boils down to money, something AMD has traditionally not thrown at developers.
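To make the "same compiler, different flags" point concrete, here is a rough, self-contained sketch (requires gcc; the toy kernel and the flag sets are illustrative, not what was actually benchmarked above): it builds one small C loop twice with gcc and times both binaries.

```python
# Rough sketch of the "same compiler, different flags" experiment: build one
# small C kernel twice with gcc and time both binaries. The toy kernel and the
# flag sets are illustrative, not what was actually benchmarked in this post.
import os
import subprocess
import tempfile
import time

C_SOURCE = r"""
#include <stdio.h>
int main(void) {
    double acc = 0.0;
    for (long i = 1; i < 200000000L; i++)
        acc += 1.0 / (double)i;   /* simple FP loop the optimizer can chew on */
    printf("%f\n", acc);
    return 0;
}
"""

def build_and_time(flags):
    workdir = tempfile.mkdtemp()
    src = os.path.join(workdir, "kernel.c")
    exe = os.path.join(workdir, "kernel")
    with open(src, "w") as f:
        f.write(C_SOURCE)
    subprocess.run(["gcc", src, "-o", exe] + flags, check=True)
    start = time.perf_counter()
    subprocess.run([exe], check=True, stdout=subprocess.DEVNULL)
    return time.perf_counter() - start

if __name__ == "__main__":
    for flags in (["-O2"], ["-O2", "-march=native", "-funroll-loops"]):
        print(" ".join(flags), "-> %.2f s" % build_and_time(flags))
```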

I really wish I had a modern Intel chip to compare against (I only have an i7 920, and it needs a new PSU, or it's just dead from overclocking).
 
Desktop markets are diverse, but if you are referring to gamers and overclockers then high-powered silicon candy is still a draw, and this is why AMD has diversified its products with APUs. Kabini suits lower-powered systems, media platforms, regular workbenches, etc., while the FX-9000 series, which I dub AMD's extreme line-up, is free to be aggressive on clock speeds. As we already know, FX is for a specific user; it may end up as AMD's dedicated enthusiast line-up while Richland/Trinity, Kabini, and Temash fill in the spaces for the more profitable sectors. The long and short of what I am trying to say is that these new FX parts, like FX in general, should be viewed as gaming or enthusiast platforms rather than general-purpose systems.

If, say, the base clock is 4GHz with a 5GHz turbo on the 2-, 4-, and 8-core profiles, we know that turbo boosts single-threaded performance, which is currently what drives games.
 

turboflame

Distinguished
Aug 6, 2006
1,046
0
19,290


Most gamers aren't well informed about the Xbox One's flaws and will buy it anyway. I remember when the PS3 came out it looked like a complete disaster: it had a Blu-ray drive when there was still a format war with HD-DVD, the Cell processor was a developer's nightmare, expensive XDR RAM for system memory (it didn't even have a unified memory pool like the 360), an inferior GPU, and of course the 599 US dollars it retailed for.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
Found an interesting site, a Polish site.

A bit of Kaveri info; most interesting is on page 3: the GDDR5 SO-DIMM is mechanically compatible with a DDR4 DIMM...

http://translate.googleusercontent.com/translate_c?act=url&depth=1&hl=en&ie=UTF8&prev=_t&rurl=translate.google.com&sl=auto&tl=en&u=http://pclab.pl/art52841.html&usg=ALkJrhh0iAFJPH6qrjLrE_glPVy4IFAc7A

And if you ever wondered how the oldies stand up to today's standards...

http://translate.googleusercontent.com/translate_c?act=url&depth=1&hl=en&ie=UTF8&prev=_t&rurl=translate.google.com&sl=auto&tl=en&u=http://pclab.pl/art50000.html&usg=ALkJrhi4CzHpjUENjrXu2qnqFEOzQjtNrg

160 CPUs, 52 tests, 113 pages... that's a lot of input.

 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


The Blu-ray drive in the PS3 is one of the main factors that won Sony the HD format war.

The key for the Xbox One is the ESRAM, which should be low latency. Which one will be faster? ... I guess when they get tested, we will see how bandwidth (PS4) vs. latency (XB1) plays out.
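As a back-of-the-envelope way to think about that trade-off, a simple transfer-time model (time ≈ latency + bytes / bandwidth) shows latency dominating small accesses and bandwidth dominating large streaming ones. All of the numbers below are hypothetical placeholders, not real PS4/XB1 figures:

```python
# Toy model of the bandwidth-vs-latency trade-off. All figures are hypothetical
# placeholders, NOT measured PS4 / Xbox One numbers.
def transfer_time_ns(bytes_moved, latency_ns, bandwidth_gb_s):
    """Crude estimate: fixed access latency plus streaming time."""
    return latency_ns + bytes_moved / (bandwidth_gb_s * 1e9) * 1e9

pools = {
    "wide-but-far  (GDDR5-like)": dict(latency_ns=300, bandwidth_gb_s=176),
    "small-but-near (ESRAM-like)": dict(latency_ns=50, bandwidth_gb_s=100),
}

for size in (64, 4096, 1 << 20):  # a cache line, a page, a 1 MiB texture tile
    for name, p in pools.items():
        print("%8d B  %s: %10.1f ns" % (size, name, transfer_time_ns(size, **p)))
```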
 

:O Overwhelming amount of data, too much to wade through, let alone the translation requirement.
So I only skimmed through the games (duh). The BF3 multiplayer (32-man) performance caught my eye: a Core i3 3240 keeping up with stock Vishera (most of them), beating Zambezi (most of them), and the Trinitys, Thubans, and Denebs too... I'd always assumed BF3 MP was one of the games where a multi-core AMD would always definitively beat a Core i3... :heink: I wonder how consistently repeatable the test was...
I think their overclocked perf/price chart is a bit flawed: they overclocked the CPUs, but did they take cooling cost into account?

edit:
AMD Athlon X4 760K CPU spotted
http://www.cpu-world.com/news_2013/2013061101_AMD_Athlon_X4_760K_CPU_spotted.html

and a bit off topic....
kitten on a tablet
http://vr-zone.com/articles/exclusivesony-xperia-zu-to-launch-with-android-4-2-2-new-ui-in-tow/36947.html
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I am sorry to inform you that Haswell desktop chips... are focused on the desktop. But they are a step backward from Ivy Bridge desktops.

Haswell GT3e is being massively rejected by notebook OEMs, because it is both power hungry and expensive.

Ask Gigabyte as well. They presented an alternative to the NUC that uses AMD Kabini. They also have a Haswell version... with serious thermal issues.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Another misleading review from AT. They claim a +50% gain in battery efficiency coming from Haswell, but that is wrong. The whole new S7 has been re-engineered for power efficiency, including a bigger battery, which by itself improves battery life and means AT's normalization is incorrect. Their claimed efficiency gains are pure hype.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


LOL, you completely misunderstood the point.

ICC is used in most synthetic benchmarks. This generates a double bias. First bias: ICC uses the "cripple AMD" function, so those benchmarks (e.g. SYSmark) gave fake scores. Second bias: ICC is almost never used in real-life software, which implies that those synthetic benchmarks are rather useless even for measuring the real performance of Intel chips.

Regarding games, there are two points to consider. First, there are games optimized for Intel; they are listed on Intel's site and include Batman, Skyrim, Civ... Precisely games such as Skyrim and Civ are notorious for running badly on AMD chips.

When you see an AMD review, or an AMD vs. Intel comparison, that uses four games and two or three of them are optimized for Intel, you know how fair the review/comparison is.

Second point: due to the old console designs, most games are poorly threaded and run better on Intel 2-4 core chips than on AMD 4-8 core chips. This is going to change with next-gen games. Crysis 3 already runs faster on an FX-8350 than on an i7-3770K, for instance.
 


People need to realize: aside from MSVC 2003 and later, everything MSFT has ever put out is either stolen tech or crap. DOS was purchased, Windows 9x was basically stolen from Xerox, NT was designed by the VAX/VMS guys, MSVC 6 stank next to Borland, IE stank next to Netscape, iPod > Zune, etc.

And the saddest part is, they are oblivious to this. That's why their marketing is always so bad. MSFT is basically incapable of designing and releasing a decent product on its own.
 


The PS3 supported 1080p output; almost all games were 720p native.



I checked the actual .exe, not the launcher (which is a separate program). And no one uses the pass-through functionality of MSVC, mainly because it makes you throw out all your debugging. Never mind that the only way you gain performance is if you go through ICC, which IMO no one uses. Thirdly, the programs I use can pick up multiple signatures, and in NO cases were non-MSVC signatures present. [Code obfuscators were picked up in all cases, for instance.] I can also check .dll's with minimal effort, and guess what? Same deal: MSVC.

But of course, most are DirectX based, which is an MSFT standard; no game is ICC based. I could have told you that without any analysis.

Any compiler can access the DX runtime: just import the .dll into your code and call the relevant functions. That shouldn't cause the program to compile any differently, since all the .dll's are external to the program.
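A rough illustration of that point, using Python's ctypes (Windows-only, and it stops at resolving the Direct3D 9 entry point rather than creating a device):

```python
# Windows-only sketch: any language or toolchain can load the DirectX runtime
# DLL and resolve its exported entry points; nothing here depends on which
# compiler built the calling program. The symbol is only resolved, not called.
import ctypes

d3d9 = ctypes.WinDLL("d3d9.dll")       # the Direct3D 9 runtime
create = d3d9.Direct3DCreate9          # exported factory function
create.restype = ctypes.c_void_p       # would return an IDirect3D9* (COM pointer)
create.argtypes = [ctypes.c_uint]      # takes the SDK version constant

print("Resolved Direct3DCreate9:", create)
```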
 


By "most", you mean "a handful". For the cross-platform stuff, I've seen a LOT of GCC.

Secondly, there never was a "cripple-AMD" function. Intel simply decided to optimize more aggressively for its own chips. You could always specify via a compiler switch what level to compile to.

And as an aside, truly optimal code would show no bias, since it wouldn't need any optimization. Likewise, disabling the optimizer entirely would yield CPU-independent code (though it would run like crap). [As a general rule, when comparing like architectures, disabling the optimizer is probably the *best* way to get a fair performance benchmark.]

Regarding games, there are two points to consider. First, there are games optimized for Intel; they are listed on Intel's site and include Batman, Skyrim, Civ... Precisely games such as Skyrim and Civ are notorious for running badly on AMD chips.

Question: anyone care to tell me what "optimized for Intel/AMD" actually means? Because NO ONE plops down manual CPU opcodes, and it's not like you are going to see low-level x86 stubs throughout the code to make Intel/AMD chips run better/worse. It's all 100% marketing, pure and simple. Granted, some games, by their design, are likely biased; you see this with GPUs too: games that need memory bandwidth favor AMD, games that rely on pure shader performance favor NVIDIA. No intentional bias, just the outcome of the design chosen.

The only time I can recall ever "biasing" something was shortly after the first C2Ds showed up. As the L2 was split oddly (cores 0 and 2 share the same L2, as do 1 and 3), there was a bias toward using cores within the same L2 context. But then again, I only did that for apps whose performance was dominated by cache misses, which was exactly ONE program in about a decade.
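For the curious, a sketch of how that kind of cache-aware pinning might look today on Linux (the core IDs are illustrative; the actual sharing topology depends on the CPU):

```python
# Linux-only sketch: pin the current process to a pair of cores that (on the
# machine in question) share an L2, so threads communicating through that cache
# stay local. The core IDs are illustrative; the real topology can be read from
# /sys/devices/system/cpu/cpu*/cache/index2/shared_cpu_list.
import os

shared_l2_cores = {0, 2}                  # hypothetical pair sharing an L2
os.sched_setaffinity(0, shared_l2_cores)  # pid 0 = the calling process
print("Now restricted to CPUs:", os.sched_getaffinity(0))
```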



The irony is that the current consoles NEED to be "threaded" better just to run at all, which makes the whole argument silly, because IT'S WRONG. The 360 has a tri-core CPU with 2-way SMT, allowing 6 threads to be executed at a time. The PS3 has 7 functional SPEs, allowing 7 threads to be executed at a time. Not bad considering that at the time, PCs had JUST introduced dual-core CPUs.

Crysis 3 scales better because the devs moved processing off the GPU, which I again mention is not optimal programming, given how much faster GPU performance grows per generation. You've essentially created a CPU bottleneck which won't go away, rather than a short-term GPU bottleneck. Simply put, while other, GPU-bottlenecked games will run faster generation over generation, you are going to find that Crysis 3 performance will NOT improve, because it is heavily bottlenecked by the CPU, which is not showing significant performance gains generation over generation.

Seriously, I could peg 16 cores at 100% easily: just handle all the rendering in software. Sure, the program will get maybe 5 FPS, but hey, it threads well!
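To underline how cheap "100% usage on every core" is as a metric, a throwaway sketch that pegs every core with useless work:

```python
# Throwaway sketch: peg every core at 100% with useless work for a few seconds.
# "Scales to N cores" by itself says nothing about doing anything useful.
import multiprocessing as mp
import time

def busy(seconds):
    end = time.perf_counter() + seconds
    x = 0
    while time.perf_counter() < end:
        x += 1              # pointless work, but it keeps the core pegged
    return x

if __name__ == "__main__":
    cores = mp.cpu_count()
    with mp.Pool(cores) as pool:
        pool.map(busy, [5.0] * cores)   # five seconds of "perfect scaling"
    print("Pegged %d cores. Accomplished: nothing." % cores)
```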
 

jdwii

Splendid


"keep in mind that the architecture of the games is MUCH lower end, so the power it needs it minimal."

In a way, no: with 50% more shaders and 2.5 times more memory bandwidth, every game will look better on a PS4, even more so at launch. Not to mention the PS3 got more exclusives, so I'm guessing we will see better-looking games on the PS4 as well, especially since those exclusives will push the system.
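Those two ratios are easy to sanity-check against the commonly cited launch-era specs (treat the exact figures as approximate):

```python
# Quick sanity check of the "50% more shaders, ~2.5x the memory bandwidth" claim,
# using commonly cited launch-era figures (treat them as approximate).
ps4 = {"shaders": 1152, "bandwidth_gb_s": 176.0}  # 18 GCN CUs, GDDR5
xb1 = {"shaders": 768,  "bandwidth_gb_s": 68.3}   # 12 GCN CUs, DDR3 (ESRAM not counted)

print("Shader ratio:    %.2fx" % (ps4["shaders"] / xb1["shaders"]))                 # ~1.50x
print("Bandwidth ratio: %.2fx" % (ps4["bandwidth_gb_s"] / xb1["bandwidth_gb_s"]))   # ~2.58x
```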
 

montosaurous

Honorable
Aug 21, 2012
1,055
0
11,360


If Microsoft steals everything, then so does Apple. And everything Apple makes is complete crap anyway. The only genius at Apple is their marketing. I'm not defending Microsoft, I'm just saying they could be a lot worse.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
Secondly, there never was a "cripple-AMD" function. Intel simply decided to optimize more aggressively for its own chips. You could always specify via a compiler switch what level to compile to.

Forcing a CPU to run 386 code without SSE, MMX, etc. isn't crippling? ROFL. Call it optimizing if you like; I call it crippling, considering AMD PAID INTEL FOR SSE RIGHTS.

And if you use that switch, how well does backward compatibility work?
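For readers following along, the behavior being argued about is a runtime dispatcher: the compiled binary contains several code paths and picks one at startup based on what CPUID reports. The sketch below is only a schematic of that pattern (the vendor strings are real CPUID values; everything else is illustrative), not anyone's actual compiler output:

```python
# Schematic of the runtime-dispatch pattern under discussion: the binary ships
# several code paths and picks one at startup based on CPUID results.
# Illustration only, not ICC's (or anyone's) actual generated code.
def fast_path(data):       # stand-in for an SSE/AVX-optimized routine
    return sum(data)

def baseline_path(data):   # stand-in for a plain "386-compatible" routine
    total = 0
    for x in data:
        total += x
    return total

def dispatch(vendor, has_sse2):
    # Keyed on ISA features alone, any CPU reporting SSE2 would get the fast
    # path. Keying on the vendor string as well ("GenuineIntel" vs
    # "AuthenticAMD") is what critics call "crippling".
    if vendor == "GenuineIntel" and has_sse2:
        return fast_path
    return baseline_path

print(dispatch("GenuineIntel", True).__name__)   # fast_path
print(dispatch("AuthenticAMD", True).__name__)   # baseline_path, despite SSE2
```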
 


Nothing is stopping developers from either using another compiler or manually plopping down SSE opcodes.

It's Intel's compiler; they can do whatever the hell they want with it.
 