AMD CPU speculation... and expert conjecture


mayankleoboy1

Right now, it's not happening. One day, I think it will have to.

W.r.t. games, I don't think we will see more threading. Reason: no need. Games aren't going to get any more realistic without going deep into physical modeling, and that is too computationally expensive right now.
 

noob2222


You keep saying it's impossible from a programming point of view. Here you are stating the truth: no one wants to bother spending the money. It's not that it simply CAN'T BE DONE, it's simply "not cost effective".

When Intel quits making dual-core CPUs, they will offer vendors some dough to rework the software. That's one way they keep their edge: market software to their hardware. TSX, for example, won't make it on its own; Intel will have to push it with money and make their CPUs look superior so they can sell more. Same with AMD and OpenCL: they need to feed vendors some money to get them to use it.

... because they found that after about 32 or so CPUs, they simply COULD NOT SCALE ANYMORE

And this is why we have a hard time getting past 4? Not to mention software hasn't changed a bit since the '80s.
 

mayankleoboy1



LRB users are working on problems that are already embarrassingly parallel.
It's just a choice of whether to use CUDA, OpenCL, or x86.
 


Incorrect.

First and foremost, there's the silly assumption that more threads = more performance, which is generally false. The more threads you introduce, the more arbitrarily you have to break up your computations, and because you CANNOT GUARANTEE WHEN THE OS IS GOING TO SCHEDULE A THREAD, the more you break up your computations, the longer the computation can take, by virtue of threads not getting run in a timely manner.
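
For what it's worth, here is a minimal, purely illustrative sketch of that effect (the workload size and constants are made up): the total work is fixed, and once the thread count passes the core count, the extra threads mostly add thread-creation and scheduling overhead instead of speed.

// oversubscribe.cpp -- same total work, split across more and more threads.
// Build with e.g.: g++ -O2 -std=c++17 oversubscribe.cpp -pthread
#include <chrono>
#include <cstdint>
#include <functional>
#include <iostream>
#include <thread>
#include <vector>

// Burn through one slice of a fixed-size workload.
static void work_slice(std::uint64_t begin, std::uint64_t end, std::uint64_t& out) {
    std::uint64_t acc = 0;
    for (std::uint64_t i = begin; i < end; ++i)
        acc += i * 2654435761ull;              // arbitrary busywork
    out = acc;
}

static double run_with(unsigned threads, std::uint64_t total) {
    std::vector<std::thread> pool;
    std::vector<std::uint64_t> results(threads, 0);
    auto t0 = std::chrono::steady_clock::now();
    for (unsigned t = 0; t < threads; ++t) {
        std::uint64_t begin = total / threads * t;
        std::uint64_t end   = (t + 1 == threads) ? total : total / threads * (t + 1);
        pool.emplace_back(work_slice, begin, end, std::ref(results[t]));
    }
    for (auto& th : pool) th.join();           // wait for the OS to get around to all of them
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    const std::uint64_t total = 200'000'000;   // fixed amount of work
    std::cout << "hardware threads: " << std::thread::hardware_concurrency() << "\n";
    for (unsigned n : {1u, 2u, 4u, 8u, 64u, 512u})
        std::cout << n << " threads: " << run_with(n, total) << " ms\n";
}

On a typical quad-core, the 64- and 512-thread runs come out no faster (and often slower) than the 4- or 8-thread runs, because most of those threads spend their time waiting on the scheduler rather than doing work.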

Secondly, games typically use 40+ threads, easy. Heck, most game launchers alone are typically in the teens. The problem is, as noted above, it's not efficient to make every thread a high-workload thread, so you see uneven performance when you look at work done per CPU core in Task Manager.

You keep saying it's impossible from a programming point of view. Here you are stating the truth: no one wants to bother spending the money. It's not that it simply CAN'T BE DONE, it's simply "not cost effective".

No, what I said was that it's not cost effective to constantly re-develop game engines on a game-by-game basis to squeeze every ounce of power possible out of them. That is independent of the threading issue.

When Intel quits making dual-core CPUs, they will offer vendors some dough to rework the software. That's one way they keep their edge: market software to their hardware. TSX, for example, won't make it on its own; Intel will have to push it with money and make their CPUs look superior so they can sell more. Same with AMD and OpenCL: they need to feed vendors some money to get them to use it.

Please, stop it. I've explained, as clearly as I can, using REAL WORLD EXAMPLES [MIT in the '80s], why threading beyond a certain point leads to negative results. And every software engineer knows this.

Understand also what OpenCL does: it is merely a framework that allows co-development using both the CPU and GPU, rather than just the CPU. On its own, it brings no inherent threading benefit. When you look at the tasks that made early use of OpenCL, you see programs that by their nature do scale well (encoding, for instance, can be broken up into small chunks). CUDA has similar issues: you need to feed it very large datasets to see any real performance out of it (which makes sense, given the GPU architecture). Without a large enough dataset, performance on CUDA is horrid. But hundreds of GPU cores, given enough work, will outperform four CPU cores, no matter how fast those cores are.
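
To make the "encoding breaks up into small chunks" point concrete, here is a rough sketch (the chunk size and the dummy per-block transform are invented for illustration, not taken from any real codec): every block depends only on its own bytes, so blocks can be farmed out to any number of workers, which is exactly the shape of problem that maps well onto OpenCL/CUDA-style hardware.

// chunked_transform.cpp -- stand-in for an encoder-style block loop.
#include <algorithm>
#include <cstdint>
#include <future>
#include <iostream>
#include <vector>

// Dummy per-block transform (an FNV-style hash); a real encoder would
// compress or transform the block here instead.
static std::uint32_t process_chunk(const std::uint8_t* data, std::size_t len) {
    std::uint32_t h = 2166136261u;
    for (std::size_t i = 0; i < len; ++i)
        h = (h ^ data[i]) * 16777619u;
    return h;
}

int main() {
    std::vector<std::uint8_t> input(8 * 1024 * 1024, 0x5a);   // pretend frame data
    const std::size_t chunk = 64 * 1024;

    std::vector<std::future<std::uint32_t>> jobs;
    for (std::size_t off = 0; off < input.size(); off += chunk) {
        std::size_t len = std::min(chunk, input.size() - off);
        // No ordering and no shared state between blocks: embarrassingly parallel.
        jobs.push_back(std::async(std::launch::async, process_chunk,
                                  input.data() + off, len));
    }

    std::uint32_t combined = 0;
    for (auto& j : jobs) combined ^= j.get();
    std::cout << "blocks: " << jobs.size() << "  result: " << combined << "\n";
}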


Now, if you care to argue, go find someone who has written software for both commercial and integrated systems, has designed game engines, and has developed internals for a few proprietary OSes. I doubt anyone with those credentials is going to seriously disagree with me on this topic. Based on the way computers themselves are currently designed, you are NOT going to scale well in 90%+ of all tasks. And the stuff that does scale well tends to do so because of very large datasets, rather than by virtue of design.
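
As a back-of-the-envelope illustration of why most workloads stop scaling (the 90% parallel fraction here is an assumption picked for the example, not a measurement): even a program whose work is 90% perfectly parallel is capped by Amdahl's law at under 10x, no matter how many cores you add.

// amdahl.cpp -- speedup = 1 / ((1 - p) + p / n), p = parallel fraction.
#include <iostream>

int main() {
    const double p = 0.90;                       // assume 90% of the work parallelizes
    for (int n : {1, 2, 4, 8, 16, 64, 1024}) {
        double speedup = 1.0 / ((1.0 - p) + p / n);
        std::cout << n << " cores -> " << speedup << "x\n";
    }
    // Even 1024 cores stay below 10x, because the serial 10% dominates
    // once the parallel part has shrunk toward zero.
}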
 


Well, Ray Tracing solves a lot of problems on the graphical front, but we aren't there yet in terms of the necessary processing power. Still, all the advanced lighting effects that current methods try to emulate are a natural consequence of Ray Tracing, with no extra performance hit to boot.

The main issue with current methods is that some effects are simply ridiculously expensive to produce computationally. Light scattering, reflection, and refraction are near impossible to handle realistically. So you get cheap attempts at minor improvements over time as resources improve, but they still don't get much closer to perfectly replicating these effects. In Ray Tracing, though, these are a natural outcome of the technique, and are basically free to compute.

Now, Ray Tracing will never be 100% photo-accurate, but even if it's only 80% photo-accurate, it would be a heck of a lot cheaper (computationally) than using the current methodology to reach that point.

I'm expecting we'll start seeing a LOT of Ray Tracing/Casting demos over the next few years...
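
To show why reflections "fall out" of ray tracing rather than being a bolted-on effect, here is a toy sketch (one hard-coded sphere and camera ray, all hypothetical values): the reflected direction is a single vector formula at the hit point, and recursing on that ray is the entire "effect".

// toy_reflect.cpp -- reflection in a ray tracer is just one more ray:
// r = d - 2 * dot(d, n) * n at the hit point.
#include <cmath>
#include <iostream>

struct Vec { double x, y, z; };
static Vec    operator-(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec    operator+(Vec a, Vec b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec    operator*(double s, Vec a) { return {s * a.x, s * a.y, s * a.z}; }
static double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance along a normalized ray to the sphere, or -1 on a miss.
static double hit_sphere(Vec origin, Vec dir, Vec center, double radius) {
    Vec oc = origin - center;
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * c;
    return disc < 0 ? -1.0 : (-b - std::sqrt(disc)) / 2.0;
}

int main() {
    Vec origin{0, 0, 0}, dir{0, 0, -1};          // camera ray straight down -z
    Vec center{0, 0, -3};                        // unit sphere three units away
    double t = hit_sphere(origin, dir, center, 1.0);
    if (t > 0) {
        Vec hit = origin + t * dir;
        Vec n   = hit - center;                  // unit sphere -> already normalized
        Vec r   = dir - 2.0 * dot(dir, n) * n;   // the whole "reflection effect"
        std::cout << "hit at t=" << t << ", reflected dir = ("
                  << r.x << ", " << r.y << ", " << r.z << ")\n";
        // A full tracer would now trace r recursively; no screen-space hack needed.
    }
}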
 
And this is a new starting point.
It's been needed. It's not that it can't be done, but starting from where we are now won't get us there.

The x86 model is being tried in new ways, with extensions, and it will slowly evolve, but to buck the system, well, again, it comes down to money, investment, returns, and ease/understanding and widespread applicability.
Regardless of the old methods tried before, we aren't very far into multi-core usage at all, and the enterprise fields don't correlate to DT, as their SW/workloads are optimised per usage, which simply isn't possible on DT.
 

mayankleoboy1



Sort of like tessellation. IIRC, it was introduced to lessen the burden on programmers working with textures, with essentially free computation cost.
But the current implementation is nowhere near that vision. Tessellation is an "added feature" rather than something built intrinsically into the engine, and enabling tessellation takes a huge performance hit.
 

dragon5677

I hope AMD does a 360 from their current situation, like releasing a beast of an FX processor and sending Haswell on vacation... such an improbable dream, lol.

But seriously, I keep hearing that several years back the FX series scared the *** out of Intel?
I hear that a lot; I was only a kid back then...
 

m32

Poor performance and monster power draw were the flaws of FX. If AMD had corrected even one of those, I think they would be headed in the right direction.

One can only hope SR comes out right. I don't care if it gets delayed... just get it right.
 

Doing a 360 would mean they keep going in the same direction.
 

noob2222


This is why Intel bribed vendors and strangled AMD's R&D budget to nothing.
 

mayankleoboy1



Intel strangled AMD's R&D budget? Is Intel on AMD's board?
You know what, I think Intel was f***ing brilliant to do this. Smothering competition is the first step to getting huge profits.

Edit: You forgot to add that Intel's compilers actively gimp code execution on AMD processors. Then Gamerk316 could have told you that 97% of Windows devs use MSVC for compiling, which makes equally bad code for both Intel and AMD.
 
^^ Not bad; MSVC actually has a VERY good optimizer stage. But Intel's compiler yields the best code output for BOTH Intel and AMD (at least as far as my limited testing goes). Intel just optimizes more for Intel than for AMD.

Seriously, do people not realize that Visual C++ 6 is probably still the most used compiler out there? Organizations typically do NOT invest much in their toolchain...
 

noob2222

Wonder why review sites only use a select few games for their "benchmarking"? Kinda funny, when you look into it, who helped develop those few games.

Any other "benchmark" is dubbed a "GPU bottleneck" and thrown out, even though it's closer to the truth that AMD is only slightly behind Intel when the "benchmark" is not compiled by Intel.

Yes, it may be brilliant to stifle the competition into oblivion, but it's also illegal. That's the only thing that stopped Intel from killing AMD completely. Some people, however, feel that monopolies are a good thing to have. I'm almost hoping for that to happen so I can laugh in your face when you have to shell out $500 for an i3 and $700 for a motherboard.
 

noob2222



Really? Can't at all... ya...

http://www.ncsa.illinois.edu/UserInfo/Resources/Software/Intel/Compilers/10.0/main_cls/mergedProjects/optaps_cls/common/optaps_openmp_thread_affinity.htm

http://msdn.microsoft.com/en-us/library/ms684251(v=vs.85).aspx

Kinda looks to me like you can go from letting the OS decide all the way down to pinning a specific thread to 4, 2, or 1 specific core(s) in an 8-core system.

It's even available in the Intel compiler. http://software.intel.com/en-us/articles/thread-affinity-compiler-options-and-environment-variables/
 


*sigh*

Thread affinity simply tells a thread which processors it's allowed to run on. It has NOTHING to do with when the OS decides to schedule the thread.

Secondly, barring VERY controlled circumstances, it does not make any sense whatsoever to mess with the default thread affinity. The only time this makes sense in a user-land program on Windows is if you want to ensure two threads don't share the same core. The reason is simple: you cannot guarantee that your thread is going to run when you want it to run. If you limit your thread's affinity to one processor core, and some higher-priority task is already running on that core, your thread will not run. Granted, Windows adjusts priority over time, but this WILL degrade performance.
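
For reference, a minimal Win32 sketch of that distinction (SetThreadAffinityMask is the real API; pinning to core 0 is just an example choice): the mask only restricts where the thread is allowed to run, and says nothing about when the scheduler will actually run it.

// affinity_demo.cpp -- Windows-only: confine the current thread to core 0.
#include <windows.h>
#include <iostream>

int main() {
    HANDLE self = GetCurrentThread();

    // Bit 0 set -> this thread may only be scheduled on logical processor 0.
    DWORD_PTR oldMask = SetThreadAffinityMask(self, 0x1);
    if (oldMask == 0) {
        std::cerr << "SetThreadAffinityMask failed: " << GetLastError() << "\n";
        return 1;
    }

    // WHERE is now fixed; WHEN is still entirely the scheduler's call. If a
    // higher-priority thread already owns core 0, this thread simply waits.
    std::cout << "previous mask: 0x" << std::hex << oldMask << "\n";
    return 0;
}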

So, as per usual, you are confusing (and I suspect intentionally at this point) two totally different subjects: thread affinity and scheduling.
 


Stop it: you do not bench the CPU using a benchmark whose performance is bound by the GPU. It's really that simple. That's also why FPS is a horrid tool for benchmarking, and why Tech Report's methodology of measuring average frame latency is far superior.
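
To make that concrete, a quick sketch with invented frame times: two runs can post a similar average FPS while one of them stutters badly, and only the per-frame times (e.g. the 99th percentile) expose it.

// frametimes.cpp -- why average FPS hides stutter (frame times are made up).
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

static void report(const char* name, std::vector<double> ms) {
    double total   = std::accumulate(ms.begin(), ms.end(), 0.0);
    double avg_fps = 1000.0 * ms.size() / total;
    std::sort(ms.begin(), ms.end());
    double p99 = ms[static_cast<std::size_t>(0.99 * (ms.size() - 1))];
    std::cout << name << ": avg " << avg_fps << " fps, 99th-percentile frame "
              << p99 << " ms\n";
}

int main() {
    // Run A: even 16.7 ms frames (a smooth ~60 fps).
    std::vector<double> smooth(100, 16.7);
    // Run B: mostly 12 ms frames with periodic 120 ms spikes -- a similar
    // average on paper, but visibly stuttery in practice.
    std::vector<double> spiky(100, 12.0);
    for (std::size_t i = 0; i < spiky.size(); i += 20) spiky[i] = 120.0;

    report("run A", smooth);
    report("run B", spiky);
}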
 

noob2222


So if you tell the program to run a thread on CPU X and only CPU X... it will randomly run or not run... makes perfect sense... especially when you program that particular program to use the same core over and over? What programmer does this? Then again, maybe you should quit trying to run 2-3 high-CPU-usage programs at the same time.

I'll run Skyrim, BF3, and World of Warcraft all at the same time... because that's the only way to test my program to see if it slows down.

If this is how programs are tested, no wonder they will never scale past 2 cores; you need the other 6 cores for the other 3 programs that are running.
 

noob2222


Rofl, you and your love for TechSpot's alternate wording for FPS measured in 1-frame intervals.

Like I said, all those "CPU bound" games are supported by Intel, posted on Intel's website as being optimized for Intel. That makes it OK to bench AMD hardware on them and expect the results to be accurate, simply because they run slower on AMD?
 