AMD CPU speculation... and expert conjecture


noob2222

Distinguished
Nov 19, 2007

For this reason, programmers don't see the need to try. It's too much time and effort for small gains, and no gains is better because it's cheap to copy/paste. Prime example: Oblivion to Skyrim. I'd say 80% of the actual game engine code is copy/paste.
 

mayankleoboy1

Distinguished
Aug 11, 2010


Time and effort cost money. So unless someone is explicitly paying the developer to add more parallelization, reusing code is 100 times easier.
Oh yeah, and while the developer is doing a complete recode of the engine, just ask him to hand-optimize the code in ASM.
 

viridiancrystal

Distinguished
Jul 27, 2011

Well, think of it this way: if you optimize your code better, you can make your game look better. If your game looks better, you have a great marketing point.

Just think, if a game like Starcraft II was optimized to scale really well to 4-6 cores, instead of having battles of 100 units, you could have 300 units on each side. That would just be good for the game. More fun to play, and more fun to watch (considering eSports).

I would love to see game developers pushing the limits, but they must not think it's worth the investment.
 

Cazalan

Distinguished
Sep 4, 2011



It's the limits of capitalism: when the shareholder becomes more important than the customer, it's all about maximizing profit for minimal effort.
 

Cazalan

Distinguished
Sep 4, 2011


Nothing new coming out until Q2. CES hits in 2 weeks though.
 

griptwister

Distinguished
Oct 7, 2012


I know! Should be interesting though... I'm just looking forward to the GPU side of things. I feel bad for AMD; I may just go Intel if they lower the prices on the i5 come June 2nd.
 

R0ck3tm4n

Honorable
Mar 4, 2012


If thread 1 performs (A + B) while thread 2 performs (D + F), and a third thread then performs ((thread 1 result) + (thread 2 result)), then some of that work was done in parallel.
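To make that concrete, here's a minimal sketch of the three-way split (just an illustration, assuming C++11 and std::async; A, B, D, and F are arbitrary placeholder values):

```cpp
// Minimal sketch: two partial sums run concurrently, a third task combines them.
#include <future>
#include <iostream>

int main() {
    int A = 1, B = 2, D = 3, F = 4;   // arbitrary example inputs

    // "Thread 1" and "thread 2" compute their additions in parallel.
    auto t1 = std::async(std::launch::async, [&] { return A + B; });
    auto t2 = std::async(std::launch::async, [&] { return D + F; });

    // "Thread 3" waits on both partial results and combines them.
    auto t3 = std::async(std::launch::async,
                         [&] { return t1.get() + t2.get(); });

    std::cout << t3.get() << '\n';    // prints 10 for these inputs
    return 0;
}
```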

I wrote why the experiment could not be duplicated. There was an electromagnetic component that the experimenters were unaware of which was essential to the experiment.

What we would like is for small pieces of work to be done by lots of CPU cores. This may not be possible. Perhaps larger pieces of work must be used, perhaps the problem would need to be set up differently, or perhaps we require an entirely new way of computing, either computationally or by design.
 

griptwister

Distinguished
Oct 7, 2012


As Samsung goes down the... **Flush**er

*edit* I think AMD will start doing better without that guy... hopefully.
 

griptwister

Distinguished
Oct 7, 2012


Wait... are we supposed to see new Vishera FX processors?

"And AMD will soon add the following products to the mix:

Vishera: FX
Richland: A10/A8/Athlon/PRO (socket APUs and BGA ULV APUs)
Kabini: A6/A4/E2/E1 (BGA only. So, AMD is going to make more than 90% of their notebook APUs without a socket. High hopes for Jaguar?)
Temash as well if it is another die, or even brand."
 

That's not directly related to a new Vishera CPU release. AMD released CPUs like the FX-8100, FX-8200, and "FX-8170" with higher/lower TDP and/or 100 MHz higher/lower clock rates. They're releasing a new FX CPU soon - the 95W FX-8300.
http://www.xbitlabs.com/news/cpu/display/20121226230100_AMD_s_Power_Efficient_Eight_Core_Desktop_Chip_Due_This_Week.html
More Vishera CPUs with different TDPs and clock rates will come out in the future. I assume you're thinking of an all-new full lineup of CPUs. If the rumors are on target, that won't be happening for the AM3+ platform this year. Here's a far-fetched and highly unlikely scenario: maybe AMD will release the current arch on 28nm without any tweaks... time will tell.
 

griptwister

Distinguished
Oct 7, 2012


Interesting... and it's going to be priced higher. I'm curious about the product's performance.
 


You know how Game Genie/GameShark works? They directly edit the memory address where some data is stored. Note this address NEVER changes; it's hardcoded in. THAT'S the level consoles are coded to.
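Conceptually, a cheat code is just "write value V at fixed address A". A minimal sketch of that idea (console RAM simulated as a plain array so the example actually runs; the address and the "lives" value are made up for illustration):

```cpp
// Sketch only: the cheat is a poke to a hardcoded address that never moves.
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    std::vector<std::uint8_t> ram(64 * 1024, 0);  // pretend 64 KB of console RAM

    const std::uint16_t LIVES_ADDR = 0x0A3F;      // made-up, but fixed for this "game"
    ram[LIVES_ADDR] = 3;                          // the game stores the player's lives here

    // The "cheat device": overwrite the hardcoded address directly.
    // No searching required, because the location never changes between runs.
    ram[LIVES_ADDR] = 99;

    std::cout << "lives = " << int(ram[LIVES_ADDR]) << '\n';  // prints 99
    return 0;
}
```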

I'm actually worried the next-gen consoles are TOO powerful; they'll be coded similarly to PCs, so you won't get the low-level optimizations you're used to seeing...
 


And how many extra sales would that bring, at the cost of how many man-hours? Never mind the game-breaking bugs due to recoding half the game engine every time we make a new title. Then you whine and complain about the game being a "console port". Just no pleasing some people, is there?
 


I was HOPING you would go there. If you actually tried that, you would find that the majority of the time, you'd lose performance due to the overhead of creating the threads for such a minimal task. Never mind the MASSIVE (in relative terms) performance hit if some higher-priority OS thread happened to come in and kick out one of those sub-threads before it finished, bringing the entire computation to a screeching halt.

So yeah, simple example here, but you can see how things quickly go off the rails.
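A rough sketch of that overhead point (a made-up micro-benchmark, nothing from a real engine; the exact numbers depend entirely on your machine and OS scheduler, but the thread version will generally be orders of magnitude slower for work this tiny):

```cpp
// Spawning two threads just to add four numbers costs far more than doing it directly.
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using Clock = std::chrono::steady_clock;
    using std::chrono::duration_cast;
    using std::chrono::microseconds;

    int a = 1, b = 2, c = 3, d = 4;
    int r1 = 0, r2 = 0;

    auto t0 = Clock::now();
    std::thread w1([&] { r1 = a + b; });   // thread creation dominates the cost here
    std::thread w2([&] { r2 = c + d; });
    w1.join();
    w2.join();
    int threaded = r1 + r2;
    auto t1 = Clock::now();

    int direct = (a + b) + (c + d);        // the same work, done inline
    auto t2 = Clock::now();

    std::cout << "threaded = " << threaded << " in "
              << duration_cast<microseconds>(t1 - t0).count() << " us\n";
    std::cout << "direct   = " << direct << " in "
              << duration_cast<microseconds>(t2 - t1).count() << " us\n";
    return 0;
}
```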

Look, we've TRIED going parallel since the '70s. MIT tried in the '80s to put together a machine with 10,000 CPUs wired together. They got about 10% running, then stopped, because they found that after about 32 or so CPUs, they simply COULD NOT SCALE ANYMORE. Aside from VERY specialized tasks (like rendering), you simply are not going to see entire programs scale. Hence OpenMP, which attempts to make PARTS of programs scale slightly better.
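For reference, a minimal OpenMP sketch of that "parallelize only the parts that scale" approach (assumes a compiler with OpenMP support, e.g. building with -fopenmp; the loop and data are invented for illustration, and without OpenMP the pragma is simply ignored and the loop runs serially):

```cpp
// Only the annotated loop is parallelized; the rest of the program stays single-threaded.
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> data(1000000, 1.0);
    double sum = 0.0;

    // OpenMP splits the iterations across the available cores and combines
    // the per-thread partial sums via the reduction clause.
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < (long)data.size(); ++i) {
        sum += data[i] * 2.0;
    }

    std::printf("sum = %f\n", sum);  // 2000000 for this input
    return 0;
}
```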

I wrote why the experiment could not be duplicated. There was an electromagnetic component that the experimenters were unaware of which was essential to the experiment.

Or, they failed to check their error bars before running the first test.

[image: the_economic_argument.png]


What we would like is for small pieces of work to be done by lots of CPU cores. This may not be possible. Perhaps larger pieces of work must be used, perhaps the problem would need to be set up differently, or perhaps we require an entirely new way of computing, either computationally or by design.

It's called "Quantum Computing". You'll hear more about it in about a decade.
 

viridiancrystal

Distinguished
Jul 27, 2011


I never said it would be cost-effective or easy; I just pointed out a benefit of well-optimized code. There are three things I think need to happen for developers to pour money into doing something about that:
1) Good competition in software development (which there is, as far as games go)
2) A good sum of money to put to use (right now, that's rare)
3) The need to - the point where the current number of threads used is simply not good enough (apparently not an issue)

Right now, it's not happening. One day, I think it will have to.
 