AMD CPU speculation... and expert conjecture

Status
Not open for further replies.
Maybe the HSA optimizations tilt it towards a non-dedicated scenario, though; at this point, we would have to see.
One could also argue that today's chips with HSA are capable of the work of several of yesterday's.

Oh, going waaaaaaay back, games usually used 3 cores; nowadays there are a few exceptions, and even back then there was one. So my guess/bet is 3 cores, due to the consoles' inability to free up more than that, which isn't the problem today, as none of us think it'll use more than half its resources.
 
Let's go back to Programming 101 here:

Windows Task Manager's %usage statistic measures the percentage of time the System Idle Process is not running on a core. The System Idle Process runs when no other thread can currently be executed. This could be due to a variety of reasons: a cache miss, a lock, a page fault, or whatnot, but regardless, at that instant, no thread can run.

What this means, essentially, is that if the %usage statistic for a core never reaches much above 80% or so, the CPU core is getting its work done faster than the OS can hand off threads. In short: you have no CPU bottleneck.
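As a toy illustration of the accounting described above (this is NOT how the NT kernel actually implements it; all names and probabilities here are made up for the sketch), one can simulate a core where each scheduler tick either runs a ready thread or hands the core to the idle process. The reported %usage is just the share of non-idle ticks:

```python
import random

def simulated_usage(p_thread_ready, ticks=10_000, seed=42):
    """Share of ticks where some thread was runnable, i.e. the
    'System Idle Process' did NOT get the core."""
    rng = random.Random(seed)
    busy = sum(1 for _ in range(ticks) if rng.random() < p_thread_ready)
    return 100.0 * busy / ticks

# A core that always has runnable work reads 100%; a core whose threads
# finish faster than the OS hands them off reads well under the ~80%
# threshold, which is the "no CPU bottleneck" case described above.
print(simulated_usage(1.0))   # 100.0
print(simulated_usage(0.4))   # roughly 40, far below any bottleneck
```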

Let's take a simple example here: a checkout counter [think a supermarket checkout here]. Each person at the counter can be considered a thread, and each counter can be considered a core. Now, say you have two counters and two people who need to be bagged. Simple example, but obviously, one person goes to each counter.

OK, you just expanded; you now have eight checkout counters. Meanwhile, you still only have two people who need to be bagged. Now, it's quite possible one of those two people has a MASSIVE amount of groceries, in which case it makes sense for one of the other counters to help out by splitting the workload in two. It's also quite possible both people have very few groceries, in which case the time needed to split up the workload will take longer than just using one counter apiece.
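The "very few groceries" case is easy to demonstrate: farm two tiny jobs out to a thread pool and the setup/hand-off overhead dwarfs the work itself. A minimal sketch (the job sizes and worker count are arbitrary, picked only to make the overhead visible):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def bag(n):
    # One "cart": a tiny bit of CPU work.
    return sum(i * i for i in range(n))

def serial_time(carts):
    # One counter per shopper, no coordination needed.
    t0 = time.perf_counter()
    for c in carts:
        bag(c)
    return time.perf_counter() - t0

def pooled_time(carts, workers=8):
    # Split the shoppers across eight "counters".
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(bag, carts))
    return time.perf_counter() - t0

tiny = [100, 100]  # two shoppers, very few groceries each
# Spinning up and coordinating the extra counters costs more than
# just bagging the items at one counter apiece:
print(serial_time(tiny) < pooled_time(tiny))
```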

Hence the line I take: If the CPU is already getting all its work done, why decrease performance by forcing threads to specific cores?

What people like 8350, noob, and others want is essentially core loading, where each core does about the same amount of work. What they are all forgetting is that if the cores are already getting their work done, doing this will not only NOT help performance, you will actually LOSE performance due to the overhead. But hey, Task Manager looks pretty.

So the "real" issue here isn't that "games don't scale", the issue is really that "games aren't doing more that uses these extra resources". That's the real problem.
 

8350rocks

Distinguished


HSA will be an integrated full scale feature of discrete HD 8000 GPUs due out Q4 2013/Q1 2014. AMD has already begun implementing this in discrete HD 7000 GPUs to a lesser degree. Sony capitalized on this when they chose the APU only because Kaveri will be the first full scale HSA integrated APU, and that technology could easily be adapted to Kabini.

SR FX series CPUs will be capable of working in tandem with HD 8000 GPUs at full parallel scale, and you're going to see Nvidia attempting to keep up somewhat... but I think CUDA is going in a somewhat different direction from the vastly better-supported OpenCL standard.

Intel is also going to be stuck in a bind, having to adapt all these AMD instruction sets to their hardware so they can support any modicum of HSA, as there are currently 3 AMD instruction sets on PD that are not on IB or Haswell, and there will be more coming with the SR release. This will push Intel's adoption of HSA even further down the road... meaning they likely won't have full HSA support until Skylake, and by then AMD will have solidified a much greater market share position because of the advancements on the HSA front.

The issue is Nvidia and Intel don't want to pay attention to who the founding HSA members are... it's basically everyone but them. So while they try to stay on top and push their products, the world is seeing new standards emerge, such as OpenCL and the HSA formats. Right now they're the companies with a ship that's too big to turn in time. What they saw as "interesting diversions" while they had superior market share will quickly show the world turning, and they will be stuck having to turn the cruise ship around while everyone else is sailing way off toward the horizon.

Now, will they catch up? Certainly, at some point, though they may lag a few years behind playing catch-up. However, with coffers as deep as Intel's and Nvidia's, they will get there eventually... even if they refuse to see the writing on the wall now.

 

viridiancrystal

Distinguished
Jul 27, 2011
444
0
18,790
Nice benchmark; I hear Battlefield 3 single-player is really stressful on graphics cards. However, I am fairly sure that we were discussing CPUs right now, so I do not see what relevance that benchmark holds here.
 
Making a 64-bit game is very doable, and discrete cards with 5 gigs or more of onboard memory are already anticipated.
Since tessellation is already implemented, as well as physics, these new consoles could very well use their resources; and again, if the big houses choose not to do so, for money's/profit's sake, the little guys will, as well as some larger groups.
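The 64-bit point is really just address-space arithmetic: a 32-bit pointer can only reach 4 GiB, which is already smaller than the hypothetical 5 GB of video memory mentioned above, before counting system RAM at all:

```python
GIB = 2**30
addressable_32bit = 2**32   # bytes reachable with a 32-bit pointer
vram_5gb = 5 * 10**9        # the speculated 5 GB card (illustrative figure)

# The entire 32-bit address space, shared by code, heap, and mappings:
print(addressable_32bit // GIB)       # 4 (GiB)

# A 5 GB frame buffer can't even be mapped once into it:
print(vram_5gb > addressable_32bit)   # True
```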

If having this higher ability/resource in their HW is only for dev ease, competition tells me it won't last.
 

8350rocks

Distinguished


No, the issue is, specifically, that games are not as parallel as they could or should be. Software lags far behind hardware, this is a well known and undisputed fact. Software needs to catch up to the hardware to utilize the optimizations that are available to it.

"Core loading" doesn't accomplish anything if something is bogged down on one core.
 

kettu

Distinguished
May 28, 2009
243
0
18,710

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Nope. It is a reflection of reality, and of why AMD's eight-core chips did not excel with the current generation of poorly threaded games.

From the link:

This (Sony) approach of more cores, lower clock, but out-of-order execution will alter the game engine design to be more parallel. If games want to get the most from the chips then they have to go 'wide'... they cannot rely on a powerful single-threaded CPU to run the game as first-gen PS3 and Xbox 360 games did.
 

Blandge

Distinguished
Aug 25, 2011
316
0
18,810


So your conclusion is that information straight from Intel is BS (Noting, of course, that all information leaving Intel publicly has to pass legal review), while citing evidence from what you consider to be incredibly reliable leaks from some Chinese website. Even if the leaks are legit (which is a big if), you don't know how early the silicon or drivers were that the leaked data was collected on. I'm willing to bet whoever ran those benchmarks probably doesn't have the best configuration for optimal performance.



Here's what he said:

Intel’s performance target for the highest end configuration (GT3e) is designed to go up against NVIDIA’s GeForce GT 650M, a performance target it will hit and miss depending on the benchmark.
Regardless of whether or not it wins every benchmark against the GT 650M, the fact that an Intel made GPU can be talked about in the same sentence as a performance mainstream part from NVIDIA is a big step forward.

Intel stole Nvidia's Apple business right out from under them with Iris. What more evidence do you need that Intel graphics can compete with the 650M? True, the 650M might perform better, but performance isn't the only metric.
 

kettu

Distinguished
May 28, 2009
243
0
18,710


I don't understand how HSA clashes with the idea of a GPU cache. I doubt they are going to get rid of the local cache in GPUs. Piledriver cores have local cache that is not shared with other cores (L1), or is shared only within a module (L2), or is shared among the whole chip (L3). What HSA is about is a unified address space in system RAM. As far as I know, at least.



You're oversimplifying the issue, in my opinion. I mean, is there a massive difference in this particular workload between CPU and GPU? Not all parallel workloads are created equal. On the other hand, in situations where the GPU is the bottleneck, overall throughput is increased this way. If a game is performing at ~30fps at 1080p on a 7970/680, I'd wager that most people are going to be GPU-bottlenecked. Also, anyone with a $400+ graphics card is likely going to have a CPU strong enough to handle the load. So I think it's a safe statistical bet that the overwhelming majority of gamers are going to benefit from their design choice.

Edit:
Thanks juanrga. Looks like PC still has some life in it. Then there's the other major segment: consoles. And AMD is a player in both segments... There's got to be some nice profit in there somewhere.
 

8350rocks

Distinguished


Actually, considering Intel is the king of synthetic benchmarks, they likely chose one specific best-case portion of the benchmark, since the results themselves were not based on a complete run. Intel is notorious for this. What will happen (watch and see) is that Intel will claim "similar to GTX 650M performance" and point to one specific scenario where they get similar performance. In reality, they will still be drastically behind Richland, and when Kaveri comes out, they will be miles behind again.

 

Blandge

Distinguished
Aug 25, 2011
316
0
18,810


"similar to GTX650M performance*"

* In 3dMark11 as measured by Intel

And then when their customers design a product based on Intel's hardware, the performance will be based on the capabilities of the platform. I don't see anything dishonest about this.
 

8350rocks

Distinguished


How about the fine print that states the results were not based on a complete benchmark? Cazalan noticed it as well...
 

Blandge

Distinguished
Aug 25, 2011
316
0
18,810


Then you should continue reading the fine print to where it says, "You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products."
 

griptwister

Distinguished
Oct 7, 2012
1,437
0
19,460
Lol, this might tick a few people off, so I'm going to post it anyway: my Ph II x4 840 (basically a Phenom II without L3 cache) can run Crysis 3 on "very high" settings at 1080p with my HD 7870 GHz card at mostly 30 FPS. Lol, I'm quite impressed. I do get lots of frame latency at certain areas in the game, though, but it never drops below 27 FPS. Can't wait to upgrade. It gets irritating in BioShock Infinite, in which I sometimes hit 90 FPS lol. IDK about you guys, but the real-world results are much different from benchmarks.

Just waiting on Kaveri... I really want to see how the CPU performs. I might just hold off till the FX Steamroller line, but sadly, I'm cheap.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Right, they want to show the 3x graphics increase by removing the Physics test from that benchmark. If you add the CPU test in, it just brings the number higher, which would actually make AMD look worse because Intel is really strong on that test.

The Physics score is about 2x what the A10-5800K pulls:
i7-3770K (8131) vs A10-5800K (4116)

http://www.legitreviews.com/article/2047/5/

 
To the previous posters about Intel and its size: the determination to break up a company is not based on its market worth or the value of its revenue, but on its effect on a market. Intel currently dominates the commodity computing market (learn the meaning of that term), with the only alternative being AMD. Everything else is HPC and specialized markets. This was the same question posed toward Bell, and the reason they were broken up into different companies. Anti-trust law prevents a company from forming a monopoly by buying up all its competition, and Intel is already close to that line via their domination of the market. Should they attempt to buy AMD, the deal would never get approved by the FTC, as it would create a monopoly.

I.e.: you can't go out and buy a SPARC / PPC for desktop work; they're not even made anymore. You also can't buy those for generic AD / Exchange / web / server work, as they run neither Windows nor RHEL.

-=Edit=-

Also, why in the hell are we attempting to use the Windows Task Manager to discuss consoles? Seriously now...
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


I just don't see what you're seeing. Maybe I am a bit too forward-thinking, but ARM proliferation is just rampant. They're estimating ARM core shipments of "17 billion in 2016". That's in 1 year!

These will be going into commodity computing as well as the vast majority of cell phones, tablets, and some laptops.

Less than 20% of the web runs on Microsoft, while Apache holds roughly 60%, and Apache runs just fine on ARM. The last big hurdle ARM had was 64-bit computing, and that's been solved.

Between iOS, Android, Chrome and other Linux derivatives WinTel is losing whatever dominance they had.
 

jdwii

Splendid


Fact? The site I read says otherwise.
 

cowboy44mag

Guest
Jan 24, 2013
315
0
10,810


^+1 I have been saying the same thing for years now. I have a Phenom II 965 BE and am currently running an HD 7970; Crysis 3 at ultra settings runs smooth, and I also run BioShock Infinite at ultra settings without any problems. Ask any Intel guy and he will tell you that a Phenom II can't game. I have heard for years how much better the 2500K is and how Intel bests every AMD processor, yet in the real world AMD does just fine. My 965 BE is due for upgrading after 4+ years of service, and so is the 2500K, so why is it so much better when they have enjoyed the same lifespan and were able to run all the same games at the same settings?

There are a lot of people basically saying that Steamroller won't be worth anything and will be blown away by Haswell, etc. But those same people never want to address the fact that a four-year-old AMD processor is still running the newest games at the highest settings possible. Without a doubt, I know Steamroller will be much better than my 965 BE.

There comes a point where you have to ask how much improvement you are getting for all the extra investment. That is a question Intel customers don't like to ask, as usually they get ~10 FPS improvement for an extra investment of $200-$300 (CPU and motherboard; let's face it, you're not going to spend the money for a high-end i7 and be crippled by a cheap motherboard). And what are you going to do with that extra 10 FPS that the human eye can't even detect? You're going to take that benchmark test, post it all over the internet, and boast about how much better you are for having that i7 LOL!!!
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


I got the data from here.

http://news.netcraft.com/archives/2012/09/10/september-2012-web-server-survey.html

And the company with the largest number of hosts (Amazon) is running 97.6% Linux. It would appear the bulk of cloud-based services are Linux and could migrate to ARM rapidly.

The days when small companies even needed to buy a computer to serve their own web pages are virtually gone. Hosting services are so cheap right now.
 

jdwii

Splendid



Thanks man. Weird that Microsoft gained market share during 2006-2008.
 


You can buy ARM laptops, which blurs the line for personal computers. It will be of no interest for Intel to buy AMD, but I'm thinking the monopoly laws won't affect Intel much longer at this rate, due to ARM competition.
 


Makes me feel like slamming my head on a desk.

You have different classes and markets for computing. As enthusiasts we just build things, but outside of our niche everything is offered as a "solution" and targeted at different market segments. Web servers are running Apache, yes, but they're doing that on Intel hardware. Core enterprise services tend to run Windows, specifically AD / Exchange / SharePoint as the three "must haves". Desktop services also run Windows; this is done to integrate with the core enterprise services previously mentioned. After that you have HPC and databases, which tend to be split depending on how much money you have and what capacity you need. At the very bottom end you have homegrown MySQL on Linux, the middle tier tends to be MS SQL, and the high end tends to be Oracle on SPARC and IBM on Power.

Commodity computing refers to cheap, disposable servers, primarily items from Dell, HP, and so forth. They're cheap ($25K or less, usually), have a high performance-to-cost ratio, and run cheap commercial off-the-shelf (COTS) software. There is no custom design and integration work to be done. When something breaks, it's often cheaper to replace the entire system than to try to replace the bad component.

To understand this, people need to realize the true driving cost in IT is not hardware, or even expensive custom software, but the administrative man-hours required to manage those services. Anything that reduces manpower requirements can end up being a big cost saver in the long run. Take web services, for example: a cheap webserver costs ~$5K, maybe. You run that server for three to five years, so its cost gets spread out over that period. How much are you paying the web developer, systems administrator, and networking engineer to manage that server(s)? This is how Microsoft sells their services: it all integrates together and significantly reduces the manpower required to manage it. The open-source world is desperately trying to catch up, but there are simply too many distributions and too many people wanting to do things their own way. RHEL is pretty much the only entity that offers comparable enterprise capabilities and support, and even their answers were unsatisfactory when we reviewed them.
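Rough back-of-the-envelope numbers make the point. The salary and time fraction below are purely illustrative assumptions, not figures from the post or any survey; only the $5K server and 3-5 year life come from the example above:

```python
server_cost = 5_000            # the cheap webserver from the example above
service_years = 4              # amortized over a 3-5 year life
hw_per_year = server_cost / service_years        # $1,250/yr of hardware

# Hypothetical: one admin at $80k fully loaded spends 15% of the
# year caring for that server and its software stack.
admin_cost = 80_000
time_fraction = 0.15
labor_per_year = admin_cost * time_fraction      # $12,000/yr of labor

# Manpower dwarfs the amortized hardware cost by roughly 10x:
print(labor_per_year / hw_per_year)
```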

Anyhow, Intel practically dominates the cheap server market, which comprises the vast majority of core IT services in the world. External webservers running Apache are pretty much the only exception to this, and I believe that has to do with how badly IIS sucks. Friends don't let friends use MS IIS.
 