AMD CPU speculation... and expert conjecture


mayankleoboy1

Distinguished
Aug 11, 2010
2,497
0
19,810


There is almost no general-user FOSS that uses these fancy instructions, so your supposition is mostly theoretical.
And most of the programs that do use them get their best performance from SSE3 and gain nothing from SSE4/4.1/4.2.


Maybe AMD should focus first on improving parallelism on the CPU itself before going all-in on HSA. The fancy instructions they add to processors are mostly unused. Maybe invest in hiring some devs to improve GCC's auto-vectorizing. Wait, they don't have that much money.
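For reference, the kind of loop GCC's auto-vectorizer already handles looks something like this (a minimal sketch; the file name and build flags are only illustrative, and `-fopt-info-vec` just asks GCC to report which loops it vectorized):

```cpp
// saxpy.cpp -- a loop shape GCC can usually auto-vectorize at -O3.
// Illustrative build line:  g++ -O3 -march=native -fopt-info-vec saxpy.cpp
#include <cstdio>
#include <vector>

// Plain indexed loop over contiguous arrays: this is the pattern the
// vectorizer recognizes and turns into SSE/AVX code on its own.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = a * x[i] + y[i];
}

int main() {
    std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
    saxpy(3.0f, x, y);
    std::printf("y[0] = %f\n", y[0]);   // expect 5.0
    return 0;
}
```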
 

cowboy44mag

Guest
Jan 24, 2013
315
0
10,810
Can I ask a really stupid question? When did this topic change from AMD Steamroller speculation to a Haswell worship page? I mean seriously, this is supposed to be educated speculation as to what Steamroller will offer and how it will perform. I would like to see speculation of Steamroller compared to Piledriver and Bulldozer, not compared to Haswell. The moment Steamroller releases it will be benchmarked and directly compared to Sandy Bridge, Ivy Bridge and Haswell, so there is no need for all the Intel fanboyism trolling at this point in time.

We all get that Intel fanboys are really into performance per watt and saving money. What a joke!! By the time you build a high-end Intel system you are easily at double the cost of a high-end AMD system. It reminds me of my brain-dead neighbor bragging to everyone that half his house is powered totally by solar grids. He paid $85,000 for his entire solar hookup, which only powers half his property and so far has needed $2,000 worth of repair work in the 2 1/2 years it's been there. But he is saving money!! He would have to live to be 120 before that system pays for itself!! You have the same thing with Haswell. By the time that computer's performance per watt actually pays off in the electric bill, compared to the price difference of an AMD system, that will be how many years down the road? Let's face it, Intel supremacists will have upgraded twice by then, so it NEVER pays off!! Every time Intel comes out with a new processor it requires a new motherboard... $$$$ on top of $$$$$.... Don't act like you're trying to save money; you buy Intel, you have money to burn, plain and simple.

Does anyone actually have any Steamroller news? Or at least any AMD related topics that don't include Intel in any way? Really tired of coming here to get some Steamroller updates or AMD news only to read post after post after post about performance per watt and Intel. It makes about as much sense as going to a Chevy dealership with the express purpose of buying a brand new Ford!!
 

^This
 

os2wiz

Distinguished
Sep 17, 2012
115
0
18,680


The problem is the facts get in the way of your interpretation. Haswell only has a 5% and in some instances 6% improvement over IB. You are so fixated on single-thread performance you will never learn. In most top-notch new apps whether they be games or productivity apps multi-threading is becoming more prevalent. There are very few games or productivity apps I use that do not have multi-core support. Only the mentally challenged use WinRAR over its superior WinZip competitor. In the games realm Skyrim is becoming the exception to the rule. The most popular games are going for multi-core support because that is the future. Most games are ported from console to desktop, not the other way around. Now that AMD has multi-core processors in all the new console designs, the PC ports will support multi-core.
This was a major coup. At the AMD developer conference in November, the fruition of all this hard work by AMD will be unfolded in their new roadmap. Yes, Steamroller will be delayed, but at its release it will be a very explosive chip. It may well be the last dedicated CPU released by AMD, but that is only because of major advances unfolding in their HSA APU architecture. The greater use of GDDR5 memory in the graphics cards will be utilized not only for graphics but for many applications, including Premiere and Photoshop. This will negate the advantage that Intel has with their larger CPU caches. Adobe is committed to programming for the new HSA architecture, as are many other developers. This is the type of innovation that AMD excels at and that Intel, the monopolist, shies away from and fears.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


Such amazing idle power from Hasbeen that it's 5 watts more efficient than AMD's 8350... ya... 5!

[image: power-1.png — idle power consumption chart]


Amazing that Intel put sooo much time, effort, and money into trying to reduce power consumption, and their far-superior-to-anything-else-in-the-industry 22nm is barely better than AMD's 32nm. AMAZING that Intel actually does that badly against their pre-Hasbeen claim of 20x less power.

You're so stuck on energy consumption that I'm willing to bet you drive 40 km/h everywhere you go because that's the most efficient speed. Then again, you probably don't even have a car.

[image: power-3.png — power consumption chart]


Kinda funny, bit-tech's overclocked numbers are nearly what xbit's stock numbers are. I wonder who is lying to you; shouldn't be hard to figure out, one is a British web page.




[image: perfwatt.gif — performance-per-watt chart]


AMD has very good PPW on the 7000-series cards, aside from the GHz Edition, which wasn't designed for PPW so much as raw power. Even the standard 7970 competes with all of Nvidia's cards.

The Titan fares very miserably on value.

[image: perfdollar.gif — performance-per-dollar chart]


Combine the two charts and the 7750, 7850, and 7870 are at the top for value in both the short and the long run.
 
The problem is the facts get in the way of your interpretation. Haswell only has a 5% and in some instances 6% improvement over IB.

Which was expected, given the focus on the GPU, which is showing a good 25-40% improvement.

You are so fixated on single-thread performance you will never learn.

The majority of software tasks do not scale well beyond a handful of processors. I simply cannot stress this point enough. And the stuff that does scale is going to be offloaded to the GPU via some API (OpenCL, etc.), leaving the CPU with the "single-threaded" software for the most part.
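That ceiling is just Amdahl's law. A quick back-of-the-envelope sketch (the parallel fractions below are made-up illustrative numbers, not measurements of any real program):

```cpp
// amdahl.cpp -- Amdahl's law: speedup = 1 / ((1 - p) + p / n),
// where p is the parallel fraction of the work and n is the core count.
#include <cstdio>

double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    const double fractions[] = {0.50, 0.75, 0.90};   // hypothetical parallel fractions
    const int    cores[]     = {2, 4, 8, 16};
    for (double p : fractions) {
        std::printf("p = %.2f:", p);
        for (int n : cores)
            std::printf("  %2d cores -> %.2fx", n, amdahl(n ? p : 0, n) );
        std::printf("\n");
    }
    // Even at p = 0.90 the ceiling is 10x no matter how many cores you add,
    // which is why "just add cores" stops paying off for most desktop software.
    return 0;
}
```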

In most top-notch new apps whether they be games or productivity apps multi-threading is becoming more prevalent.

For the most part though (Crysis 3 aside, and I've made my points on that approach very clear), notice how AMD doesn't gain a performance lead. Why? Because the CPU isn't driving performance, and it hasn't been for some time now. I'd imagine a C2Q/PII could still pump out some surprising numbers if anyone was willing to test them...

There are very few games or productivity apps I use that do not have multi-core support.

Just about every Windows application ever created has multi-core support. The scheduler, not the program, assigns threads to cores.
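A quick way to see this for yourself, assuming a Linux box (the sketch relies on glibc's `sched_getcpu`, so it won't build elsewhere): the program only creates threads; it never says which core they run on.

```cpp
// threads_on_cores.cpp -- the program just creates threads; the OS scheduler
// decides which core each one runs on. Linux/glibc-only sketch.
// Illustrative build line: g++ -O2 -pthread threads_on_cores.cpp
#include <cstdio>
#include <sched.h>
#include <thread>
#include <vector>

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([i] {
            // Nothing in this code picks a core; the kernel placed us here.
            std::printf("thread %d is currently on core %d\n", i, sched_getcpu());
        });
    }
    for (auto& t : workers) t.join();
    return 0;
}
```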

Only the mentally challenged use WinRAR over its superior WinZip competitor.

Or those with significant license or security restrictions, which means 99% of corporations out there.

The most popular games are going for multi-core support because that is the future.

We'll see, though I doubt it. More likely, we'll gradually see more OpenCL/CUDA(PhysX)/DirectCompute offloading to the GPU, but I really don't see the CPU doing much more than it's doing now. Still expecting to see 2-3 threads doing 99% of the work.

Most games are ported from console to desktop, not the other way around. Now that AMD has multi-core processors in all the new console designs, the PC ports will support multi-core.

So did the last generation, remember? Both the 360 and PS3 supported six simultaneous threads. In 2006.

...oh wait, we're still using 2 on the PC. That's right... Coding to the metal allows significant performance optimizations you simply can NOT make on a general-purpose PC.

The greater use of GDDR5 memory in the graphics cards will be utilized not only for graphics but for many applications, including Premiere and Photoshop. This will negate the advantage that Intel has with their larger CPU caches.

Not always. Remember that cache is only a hack to work around (relatively) poor memory throughput. If the data you need is not resident in RAM, then you still need to go to the HDD, so that GDDR5 isn't going to help performance any. Native 64-bit apps would offer better performance improvements than GDDR5 would. Never mind the latency concerns you would encounter with the higher latency of GDDR5 compared to DDR3... (I can't stress this enough: Windows was NOT designed to handle high-latency memory access, and I can't help but suspect it will not play nice...)
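To make the latency point concrete, here is a rough sketch contrasting a bandwidth-friendly streaming pass with a dependent pointer chase, where every load has to wait out the full memory latency. Sizes and timings are illustrative only:

```cpp
// latency_vs_bandwidth.cpp -- why memory *latency* matters for CPU-style work
// even when raw bandwidth (GDDR5's strength) is plentiful.
// Illustrative build line: g++ -O2 latency_vs_bandwidth.cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const std::size_t N = 1 << 24;   // ~16M entries, far larger than any CPU cache

    // Streaming access: independent loads, prefetch-friendly, bandwidth-bound.
    std::vector<int> data(N, 1);
    auto t0 = std::chrono::steady_clock::now();
    long long sum = std::accumulate(data.begin(), data.end(), 0LL);
    auto t1 = std::chrono::steady_clock::now();

    // Pointer chase: every load depends on the previous one, so each cache miss
    // costs a full round trip to memory. Latency is all that matters here.
    std::vector<std::size_t> order(N), next(N);
    std::iota(order.begin(), order.end(), std::size_t{0});
    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});
    for (std::size_t i = 0; i < N; ++i)
        next[order[i]] = order[(i + 1) % N];       // one cycle visiting every slot
    std::size_t idx = 0;
    auto t2 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < N; ++i) idx = next[idx];
    auto t3 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::printf("streaming sum : %lld in %.1f ms\n", sum, ms(t1 - t0).count());
    std::printf("pointer chase : end=%zu in %.1f ms\n", idx, ms(t3 - t2).count());
    return 0;
}
```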

Adobe is committed to programming for the new HSA architecture, as are many other developers. This is the type of innovation that AMD excels at and that Intel, the monopolist, shies away from and fears.

"Coding for HSA", which basically means "OpenCL". HSA is invisible to the developer; its all handled on the OS side of the house.
 

cowboy44mag

Guest
Jan 24, 2013
315
0
10,810


If all that crap was totally and completely 100% true, then why waste all the tech going into the new PS4? Why not just put in a cutting-edge GPU, increased RAM, and an old outdated tri-core, six-thread CPU? Let's face it, if Sony or M$ can save a penny here or there they will jump at it. Using old outdated tri-core processors would be much, much cheaper. And those old processors have better single-core execution than the underclocked 8-core processor of the PS4.

If single-thread power is going to be much more important than multi-thread, then why would Sony and M$ both go with 8-core AMD processors? If single-thread power is so very important, they would have been much better off going with older Sandy Bridge-style processors that have great single-thread execution. But they went with 8-core AMD processors, which don't have very good single-thread execution.

Even though console systems are easier on resources than their desktop counterparts (programming-wise), new games produced for the PS4 are going to have to be heavily multi-threaded to take advantage of the hardware's strong suits. The old consoles are both running older processors, which is why single-core execution is still important to video games right now. Two, three, four years down the road, multicore and 6- to 8-core systems (12 to 16 threads) are going to be what is important. Let's not forget the old consoles had only 3 real cores at maximum; the new consoles are sporting 8.

That is the natural progression of new technology. Something new and revolutionary comes out, the old known tech outshines it for a while until the "bugs" are worked out, and before long the old tech isn't used anymore. Take a look at the first fuel-injection systems. Four-barrel carb systems were blowing them out of the water when they debuted; now all cars and trucks are fuel injected. New technology and newer, better ways of doing things always take a while to catch on. We have just started to see the very first examples of multi-threaded games; there will be many more to follow.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


AMD has 2nd quarter earnings report July 18th. They are in a quiet period now but we should hear news then.

The only real leak I've seen in the last month is the codenames for Kaveri, which are Spectre and Spooky, with a similar range of TDPs (17/25/35/65/100W).

http://semiaccurate.com/2013/06/18/a-glimpse-of-future-amd-graphics-offerings/

The main thing to conclude from that is that AMD is using all of the die shrink (32→28nm) to increase performance instead of reducing power usage. Those TDPs are identical to existing Richland. Probably a good thing, as they need a performance boost.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


I don't get your fixation with Crysis 3. They offloaded the work from one thing that was bottlenecking performance (the GPU) and placed it on something that wasn't bottlenecking (all those extra CPU cores that are sitting around doing nothing).

If you have a bottleneck, you move things away from what's bottlenecking and put them on things that aren't bottlenecking.

If you're being bottlenecked by the GPU, you don't keep giving the GPU more work to do and giving the rest of the system less work to do, that's not efficient use of system resources at all. Ideally you'd want the entire system at 100% utilization as that'd mean there were no specific bottlenecks and the system was running at optimal usage.

Yes, I realize you think you're some sort of awesome game developer and your mind is exploding because they did something non-traditional, but the bottom line is that they relieved the GPU from some work when the GPU is bottlenecking the system, which is what you want to do to maximize performance as it frees up the GPU to do more work.
 

cowboy44mag

Guest
Jan 24, 2013
315
0
10,810


:D Thank you!! It's nice seeing Steamroller / AMD news on this Steamroller thread!! :D
 

8350rocks

Distinguished


AMEN!

I would really like a "squelch" or "ignore" feature on this forum...of course, half the posts on the past 2-3 pages would likely have been "squelched" at this point...LOL.
 

cowboy44mag

Guest
Jan 24, 2013
315
0
10,810


It would sure be nice to just get the info about the actual tech rather than hear a bunch of fanboys trying to justify their purchases. Not to get off topic, but it reminds me of the custom revolver fan club at the range I go to. Their handguns cost much more than my Ruger, and they expect that for the money they spent the bullets will magically float to the bullseye. When I outshoot them they get "pissy", and then they really get "pissy" when I pull out a box of Buffalo Bore +P+, because if they could fit it in their revolvers it would blow up in their hands. But theirs has to be better because it costs more :pt1cable:

Back on topic, I have a feeling that half of the Intel fanboy "experts" don't realize that the performance advantage of Intel processors basically comes down to a few FPS in gaming, where it's such a small difference the human eye can't even detect it, and processes that complete 5-10 seconds faster. When they do realize that for all their extra investment that is all the performance "gain" they got over top-end AMD, they turn to bar graphs and synthetic benchmarks, which make the small gains look much larger than they are.
 

jdwii

Splendid


In 80% of games, yes, but sometimes it's like 30% slower (and yes, dips below 60 FPS). I own an AMD CPU and I'll most likely continue to as long as they offer the best price/performance CPU. But I stick to what I say: an FX-8350 is clocked at 4.0GHz and an Ivy Bridge i5 is clocked at 3.4GHz (17% lower), and Ivy is still at least 10-20% faster in single-core performance, which is why I come to the conclusion that AMD is 30% slower in performance per clock. That is extremely important in lots of areas, and nobody is going to convince me otherwise. AMD knows this is true; that is why Steamroller is going to focus on improving single-core performance.

If AMD improves performance per clock by even 15-20%, they will be even with Intel at their current clock rates, and if priced right they will kill them in price/performance.
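Spelling out that arithmetic (the 10-20% single-thread figures and the 4.0 vs 3.4GHz clocks are the ones quoted above, not new measurements):

```cpp
// per_clock.cpp -- the per-clock arithmetic made explicit.
// Speed ratios of 1.10-1.20 are the claimed "i5 is 10-20% faster single-threaded".
#include <cstdio>

int main() {
    const double fx_clock = 4.0, i5_clock = 3.4;        // GHz, stock clocks
    const double i5_speed_ratios[] = {1.10, 1.20};      // i5 vs FX, single thread
    for (double r : i5_speed_ratios) {
        // per-clock (IPC) ratio = speed ratio * clock ratio in the i5's favor
        double ipc_ratio = r * (fx_clock / i5_clock);
        std::printf("i5 %.0f%% faster overall -> ~%.0f%% faster per clock "
                    "(FX roughly %.0f%% behind per clock)\n",
                    (r - 1.0) * 100.0, (ipc_ratio - 1.0) * 100.0,
                    (1.0 - 1.0 / ipc_ratio) * 100.0);
    }
    return 0;   // the 20% case works out to roughly the "~30% per clock" claim
}
```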
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Trying to stop fanboyism is like trying to stop professional sports. It's ingrained in our culture to pick a favorite team, root for them, and live vicariously through them. Why grown people do this is beyond me, but that's the world we live in. It's encouraged by the mainstream media from cradle to grave.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


That invalidates the argument. He has stated before that a bottleneck must exist on the GPU, otherwise it's just a crap program rather than a program designed to take advantage of ALL resources. It also makes an Intel i3 look like poop, which goes against the agenda of keeping software "low-end friendly".

As I have stated, once 4K, and much less 8K, starts to show up at a reasonable price (~2 years or less; off brands are already down to less than $1000 here), the GPU is going to be working extra overtime without any offloads just to hit 30fps.

This is 3K resolution:

[image: metro_lastlight_5760_1080.gif — Metro: Last Light benchmark at 5760×1080]


4K is going to be murder on even tomorrow's GPUs.

At 8K right now you're probably looking at single-digit fps on the lowest settings, but that resolution is reserved for 85" screens and up currently.
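The "murder on GPUs" part is mostly just pixel count; fragment work scales roughly with it, everything else being equal. A quick tally:

```cpp
// pixels.cpp -- pixel-count arithmetic behind "4K/8K will murder GPUs".
#include <cstdio>

int main() {
    struct Mode { const char* name; int w, h; };
    const Mode modes[] = {
        {"1080p",            1920, 1080},
        {"5760x1080 (3x1)",  5760, 1080},
        {"4K UHD",           3840, 2160},
        {"8K UHD",           7680, 4320},
    };
    const double base = 1920.0 * 1080.0;
    for (const Mode& m : modes) {
        double px = double(m.w) * m.h;
        std::printf("%-17s %10.0f pixels  (%.1fx 1080p)\n", m.name, px, px / base);
    }
    return 0;   // 3x1 surround is ~3x the pixels, 4K is ~4x, 8K is ~16x
}
```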
 

8350rocks

Distinguished
Well, GPU utilization in Crysis 3 is extremely high already, though. For the original argument to hold water, you'd have to be able to prove that the 90+% GPU utilization encountered by all cards that run it is somehow not "efficient" while offloading the rest onto the CPU.

Crysis 3 makes top-end gaming rigs cry like a little girl. It just is what it is... GPUs are not given a free pass on that game... the CPU certainly isn't either.
 
If all that crap was totally and completely 100% true, then why waste all the tech going into the new PS4? Why not just put in a cutting-edge GPU, increased RAM, and an old outdated tri-core, six-thread CPU? Let's face it, if Sony or M$ can save a penny here or there they will jump at it. Using old outdated tri-core processors would be much, much cheaper. And those old processors have better single-core execution than the underclocked 8-core processor of the PS4.

Remember that integrated hardware is NOT the same as a general-purpose PC. You can code a LOT lower level, in which case you can make all sorts of optimizations you can't on a PC, since you can basically guarantee execution timings down to a few ns or so. You can make assumptions about what data is in the CPU cache, specify exactly where contents are in RAM, etc. It's those optimizations which have allowed the 360/PS3 to hang around LONG after their hardware (particularly the GPU) was obsolete.

Point being, because you have a SINGLE hardware spec, you can, if you needed to, force-feed the individual CPU cores raw x86 assembly instructions, without having to worry about messing up the global memory state. You have VERY fine control over the performance characteristics of the system. So it makes sense to go overkill on resources, especially since Sony/MSFT want to have the consoles hang around for a decade. [Remember, the GPUs will be considered slow in 2 years, and obsolete in four, if the rate of GPU improvements holds.]

If single-thread power is going to be much more important than multi-thread, then why would Sony and M$ both go with 8-core AMD processors? If single-thread power is so very important, they would have been much better off going with older Sandy Bridge-style processors that have great single-thread execution. But they went with 8-core AMD processors, which don't have very good single-thread execution.

Price? Power? The number of parts ready to be shipped? Bribes? (Hey, stranger things have happened).

Let's not forget the old consoles had only 3 real cores at maximum; the new consoles are sporting 8.

No.

For the last time, the PS3 had a single PPE with 8 SPEs: one reserved for the OS, one disabled due to yields, leaving SIX SPEs for developers to play with. The 360 had a tri-core CPU with 2-way SMT support, enabling up to 6 threads to be executed at a time in hardware.

I just laugh when people don't know what they are talking about, and repeat the arguments made back in 2006 when the 360/PS3 hardware started to get leaked.


I don't get your fixation with Crysis 3. They offloaded the work from one thing that was bottlenecking performance (the GPU) and placed it on something that wasn't bottlenecking (all those extra CPU cores that are sitting around doing nothing).

You just broke the fundamental rule of PC's: You, the developer, can not make any assumptions about resource usage.

Look at it this way: Crysis 3 produces the same FPS between SB, IB, and FX. Why? Because the GPU is the bottleneck. So you argue, "OK, we'll offload some of the work to the CPU, because it isn't doing anything."

Now, look at IB's case: You have four cores close to saturation (~80% all four cores). Essentially, the CPU is barely keeping up with its tasks. Now your AV scanner decides to run.

Crap, you now have a massive CPU bottleneck, your GPU is sitting around waiting for the CPU to catch up, and performance just fell off a cliff. And the performance loss will be a hell of a lot more than simply letting the GPU do its job.

And before anyone complains this isn't realistic: it's an example. How many people here have been going on about how they want to play games while they do encoding in the background? Same concept: in Crysis 3's case, if any other application takes away a core, performance plummets. That's why the approach of making the CPU do more work is wrong.
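A rough way to see the mechanism (the workload is a made-up stand-in, and the exact numbers will differ from machine to machine): time a set of busy threads that just fills the cores, then add one more competitor and watch the wall time climb.

```cpp
// oversubscribe.cpp -- sketch of the scenario above: a CPU already near
// saturation loses time to one extra background task and "frame" time suffers.
// Illustrative build line: g++ -O2 -pthread oversubscribe.cpp
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Fixed chunk of busy work standing in for one thread's share of a frame.
static void frame_work() {
    volatile double x = 0.0;
    for (int i = 0; i < 40000000; ++i) x += 1e-9 * i;
}

static double run(unsigned threads) {
    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < threads; ++i) pool.emplace_back(frame_work);
    for (auto& t : pool) t.join();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 4;   // fall back if the runtime can't report it
    std::printf("hardware threads: %u\n", cores);
    std::printf("game threads only        : %.0f ms\n", run(cores));
    std::printf("game threads + 1 stealer : %.0f ms\n", run(cores + 1));
    return 0;
}
```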
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
^^ So what you're saying is that the end-user is the reason you can't program better? Because they might do something the program wasn't intended to be used for... and the only solution is to make the program idiot-proof?

I'd go the other approach: program for efficiency, and if the end-user plays Crysis 3, BF3, and Metro 2033 all at the same time, well, that's his own stupidity. Don't dumb your program down just in case someone wants to do something stupid.

Then again, stupid is what holds the entire industry back.

Edit: What if someone decides to install Linux or Windows on the PS4 and then can't get 2 games to run simultaneously; is that Sony's or the game developer's fault?
 

lilcinw

Distinguished
Jan 25, 2011
833
0
19,010


I wonder how long it is going to take someone to figure out how to turn these things into steam boxes.
 


Well, the SPEs are not general-purpose CPUs per se, so you don't really have "6 cores" in the PS3. I get your point, but in the particular case of the PS3, it's kind of oversimplified, my friend.

Other than that, the SMT in the XB360 is close to what you get with HT in desktop CPUs, I'd say, so not *really* 6 cores. More like 3 heavy-duty cores and 3 "spare parts", haha. Yeah, yeah, just bear with me on that :p

Now, the PS4 and XB1(80) will have 8 *real* cores. Slow ones, but real "heavy duty" cores. That is certainly different from what we had previously (even N64, PS1, XB, etc, etc).

I'm just waiting to see what the new engines will come up with in terms of CPU+GPU usage, because, like we argued/discussed some time ago regarding RAGE's id Tech 5 idea, I, for one, applaud the "hey, this is under-utilized, let's use it" approach. In that respect, I imagine AMD has a good upper hand on how to optimize their stuff for games going down the road. In particular, and I really hope so, starting with SR's Kaveri.

Cheers!
 

griptwister

Distinguished
Oct 7, 2012
1,437
0
19,460
A little piece of interesting news for you guys...

http://archive.benchmarkreviews.com/index.php?option=com_content&task=view&id=962&Itemid=63&limit=1&limitstart=6

Also, I thought you guys might like to hear: I recently helped a friend build my dream rig (well, almost my dream rig, just replace the GPU with a 690). He has an FX-8350 on an ASUS Sabertooth R2.0 Gen 3 board and a GTX 770. I see no bottlenecks... in any game he plays. I'm thinking about letting him use my Origin account and playing Crysis 3 on the highest possible settings. He gets like 120FPS on average in WoW. I know WoW isn't a demanding game, but it's amazing nonetheless. I see no need for anything more than an 8350 for gaming, especially since it's $100 cheaper than an i7 and it doesn't bottleneck the 770.
 


He's fixated on it because Crysis 3 is a game that totally broke his "games can't use more than 1~2 cores!" statement. Essentially Gamer was making the argument that the only performance that really mattered was single-thread performance, because programs are too hard to make multi-threaded.

A while back I made the statement that the era of "all you need is a dual core" is officially over. Tek Syndicate also released some info that pretty much had the FX-8350 winning lots of high-end gaming benchmarks vs the "favored CPUs", especially when you started to do live-streaming. They broke out of the sterile "clean room" environment, actually benchmarked how we play, and got different results than the other sites. This was similar to what happened when BF3 was benchmarked in multiplayer with large maps vs single-player mode.

Basically there are many things you can use additional processor resources for, and developers are starting to develop for that. Opinions and sports-team loyalty will not stop the progress of technology. That is what makes SR exciting: AMD is getting over the initial R&D curve with its new uArch and tweaking it / tightening it down. Lowering latencies, lowering power usage, finding bottlenecks and inefficiencies and tightening them up.
 

Tuishimi

Distinguished
May 17, 2011
106
0
18,690
I play Skyrim on Ultra+ settings (the plus being extras) with tons of mods on my FX-8350 and my Nvidia GTX 770. I see no difference between that and my previous 2600K + 670 build.

[edit]

I play at 1080p.
 

cowboy44mag

Guest
Jan 24, 2013
315
0
10,810


^+1 I don't understand how anyone can't see that the future of video gaming is multi-threading with real cores. Dual-core "gaming rigs" are a thing of the past, not the future. The future is 6- and 8-core systems. I can't wait for Steamroller FX!! :D
 