AMD CPU speculation... and expert conjecture


+1 This is the end-all be-all answer to basically all the Intel fanboys on this thread and the reason for the thread in the first place.
 

cowboy44mag

Guest
Jan 24, 2013
315
0
10,810


You MISSED the most important word in my post: FUTURE. I'm not talking about the games that have already been produced or the games in development right now. They are all (or at least the vast majority of them) built to the old standard, produced to play on the old, outdated PS3 and 360 consoles and stuck utilizing mainly one or two cores. Once developers start optimizing for the PS4 and XBone, then we will see what the new generation of gaming brings. The PS3 and 360 consoles both utilize Intel processors, therefore all the games produced for them were optimized to run on Intel processors. Of course Intel gaming rigs run them a little better; they were optimized for that arch. The new PS4 and XBone are AMD based, and will produce games optimized for the AMD arch. It will be the first time ever that Intel systems have to run programs that aren't optimized for their arch but rather for AMD's.

Optimizing for the AMD arch means employing at least 4, and more probably 6, cores (like Crysis 3). In my opinion dual-core gaming rigs will become as obsolete as wooden hubs on a car once the industry starts producing games only for the PS4 and XBone, with PC ports of course. I mean, how well does an i3 play Crysis 3 compared to the FX-8350? That is the future. :D
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810
With the feature enhancements AMD has released in slides for SR, you can expect SR (2 modules) to act more like a true 4C/4T CPU than its current 2C/4T counterpart (FX-4350). The doubling of the decode units does that.

To approximate its integer performance you could take an FX-8350 and disable the odd cores (1/3/5/7), then add 10-20% to account for the cache and clock speed enhancements.
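A minimal sketch of that approximation on Linux, assuming cores (0,1), (2,3), (4,5), (6,7) map to the FX-8350's module pairs; the benchmark binary and the baseline score are placeholders, and the 10-20% scaling is just the figure quoted above:

# Rough Linux-only sketch of the "disable the odd cores" approximation:
# pin the benchmark to one core per module, then scale the result by 10-20%.
import os
import subprocess

one_core_per_module = {0, 2, 4, 6}
os.sched_setaffinity(0, one_core_per_module)   # this process and its children

subprocess.run(["./my_int_benchmark"])         # placeholder benchmark binary

measured_score = 100.0                         # placeholder result from the run above
print("Steamroller estimate:",
      measured_score * 1.10, "to", measured_score * 1.20)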
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Nope. A 15% improvement was claimed for the CPU; that is about 3x what was finally delivered. The focus on the GPU is irrelevant, because all desktop i5 and i7 users will use dGPUs, and most notebook OEMs are using mobile dGPUs instead of GT3e because Haswell graphics are power hungry, expensive, and slower than existing solutions.



Nope. It is lazy programmers who prefer single-threaded code, because it is easier for them. Lots of ordinary software tasks scale well and can go the multithreading route.
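For illustration, a minimal sketch of one such ordinary task going multi-core, hashing a batch of files with one worker process per core; the file names are placeholders:

# Hash a batch of files in parallel with a process pool (one worker per core).
import hashlib
from multiprocessing import Pool

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return path, h.hexdigest()

if __name__ == "__main__":
    files = ["a.bin", "b.bin", "c.bin", "d.bin"]   # placeholder file names
    with Pool() as pool:                            # defaults to one worker per core
        for path, digest in pool.map(sha256_of, files):
            print(path, digest)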



The FX-8350 outperforms the i7-3770K in multithreaded tasks. What is more, the FX-8350 can outperform the Intel Extreme series (e.g. the i7-3960X) in some benchmarks. AMD's problem has never been a lack of performance, but the ancient software (especially on Windows).



The same corporations that have security problems one day and the next because they are using Windows? Or the same ones that are storing their documents in a proprietary, closed format controlled by Microsoft instead of using an open document format?



Where have you been the last few years? Tom's gaming CPU hierarchy has been moving AMD multi-core chips up because more and more games are using the extra cores. Today dual cores are outdated for gaming, and the next consoles will make quad cores outdated as well.



Nope. The PS3 was a general-purpose single-core (single-thread) design assisted by a bunch of specialised coprocessors for some workloads. The current PS4 has eight general-purpose cores which can be assisted by the CUs (HSA design).

The Xbox 360 was a general-purpose tri-core, and each core supported 2 threads, but they were not symmetric threads.

Difficulties with programming both consoles, plus the lowest-common-denominator rule, are the reason why most current games are poorly threaded. As everyone knows, this will change now that the PS4 and Xbox One are eight-core consoles.



That is like saying that DDR4 "isn't going to help performance" if the data is not in RAM. Yeah! And a faster HDD is not going to help performance if the data is not on the HDD and needs to be downloaded from the Internet over a low-speed connection! LOL

The higher latency of GDDR5 is a myth. GDDR5 module latencies are comparable to those of DDR3 modules. The myth seems to stem from benchmarks made on DDR3/GDDR5 dGPUs, where the higher latencies are due to 'cheap' memory controllers that are not optimized for latency, because latency is not relevant for typical GPU workloads.
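A rough way to sanity-check that is to convert CAS latency from cycles to nanoseconds; the CL and clock figures below are illustrative assumptions, not datasheet values for any particular module:

# First-word latency in nanoseconds = CAS cycles / command clock (MHz) * 1000.
# The CL and clock values are illustrative assumptions, not datasheet numbers.
def latency_ns(cas_cycles, command_clock_mhz):
    return cas_cycles / command_clock_mhz * 1000.0

print("DDR3-1600, CL11  :", latency_ns(11, 800), "ns")    # ~13.8 ns
print("GDDR5 5Gbps, CL15:", latency_ns(15, 1250), "ns")   # ~12.0 ns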



OpenCL and HSA are two different things. Tom's has a review comparing both. No, HSA is not invisible to the developer. Applications need to be HSA-enabled; when they are, developers are finding up to a 500% increase in performance.



A game like Skyrim (an Intel-optimized game) is an exception among current games, not the rule. But yes, if you play an older game that uses about 50% of an i7 but only about 25% of an FX-8350, chances are that Intel will be faster. Future games will be much more threaded (at least 3x more) and then the AMD chips will perform better than the Intel ones.
 


I did simplify the SPE example a bit, but at the end of the day, it boils down to 6 execution units which each have their own local memory. And SMT on the PPC arch is implemented differently than HTT on x86: 80-90% performance on the second execution unit is normal, which is higher than AMD's CMT implementation.

He's fixated on it because Crysis 3 is a game that totally broke his "games can't use more than 1-2 cores!" statement. Essentially Gamer was making the argument that the only performance that really mattered was single-thread performance, because programs are too hard to make multi-threaded.

Understand what Crysis 3 did. The developers didn't take the engine and make it scale to more cores; they simply moved more work to the CPU. There's a difference between making the engine scale and simply doing more work.

Crysis 3 is actually an interesting test case, per Anand:

GPUView1.png


Based on a quick analysis, it looks like one main thread, nine helpers, and the main render thread. So 11 threads doing significant (>5% load) amounts of work. But notice the periods where ALL of the threads are NOT executing? That's generally the sign of some other bottleneck. In this case, the GPU (as expected; all games GPU-bottleneck at max settings).

Compare that to Unigine:

GPView2.png


A more traditional two thread approach: the main execution thread, and the main render thread. And note the clear GPU bottleneck.

My point is this: No matter what you do CPU side, it DOESN'T AFFECT PERFORMANCE. Why? Because the GPU is the bottleneck. So either you have to move more work to the CPU from the GPU, or increase GPU power. Nothing else is going to push performance upward, because the CPU is spending ~75% of its time doing NOTHING.

That's the issue. The lack of threading isn't the problem; the GPU being overworked is the problem.
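A toy calculation of that point, with assumed frame times: once CPU and GPU work overlap, the frame rate tracks whichever side is slower, so shrinking the CPU time below the GPU time buys nothing:

# Toy model with assumed frame times: CPU and GPU work overlap, so the frame
# time is set by whichever side takes longer.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_ms = 25.0                        # GPU-bound at ~40 FPS (assumed)
for cpu_ms in (20.0, 10.0, 5.0):     # progressively better CPU-side threading
    print(cpu_ms, "ms of CPU work ->", round(fps(cpu_ms, gpu_ms), 1), "FPS")
# All three cases land at ~40 FPS: the GPU, not the CPU, sets the ceiling.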

The primary problem is this: rendering, AI, and physics tend to process faster on a GPU-like architecture than on a CPU-like one, because they are naturally parallel problems. Trying to solve these problems on a serial architecture will reduce performance, assuming no other bottleneck is present. Meanwhile, GPU performance, while currently maxed out, is increasing faster than CPU performance. So you start running into theoretical questions like "If we get 20% more GPU power next generation, does my GPU bottleneck go away?" or "Will I end up introducing a CPU bottleneck instead?".

For a general-purpose PC, there is no right answer, because on some configurations an approach like Crysis 3's WILL CPU-bottleneck, where other titles generally won't. But with CPUs gaining maybe 10% performance per generation, and GPUs gaining at least twice that, if you had to be stuck with a performance bottleneck, which component would you rather it be? Never mind, again, the case where the CPU decides to do something else while your program is running. [Remember how many OTHER processes run in the background, any one of which can pre-empt you.]
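Taking those rough figures at face value (about 10% per CPU generation versus at least 20% per GPU generation), a quick compounding check shows how fast the gap widens; this is just arithmetic on those estimates, not roadmap data:

# Compounding the post's rough per-generation gains; illustrative only.
cpu, gpu = 1.0, 1.0
for gen in range(1, 6):
    cpu *= 1.10                      # ~10% CPU gain per generation (assumed)
    gpu *= 1.20                      # ~20% GPU gain per generation (assumed)
    print(f"gen {gen}: CPU x{cpu:.2f}, GPU x{gpu:.2f}, ratio {gpu / cpu:.2f}")
# After five generations the GPU side is roughly 1.5x further ahead.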

This is relevant when discussing OpenCL, since OpenCL is basically offloading processing from the CPU to the GPU. For example, TressFX wouldn't be possible on CPU processing alone [too process heavy], but can easily take off 25% of your FPS [GPU overworked]. Hence, again, why I feel Ageia had the right idea with their PPU: a dedicated unit to handle advanced physics processing. I honestly think the end result is going to be a dedicated co-processor independent of the GPU. I honestly wouldn't be shocked if we ended up with something looking like HSA + discrete, with the APU handling physics/AI, the dedicated GPU handling rendering, and the CPU driving the program. This would be the easiest way to work around the GPU's performance bottlenecks, in my opinion.
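As a hedged illustration of what offloading a computation to the GPU via OpenCL looks like in practice, here is a minimal vector-add sketch; it assumes the pyopencl package and an OpenCL-capable device, and is not tied to any particular game, engine, or the TressFX/PPU cases above:

# Minimal OpenCL offload sketch: add two large arrays on the GPU instead of the CPU.
import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()                  # pick an OpenCL device (GPU if available)
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void add(__global const float *a, __global const float *b, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)   # one work-item per element

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)            # copy the result back to host memory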
 
Windows 8.1 looks to be HSA-enabled, with better support for OpenCL. The other interesting FUD story I read is that the HD 9000 cards will also be HSA-enabled. GF and TSMC reports have shown that production of a lot of silicon for various CPU/GPU SKUs has been discontinued for some time; basically AMD are selling out silicon before moving on, and Nvidia are cannibalizing their GTX 700 family by trying to sell GTX 600 parts at better price points... sounds familiar, right? Bonaire XT, Tahiti LE and Tahiti XT are the only low-yield silicon available, which is a sign that AMD are definitely going to release a new GPU lineup in the coming months.


Windows 8.1 is interesting; kinda curious if it will better utilize my APU setup, notably for OpenCL applications. Moreover, with the new DX I want to see if this aids performance.

Decided to do a new system build, when done I will post it in the TH user build log section.

Silverstone TJ08-E with a customized side window and rotated Hard drive cage.
BeQuiet Straight Power E9 CM 580W
BitFenix Alchemy Grey/Silver sleeved extenders.
ASRock FM2A85X Extreme 4-M
A10 5800K
Antec Kuhler 620 with BeQuiet Silent Wings Push/Pull.
Corsair Vengeance DDR3 2400 4x4GB
Sapphire HD6670 LP GDDR5
Cold Cathodes
Cable Furrowing
Deepcool 4 station Fan controller.

Total cost is around $440
 

Intel God

Honorable
Jun 25, 2013
1,333
0
11,460


The 8350 won't bottleneck a single GPU. It's when you get into SLI and CrossFire that it has issues.


 


lol, I find it funny people actually believe that nonsense. The AMD CPUs already lose in a lot of multithreaded benchmarks; the i7-2600K, 3770K and 4770K all beat the FX-8350 on multithreaded tests in general. Let's not forget Intel has 6-core CPUs, and next year 8-core CPUs. So AMD CPUs need all 8 cores to be used just to be nearly on par with a 4-core Intel CPU. So in future it might be close, but it won't beat it like you think. At best equal; in most games currently slower, though.


>implying these 6 or 8 core CPUs will be priced anywhere near the FX8350. The FX8350 can beat the 2700K/3770K/4770K in a few PROPERLY THREADED games. Come to think of it, Intel fanboys went "ERMEHGAWD multi-threading is the future hurr durr derp hurrr durrrr" when the first-gen i7 came out (I still cannot find a reason to upgrade mine, though) and now they are going backwards with "ERMEHGERD liek two apps use multithreading ermehgawd nobody uses Linux hurr durrr durr hurr". The FX8350, given the right threading, can even slit your beloved 3960X's throat. That being said, for strictly gaming, the 3570K is not a bad choice; I see the 3770K/4770K as horrendous value for money. You also imply that those benchmarks are "fair". Simply put, the power of an individual core does not really matter much now, with the consoles being so similar to PCs and built around a low-power AMD 8-core; what those 8 cores can do together is what really matters.
 

rmpumper

Distinguished
Apr 17, 2009
459
0
18,810


So? The point is that it is an 8-core CPU, so the new games will be using at least 4-6 cores (depending on how many cores are actually usable for games on the PS4/X180). Therefore, AMD's 8-core FX/Steamroller will kick 4-core i7s in the butt, just like in Crysis 3.

 


The thing is, when you have a top-end i7-920 system (AKA a 3930K equivalent), power consumption has not progressed that much. Considering I am drawing 230W+ at 3.8GHz, the equivalent 4770K at 4.2GHz (Meltdown lol) is also in the same range. Bloomfield was a very good family: power consumption was not substantially increased from the Q9650 to the equivalent 930, while featuring a 10-15% improvement and a step towards integrating more components onto the CPU.
 

Intel God

Honorable
Jun 25, 2013
1,333
0
11,460


What butt kicking are you talking about :lol:

CPU_03.png


That's even an old graph. I can't find the one with the performance after the Crysis 3 patches.
 


FX4300 TDP, LOLNO! A 4-core 4770K is already near the FX4300 and an 8-core will effectively double the power consumption to 180W-ish, at least. Once again, >implying that 8-core will not cost an arm and a leg. Shall I unleash the 16-core Opteron?
 

Intel God

Honorable
Jun 25, 2013
1,333
0
11,460


Agreed. I pull something like 179W in Cinebench with a 4.8GHz 4770K. An 8-core would easily be double that, especially with 20MB of cache.
 


*Gives internet cookie* Even in an SB-E world, the 8-core @3.3GHz would have an (underestimated) TDP of 200W, not to mention other factors. An SB 2500K core has, IIRC, the same "TDP" as a Haswell/Hasfail 4670K core. My Bloomfield i7-920 on 45nm is pumping 220W+ @3.8GHz as pointed out above, and my QX9770 @3.8GHz is also in the same range. And Hajigur, the 4770K is not bad on power consumption, but it runs hotter and is an overall lackluster improvement, thus allowing Steamroller to (easily) catch up.


 

Tuishimi

Distinguished
May 17, 2011
106
0
18,690
I asked this during one of the TH reviews... did they turn threading on in Skyrim? I have enabled as much threading as possible in Skyrim. Perhaps that is why it performs so well on my FX-8350 build... But I am not seeing any slowdowns, and I play at a 120Hz refresh rate on my BenQ...
 


For the last time, nobody cares about power consumption; over the average 4 years of operating it, the FX8350 will still cost less than the 3570K's or 3770K's INITIAL price. There you have it: 40W + 150W on a 6-core = HURR DURR OH NO 190W TDP DURR HURRR DURR.
 


In order to kick something in the butt, you do have to be behind :p

But in all fairness, for the price that is a great benchmark for the 8350
 

8350rocks

Distinguished


This is what he's talking about...the Germans are the ones who discovered that Crysis 3 runs better on AMD in many cases:

Crysis-3-Test-CPUs-VH-720p.png
 

8350rocks

Distinguished


Did you live under powerlines as a kid?

Power.png


Power.png


Show me where it is that the APU consumes more power?
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780
Oh boy here we go. Intel guys screaming power consumption for desktops.

I will buy a 3770k right now and abandon my FX 8350 rig if you can prove to me that power consumption in a desktop rig has significant user benefits.

I've heard Intel guys rant and rave about power consumption, but I've never heard a single good reason why it's important in a desktop system. And I'm being generous and ignoring the fact that some of you seem to think that TDP is power consumption, when TDP only correlates with power consumption. Which may be why you have such a hard time comprehending things; it's a common logical fallacy to fall into, after all.

Also, I expect some sort of significant reasoning as to why even 200W would be something to fret over when you can make up that power consumption by swapping light bulbs over to LEDs or CCFLs, or when you (probably) have a refrigerator in your house pulling a few hundred watts, a window AC pulling around 1,000W, and central air pulling several thousand watts.
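Putting rough numbers on it: a back-of-the-envelope cost of an extra 100W of CPU draw, with the daily hours and electricity price being assumptions rather than anyone's measured figures:

# Back-of-the-envelope yearly cost of an extra 100 W of CPU draw.
extra_watts = 100
hours_per_day = 4          # assumed hours at load per day
price_per_kwh = 0.12       # assumed electricity price in USD

kwh_per_year = extra_watts * hours_per_day * 365 / 1000
print(f"{kwh_per_year:.0f} kWh/year, about ${kwh_per_year * price_per_kwh:.2f}/year")
# ~146 kWh/year, roughly $17.50/year under these assumptions.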

I might just be ignorant of this whole thing, though, so I was hoping you Intel guys could explain to me why even 400W in a computer would make a big difference. I seem not to be understanding the importance of power consumption in a desktop system, so I'm asking you Intel guys to kindly enlighten me about why it is so important. You guys do, after all, talk about it non-stop when given the opportunity, so I'm going to assume that you have really good reasons for bringing it up all the time.
 

Intel God

Honorable
Jun 25, 2013
1,333
0
11,460


Yup. I couldn't care less about power consumption as long as it's not something like 500W from the CPU alone.
 

cowboy44mag

Guest
Jan 24, 2013
315
0
10,810


My bad; I'm not too big to admit when I make a mistake. When I looked up the processors it was late, and for some reason I saw IBM and thought it said Intel. You're right, the consoles don't have Intel processors.

The second half of my argument still holds, though. Games produced for multi-core systems (think Crysis 3) are the wave of the future. There is no i3 that can play Crysis 3 like an FX-8350 can. Dual-core gaming is still doomed.
 

Intel God

Honorable
Jun 25, 2013
1,333
0
11,460


A 2.9GHz Haswell dual core plays Crysis 3 just fine. Can't wait to see benchmarks of the 3.6GHz i3 dual-core Haswells.

Test-Intel-Dualcore-Haswell-4570T-Crysis-3-pcgh.png
 