AMD CPU speculation... and expert conjecture

I don't think a 2 fan 7990 cooler was the direction they wanted to go because they got hammered so hard about noise. Now, the cooler is quiet, but it throttles a bit...so they're getting hammered on that end.

Never mind that NVidia GPUs run ~90-100°C under load with the cooler they use...but hey, it's quiet. So I think there's almost a double standard: NVidia GPUs can be quiet and hot, but AMD's can't be quiet and hot...they have to be quiet and cool.

Whatever...they'll get it fixed with the non-reference coolers. For example, my Sapphire HD 7870XT is never loud, and I have the fan profile tuned to allow up to 100% fan speed...and my GPU is overclocked to 1100 MHz!!
 

Err, the 290 cooler sounds louder than my 1990s Model M, but then again, that's about the level of a regular conversation, or of a GTX 470/480, which people defended more times than I can count. It did look sexy, though, and I would not mind having one...

 


Not necessarily. I am not a fan of Nvidia cards, but you still have a very good graphics card in your system. You will not get to take advantage of dual graphics, but using dual graphics when you already have a more than capable GPU just uses more RAM than necessary.

Maybe in the future the APUs will be compatible with Nvidia GPUs for dual graphics. Remember when you couldn't have SLI on AMD boards? I think APU dual graphics is a similar case.
 
I'm probably late and I could be totally wrong about this, but I noticed that the purported picture of the Kaveri ES showed "ZD" in its name, which implies Zambezi (Bulldozer). Piledriver had PD or FD, I think. So maybe the Kaveri pic floating around is not real?

Have I understood right?
 


Not at all. Part numbers are not like that.

The first letter is the family, like PD, BD, etc.

The second one is usually either D for Desktop or M for mobile.
 


Thank you. I had always assumed ZD stood for Bulldozer alone. Only now I realise ZD stands not only for Bulldozer but also for its derivatives.

Anyway, I will tell you why I thought ZD was for Bulldozer and FD was for Piledriver: I saw a pic of the AMD FX 8300, and the first two letters are "FD", not ZD or anything else.

Link: http://technewspedia.com/cpu-amd-fx-8300-now-on-sale/
 


I think the point of HSA is for the GPU to lend a hand with CPU tasks it's suited to, so maybe not. That support may need to be built into the software, though, so for all the software available now, then yes, it's a pointless GPU.
 
AMD needs a blower design something like this:
[images: HIS IceQ Turbo cooler]
 
Shouldn't HSA be more or less invisible to the OS? If the implementation is invisible to the OS, then software shouldn't have issues running. My fear is that Microsoft (i.e. Windows) itself might get in the way, while free Linux distros get very slow optimizations. Except maybe SteamOS, should they choose to optimize for HSA.

@q6600: the 7950 cooler will have better performance since the card has fully perforated venting, unlike the 6970 (top DVI port, I'm assuming) and the current R9 290.
 


Considering it allows 9x more draw calls than DX 11.2 with less CPU overhead...the increase could potentially be 900%. Though I can imagine it will end up being far less in terms of FPS, especially considering that theoretical gains are just that...theoretical.

Though, I could easily see a 50% increase if it's done well. So if you got 30 FPS you might get 45-50 FPS, if you got 60 FPS, you might get 90-100 FPS.

Obviously that's all speculation...but don't underestimate the potential there.
 


It should be; if implemented right, you can handle it in driver land.

The issue with that, however, is how programs/OS know what workload to load on what device; even if the work is more or less parallel, if you have a strong CPU/weak GPU, would you be better off sticking with the CPU? Putting some workloads on a slower parallel GPU will slow them down, after all.

Hence my worry that generation to generation, config to config, HSA will vary wildly in performance.
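
To put that worry in concrete terms, here's a rough Python sketch of the decision an HSA-style runtime or driver would have to get right for every offload: estimate launch/transfer cost plus compute time on each device and only use the GPU when it actually wins. Every number and name in it is invented purely for illustration; it isn't any real HSA API.

```python
# Toy model of the CPU-vs-GPU dispatch decision an HSA-style runtime has to make.
# All throughput/latency figures below are invented for illustration only.

def estimated_time(work_items, device):
    """Rough cost model: fixed launch/transfer latency plus items / throughput."""
    return device["launch_overhead_s"] + work_items / device["items_per_s"]

def pick_device(work_items, cpu, gpu):
    """Offload to the GPU only when its estimated total time beats the CPU."""
    return "gpu" if estimated_time(work_items, gpu) < estimated_time(work_items, cpu) else "cpu"

# Hypothetical strong CPU paired with a higher-throughput but higher-latency GPU.
cpu = {"launch_overhead_s": 0.000_01, "items_per_s": 2e8}
gpu = {"launch_overhead_s": 0.000_50, "items_per_s": 5e8}

for n in (1_000, 100_000, 10_000_000):
    print(n, "->", pick_device(n, cpu, gpu))
# Small batches stay on the CPU (launch overhead dominates);
# only large, parallel batches are worth shipping to the GPU.
```

With these made-up numbers the crossover point shifts whenever either device changes, which is exactly why the same code could behave very differently from one CPU/GPU pairing to the next.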

We don't know how good Mantle is; more specifically, how many more FPS does it give over Nvidia?

5? 10? 15? 20? 25? 30?

Probably a handful. The overhead related to draw calls has been reduced significantly over the years, so in the grand scheme of things, we're talking *maybe* 1-2 FPS here. The time the GPU spends doing nothing due to draw-call overhead is trivially small compared to the 16 ms frame interval.

Point being: optimizing draw calls is optimizing the wrong thing. You can improve that path 10x, but by itself it won't have a significant effect on FPS.

Considering it allows 9x more draw calls than DX 11.2 with less CPU overhead...the increase could potentially be 900%. Though I can imagine it will end up being far less in terms of FPS, especially considering that theoretical gains are just that...theoretical.

Though, I could easily see a 50% increase if it's done well. So if you got 30 FPS you might get 45-50 FPS, if you got 60 FPS, you might get 90-100 FPS.

Obviously that's all speculation...but don't underestimate the potential there.

You're assuming a workload of 100% draw calls. When draw calls account for <0.05% of your total processing time, however, a 9x increase won't have any noticeable impact on performance.
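
That's basically Amdahl's law. Here's a quick back-of-the-envelope check in Python; the draw-call fractions are assumed inputs for illustration, not measurements of any real game.

```python
# Amdahl's-law style estimate: speeding up draw-call submission by 9x only
# helps the fraction of frame time actually spent submitting draw calls.
# The fractions below are assumptions, not measurements.

def overall_speedup(draw_call_fraction, draw_call_speedup=9.0):
    return 1.0 / ((1.0 - draw_call_fraction) + draw_call_fraction / draw_call_speedup)

for frac in (0.0005, 0.05, 0.5):
    print(f"{frac:.2%} of frame time in draw calls -> {overall_speedup(frac):.3f}x overall")
# 0.05% -> ~1.000x (no visible FPS change)
# 5%    -> ~1.05x
# 50%   -> ~1.8x (only if you really are draw-call bound)
```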
 
For those wondering what Draw Calls actually are and why they are important:

http://www.nvidia.com/docs/IO/8228/BatchBatchBatch.pdf (VERY old, but gets the point across)
http://stackoverflow.com/questions/4853856/why-are-draw-calls-expensive

In DX9, submitting a Draw Call forced a context switch, which is a VERY expensive operation to perform. Hence why DX10+ has a significantly lower overhead in this area (and credit to MSFT for fixing this).

The main bottleneck is a simple one: the GPU can render triangles faster than the CPU can submit them to it. So if you do a lot of draw calls on a few objects at a time, you end up with a mostly CPU-bound application. Essentially, the CPU sits around doing nothing except sending Draw Calls to the GPU, while the GPU spends significant time sitting around doing nothing.

So when you look at GPU usage and see it's only at 90% load, that's where the missing 10% is likely coming from. Lower that overhead by a factor of 9 in a game running at 45 FPS, and you get a maximum best-case increase of about 5 FPS.

So optimizing Draw Calls, while nice, isn't going to affect performance significantly.
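
For the 90% load example above, here's the same arithmetic spelled out in Python, assuming (for illustration only) that the idle 10% of GPU time is entirely draw-call submission overhead:

```python
# Follows the 90% GPU load example above: if the GPU idles 10% of each frame
# waiting on draw-call submission, cutting that overhead by 9x bounds the gain.
# These inputs are the example's assumptions, not measured values.

fps = 45.0
gpu_busy_fraction = 0.90                   # GPU doing useful rendering work
submit_overhead = 1.0 - gpu_busy_fraction  # GPU idle, waiting on the CPU

frame_time = 1.0 / fps
new_frame_time = frame_time * (gpu_busy_fraction + submit_overhead / 9.0)
print(f"best case: {1.0 / new_frame_time:.1f} FPS")  # ~49.4 FPS, i.e. roughly +5 FPS
```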
 


That would be nice, were it not for the fact that many games made in 2012, and even early 2013, still typically ran on DX9. This is due to archaic hardware: what good is running a game on DX11 if 80%+ of the GPUs out there only support DX9?

Yes, DX11 is better...however, not that much better.

Additionally, you're still reducing the load on the CPU, which is typically a bottleneck before the GPU in most gaming builds.

 


LOL...so, one game, which has frame rate issues on both consoles, is your barometer for that?

I would call that poor engine optimization...personally...

The consoles are not "gimped" and I think PC Gaming is looking at a very good segment going forward.

Honestly, I think some of the guys trying to rush stuff out on the consoles for the holiday season are probably cutting corners a bit...especially considering they are likely trying to put the game out on two generations of consoles plus PCs.
 


I'm blaming all this crap on the devs at CoD not being as good as everyone thought. Even TotalBiscuit was pissed off with CoD because his framerate would drop to 40 FPS for no reason, and he had to download a hack to get his FPS to go above 90. The game also had a lot of stuttering and played horribly, and he has 2 Titans in his rig! I'm going to call BS on the claim that the CPU isn't powerful enough.
 