AMD CPUs, SoC Rumors and Speculations Temp. thread 2



It does show gains from DX11 to DX12 (very minor ones), which shows that DX12 is cutting into the CPU bottleneck that the FX 8350 imposes.

However, I was never expecting their CPUs to benefit as much as the GPUs could.

DX12, though, is more about being able to utilize GPU performance more fully while providing more realistic graphics.

The interesting results are the 1080p Medium ones, where the CPUs become more of a bottleneck.
 
Here's the problem I have, though: what happens when, after a developer has gone through the work of optimizing the game code to maximize a specific GPU architecture's performance, a new generation of GPUs comes out with a different hardware design and performance actually drops? Because guess what? You can't hide these problems with driver optimizations anymore; it's all on the game developer to update their software, potentially YEARS post-release.

So I'm worried that either you're going to get a lot of games that run worse after several years, as assumptions made during coding are no longer true, OR that both NVIDIA and AMD will basically re-use the same GPU design for years until the next wave of consoles forces a hardware refresh.
 


That's where newer cards are going to be faster and simply chomp through old, badly written code with ease. For example, my 7970 gets 600 fps in Halo 2 for PC despite the fact that it can hardly even understand that code; with proper code it would hit an easy 2000 fps. Try playing those old games again with new hardware and you'll see you don't need optimizations, it just plays them. And all you need is 30 fps, a number that most hardware had issues hitting back in the day. New hardware has so much more power than the old cards that it just destroys those framerates.
 


Well, I think the latter is more likely, although more in the sense that they'll keep the base architecture the same and incrementally improve it, rather than not improve it at all. Both AMD's GCN and nVidia's Kepler / Maxwell designs keep a lot of commonality between generations. Usually there are a couple of improvements on top of the basic design, plus *more* resources, rather than radical changes, similar to how CPUs develop (a new Intel CPU usually still runs old code as well as or better than the last-gen core, so why can't this work for GPUs as well?).

What I do think it highlights, whilst it doesn't put the old FX 8350 on top, is that with DX12 it's still keeping pace at a comfortably playable level, whereas in DX11 it was falling much further behind (especially when you look at the results with the Fury X: 30 fps jumps to over 50?!). No, it doesn't put the FX in front, but then we are comparing a very old 32nm processor against Intel's latest there. I'd have been more interested to see a Kaveri or Carrizo result thrown in too...
 


I don't think you understand how poor utilization of hardware can easily cost you half your performance. Trust me, I've done it before. When dealing with hardware, suboptimal use can start costing you performance very quickly, as the various pipelines grind to a halt, each waiting for another to finish. Or can current GPUs just "chomp through" badly optimized game code like Arkham Knight's? That's the type of performance I'm worried is going to become NORMAL going forward.
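To make that concrete, here's a rough C sketch (purely illustrative, not from any actual game): two loops that do the exact same work, where the first keeps the FP pipeline waiting on itself and the second lets independent operations overlap. The gap on real hardware is typically severalfold.

```c
#include <stddef.h>

/* Serial dependency chain: every add has to wait for the previous one,
   so throughput is capped by FP-add latency, no matter how many
   execution units the core has. */
float sum_serial(const float *a, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Same work with four independent accumulators: the adds can overlap
   in the pipeline, which is typically several times faster on the
   same hardware. */
float sum_unrolled(const float *a, size_t n) {
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}
```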
 


In 5 years our GPUs will be roughly 2 times faster. Look at a 580 and compare it to a 980...

Back then we thought 1080p gaming was insane (like 4K now).

You watch, even the mess that is Arkham Knight will play at 100 fps (gimpworks disabled) on ultra at 1080p, maybe even 4K.

Right now not even a 980 Ti can play AK with gimpworks enabled.

This is a GPU argument, gamerk316...

 
You watch, even the mess that is Arkham Knight will play at 100 fps (gimpworks disabled) on ultra at 1080p, maybe even 4K.

Not on my GTX 770, that's for sure. Seriously, "even a 980 Ti"? You are aware 99.9% of the market runs on GPUs worse than a GTX 580, right?

Secondly, like CPUs, GPUs are beginning to run out of room for die shrinks. And again, what good is 50% more theoretical performance if poor software optimization costs you 50% of your actual performance?

Thirdly, this also applies to APUs, so this discussion is fully in scope.
 


I have a 980 Ti, and in AK maxed out at 1080p with GameWorks enabled I am happily doing 60 FPS, so yeah...

That said, what this shows me is that AMD currently has a very big CPU bottleneck that will get lifted with DX12 if it's coded properly. Remember AoS is an RTS, and RTS games will still be very CPU-bound due to all the AI on screen. A more GPU-bound game might still paint a very different picture.

Still a long way to go until we see what DX12 will really do and whether it will help AMD's CPUs stay somewhat viable until Zen arrives.
 


Eh, you're pushing it, as we are discussing full-blown GPUs.

My bet? An AMD APU in 5 years will have 800 SPUs. In 2010 we had ~300 cores at 40nm, the die shrink to 32nm gave us ~400 cores, and 28nm gave us ~500 cores. Following that trend we will have 600 cores at 16nm and 700 at 10nm. I assume Zen cores will be physically small, and smaller nodes allow for more GPU space, giving us a decent chip size in mm with 800 SPUs alongside 4 cores and 8 threads. Accounting for process improvements, I assume a slightly faster GPU than the PS4's paired with a Skylake 6700K-class CPU. Can that play AK at 1080p on low settings without optimizations (the current situation)? I think so.
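If you want that back-of-the-envelope trend spelled out, here is a trivial C sketch of the extrapolation (the numbers are the guesses from my post above, not real roadmap figures):

```c
#include <stdio.h>

/* Rough trend from the post: roughly +100 shader units per process node
   for the mainstream APU. Purely speculative numbers, not roadmaps. */
int main(void) {
    const char *node[] = { "40nm", "32nm", "28nm", "16nm", "10nm", "5-year bet" };
    int spus = 300;                          /* ~300 SPUs at 40nm in 2010 */
    for (int i = 0; i < 6; i++, spus += 100)
        printf("%-10s ~%d SPUs\n", node[i], spus);
    return 0;
}
```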

You said 99% of the market has something slower than a 580. True. For the AAA-title market I would bet it's more like 20%, though I have no data.
Look at the gamer market... that looks much better. You know a 950 is faster than a 580, right?

GIVEN the above assumption, we could assume that a mid-range GPU (let's say an R9 680X) would then have 4000 SPUs and 6 GB of gen-3 HBM. Could something like a currently un-optimized Fury X on LN2 play AK, even with garbage drivers? We already have an un-optimized game and piss-poor drivers, so I say without a doubt YES. There is no way 90% of gamers won't be able to play AK, a HORRIBLE example of today's games.

Look at games they will actually play a lot, like GTA V or Star Citizen, and I say we have nothing to worry about. Graphics cards have not been following Moore's law the way CPUs have.
 


Haswell has 2x 256-bit units. 512-bit vectors are introduced with the new AVX-512 and are also used in the Phi line for HPC.

Max. throughput (FLOPs per clock):
Excavator: 8 FLOP/core
Zen: 16 FLOP/core
Haswell: 32 FLOP/core
Skylake Xeon: 64 FLOP/core
Phi Xeon: 64 FLOP/core
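For anyone wondering where those numbers come from, here is a quick C sketch of the usual peak math, assuming the table counts single-precision FLOPs per clock: FMA units x SIMD lanes x 2 ops per FMA. The per-core unit counts below are the commonly cited configurations, not official specs.

```c
#include <stdio.h>

/* Peak single-precision FLOPs per clock per core, under the assumption
   stated above: fma_units x (vector_bits / 32 lanes) x 2 ops per FMA. */
static int peak_sp_flops(int fma_units, int vector_bits) {
    int lanes = vector_bits / 32;    /* 32-bit floats per vector       */
    return fma_units * lanes * 2;    /* FMA = multiply + add = 2 FLOPs */
}

int main(void) {
    printf("Excavator:    %d\n", peak_sp_flops(1, 128));  /*  8 */
    printf("Zen:          %d\n", peak_sp_flops(2, 128));  /* 16 */
    printf("Haswell:      %d\n", peak_sp_flops(2, 256));  /* 32 */
    printf("Skylake Xeon: %d\n", peak_sp_flops(2, 512));  /* 64 */
    printf("Phi Xeon:     %d\n", peak_sp_flops(2, 512));  /* 64 */
    return 0;
}
```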
 


Not very different from 99.99% of users not needing an 8C/16T HEDT CPU and being happy with a 2C/4T laptop APU for their usual tasks.
 


For simple computing an FX-8800P is excellent, and that CPU is widely known as quite weak. Most gamers even use 4T systems.

Do servers use these pipelines? What's the real-world use of a 512-bit FPU other than an AVX2 benchmark?
 


AMD64 launched in 2003-2004, depending on whether you count the paper launch date or the hardware launch date.

Windows 10 is still not a completely 64-bit OS, 12 years later.

Most modern programs default to some set of extensions up to the SSE4 series for x86. Without recompiling, you literally see no performance improvement from ISA advantages in code structure from the age of Core 2 and Phenom all the way through to Broadwell. All the performance improvements in household software come strictly from IPC improvements, die shrinks, and more resources.
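That really is just a compile-time decision. A minimal sketch of what "without recompiling" means in practice (the flags are the usual GCC ones; the function is made up for illustration):

```c
/* The same source only uses whatever ISA the compiler was told to target.
   Until it is rebuilt, the shipped binary never touches newer instructions. */
void scale(float *x, float k, int n) {
    for (int i = 0; i < n; i++)
        x[i] *= k;                 /* trivially auto-vectorizable loop */
}
/* gcc -O2 -msse2 -c scale.c  -> 128-bit SSE code (runs on any x86-64 CPU)
   gcc -O2 -mavx2 -c scale.c  -> 256-bit AVX2 code (Haswell or newer only) */
```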
 


What do you mean by "not a completely 64-bit OS"? Do you mean because it has a 32-bit distribution? Or that it is running on x86-64? Because Intel tried to push pure 64-bit and phase out 32-bit, but no one wanted that due to the lack of backwards compatibility.

And I wouldn't say it is not a completely 64-bit OS. It supports 64-bit in multiple ways; hell, Windows even supported Itanium, which is pure 64-bit. However, the OS does not fully utilize all the features, because the software doesn't utilize or need them yet.

That said, software is very slow to catch up. If developers would actually pick up the pace and utilize some of the more advanced instruction sets, we might see much better improvements. That's why, when people say Skylake is not worth it over Haswell, they are right for most software. But software that utilizes the newer extensions will be multitudes faster because of that.
 
Well, games are starting to push 64-bit because the 2GB address-space limit for 32-bit exes is getting just a little long in the tooth. And ironically, people are now complaining about games happily eating 12GB of RAM at any one time. Memory leaks are going to be FUN going forward, as they happily eat away dozens of GB at a time rather than just crashing the app once it goes above 2GB.
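For anyone who hasn't bumped into the limit, here's a tiny illustrative C program (the sizes are made up): built as a 32-bit exe the big allocation fails because the process simply has no address space for it, while the same source built as 64-bit sails through.

```c
#include <stdio.h>
#include <stdlib.h>

/* In a 32-bit build the process has at most 4 GB of virtual addresses
   (2 GB user space by default on Windows), so a single ~3 GB allocation
   fails; the same source built as 64-bit succeeds. */
int main(void) {
    size_t want = (size_t)3 * 1024 * 1024 * 1024;   /* ~3 GB */
    void *p = malloc(want);
    printf("3 GB allocation %s (pointer size: %zu bytes)\n",
           p ? "succeeded" : "failed", sizeof(void *));
    free(p);
    return 0;
}
```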
 


As stated before, AVX2 deals with 256-bit vectors. Haswell/Broadwell have 256-bit SIMD units. I don't know of any general use for servers. I know of usage in crypto, graphics rendering, and scientific/technical compute workloads (workstation and HPC). I think AVX2 is also used in some chess program.
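For context, this is the kind of thing 256-bit AVX2/FMA code looks like; a minimal sketch (the function name and setup are mine for illustration, and it assumes n is a multiple of 8 and a Haswell-or-newer CPU):

```c
#include <immintrin.h>

/* Eight single-precision multiply-adds per instruction.
   Compile with e.g. gcc -O2 -mavx2 -mfma. */
void saxpy_avx2(float *y, const float *x, float a, int n) {
    __m256 va = _mm256_set1_ps(a);
    for (int i = 0; i < n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);
        __m256 vy = _mm256_loadu_ps(y + i);
        vy = _mm256_fmadd_ps(va, vx, vy);   /* y = a*x + y, 8 floats at once */
        _mm256_storeu_ps(y + i, vy);
    }
}
```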
 


So essentially there is very little to no real-world use for AVX2 in desktop CPUs. Crypto runs on its own miners these days, a chess program will be used by one university somewhere for a demo of something that's not helpful to the real world but proves "computers are smarter than ever before", and graphics rendering is done by GPUs more and more.
Scientific/technical compute workloads will be supercomputer types of workloads, correct? So there's a possible design loss there. And so what? They lose 1% of sales for this design choice and save some space on the die for other things, like the southbridge, IMO.
 


There is very little usage for the average Windows desktop user, but there are applications for workstations. The graphics rendering I meant is not done on GPUs.

So AMD considers AVX2 popular enough to support it on Zen, but the support is implemented in a weird way that doesn't provide benefits over AVX. A Broadwell/Haswell user who upgrades an AVX workload to an AVX2 workload can see performance increase by up to 70% (clock-for-clock). A Zen user who does the same will see performance unchanged. Makes one wonder why AMD always makes this class of weird decisions.

Moreover, the game changer is AVX-512. This is the ISA supported by both Skylake Xeon and the KNL Phi. The new AVX-512 is what will crush AMD's GPGPU and HSA business.
 


From my perspective, AMD is right. As far as I'm concerned, a "core" is merely a unique register context. By that definition an i3 is a quad-core, though, but until someone defines what makes a unique core, it's the best definition I've got to work with.
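Which is exactly why the OS-level counts are fuzzy too. A trivial Linux-only C sketch of the problem: the standard query hands back hardware threads, so a Hyper-Threaded i3 reports 4 even though it has 2 physical cores.

```c
#include <stdio.h>
#include <unistd.h>

/* sysconf reports logical processors (hardware threads), not physical
   cores; physical counts need /proc/cpuinfo or CPUID instead. */
int main(void) {
    long logical = sysconf(_SC_NPROCESSORS_ONLN);
    printf("logical processors: %ld\n", logical);
    return 0;
}
```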
 


Except that AMD also referred to this as what it is, CMT. It is an advanced version of SMT.

While I think the lawsuit is BS, like most lawsuits that come out of the crazies of California, I do think that they should never have marketed it as an 8-core CPU. They should have just marketed it as a 4-core with CMT (or their own name for it), much like Intel advertises their CPUs as 4 cores with SMT/HT giving them 8 threads.

That is my personal opinion. I highly doubt AMD will spend the money it would take to get this case dismissed over petty stuff. More than likely they will settle if it goes to court, to save on lawyer fees.
 
Yeah, it was a bad marketing choice however you slice it.

'Intel quad-core faster than AMD 8-core' has been the message ever since it released, when really the module concept is loosely comparable to an Intel core with HT (certainly similar in terms of size, number of transistors, and so on).
 