AMD CPU speculation... and expert conjecture


jdwii

Splendid


If anyone thinks tablets or smartphones matter for hardcore gaming, they are kidding themselves; that market is purely casual, and I doubt the games that sell the most there are AAA titles, IF those even exist. Think about it: if they did exist, Linux would be getting more AAA titles, since OpenGL would make porting a game that much easier (still not actually easy).
 


The 970 looks really nice at that price. I doubt you would really see much difference in heat, though, unless you are loading the card 24/7 or something.

I doubt AMD has anything more power efficient until 20nm, though. AMD will probably get onto the next node first, while Nvidia is bringing the changes now and will have more buffer room before they need to move down.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


AMD isn't selling any cores. They're selling fully designed and fabricated semi-custom APUs/SoCs. The manufacturing restrictions were worked through when AMD went fabless. There is zero difference between what they're doing with ARM and with x86.

The Xbox One SoC showcases their integration capabilities, with 15 custom cores in addition to the 8 Jaguar cores and the 14 GCN cores. That's 37 cores in one SoC.

 


Corrected that for you! AMD was given the original Intel 286, 386 and 486 cores (at IBM's insistence, to guarantee supply when Intel got the contract), which AMD improved; they were selling better versions of Intel's own designs in the beginning!

After the 486, Intel managed to renegotiate the agreement so that it had to share the ISA with AMD *but not the core design*, which is why AMD fell behind with the first Pentium, needing a couple of generations to perfect its own designs before catching Intel again with the first Athlon...
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


That didn't address my point. I found this other related link

http://www.xbitlabs.com/news/cpu/display/20130613234600_AMD_Shares_Secrets_How_APUs_for_Next_Generation_Game_Consoles_Were_Developed.html

One of the comments on the article makes the same point I am making. I copy and paste:

AMD in this case operated more like a design/build building general contractor would, while providing some customization services as well, through AMD, other third party contractors, and AMD's fab partners! AMD's APU IP part of the equation is AMD's IP to do with as it wishes, sans the M$ and Sony and Third party IP! AMD can offer, with its x86 license, a similar customization service, but not the exact method that ARM does with its IP, as the x86 part must be done by AMD, because AMD only has a restrictive use of the x86 license with Intel!

My point is that with future ARM-based products, AMD can offer its customers fully customized solutions: both CPU and GPU.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
It is interesting to watch how some people morph words over time: the original discussion was about gaming. When corrected, they changed it to "PC gaming"; when further corrected, they changed it to "AAA gaming"; and when corrected again, they now change it to "hardcore gaming"...

My point remains: DX12 is only for a minority OS named Windows (and its market share will be much smaller if, as some of us suspect, DX12 only comes to Windows 9). Game developers will rely on alternatives such as OGL-Next (and Mantle).

Oops, I forgot to mention this link, with AMD, Nvidia and Intel promoting development of games on OGL:

http://blogs.nvidia.com/blog/2014/03/20/opengl-gdc2014/
 


Out of interest I've been following the development of a cross-platform RTS game that is using OpenGL.

Their lead graphics developer (whose original experience was with DirectX) wrote an interesting presentation on his experience moving to OpenGL. He also highlights a few weaknesses with the current iteration (which hopefully OGL-Next will resolve)...

https://www.dropbox.com/s/qjfgktemcomfriw/Optimizing%20OpenGL.pptx?dl=0

Sorry if this is a little off-topic, but I thought it would be interesting. It's also interesting to note the differences between the AMD and Nvidia OpenGL driver implementations (AMD essentially sticks rigidly to the spec; Nvidia sort of does its own thing, which may or may not be helpful)...
 
First official Maxwell reviews:

http://www.extremetech.com/computing/190463-nvidia-maxwell-gtx-980-and-gtx-970-review

Two things worth noting:

The Nvidia GTX 980 is the fastest, quietest, and most power-efficient GPU available on the market today. It matches or exceeds the R9 290X in virtually every area where the two meet. Even compute performance — the one area where GCN always enjoyed an enormous advantage over Kepler — is now embattled ground.

And more relevant to recent discussions:

AMD fans will point to Mantle as one reason to stick with the company, but vendor-specific features rarely drive new customer sales. From Mantle to PhysX all the way back to 3dfx’s Glide API (which was losing market share even before the company collapsed), proprietary vendor features tend to appeal to a core group of faithful enthusiasts. The Never Settle bundle may help somewhat, with multiple high-profile titles shipping this fall, but that’s not the same as offering the best price/performance ratio in the business — and AMD knows it.

EDIT

More reviews:

http://www.pcper.com/reviews/Graphics-Cards/NVIDIA-GeForce-GTX-980-and-GTX-970-GM204-Review-Power-and-Efficiency
http://www.techspot.com/review/885-nvidia-geforce-gtx-970-gtx-980/
http://techreport.com/review/27067/nvidia-geforce-gtx-980-and-970-graphics-cards-reviewed

Looks like NVIDIA focusing on mobile chips first is paying big dividends. My concern now is: can AMD catch up, or will it be hobbled by GCN?
 
Out of interest I've been following the development of a cross-platform RTS game that is using OpenGL.

Their lead graphics developer (whose original experience was with DirectX) wrote an interesting presentation on his experience moving to OpenGL. He also highlights a few weaknesses with the current iteration (which hopefully OGL-Next will resolve)...

https://www.dropbox.com/s/qjfgktemcomfriw/Optimizing%20OpenGL.pptx?dl=0

Sorry if this is a little off-topic, but I thought it would be interesting. It's also interesting to note the differences between the AMD and Nvidia OpenGL driver implementations (AMD essentially sticks rigidly to the spec; Nvidia sort of does its own thing, which may or may not be helpful)...

NVIDIA also has the best OGL driver historically, since they focus more on getting the thing working rather than strictly following the spec. And there are a lot of paths that simply DO NOT WORK for any vendor. And don't even get me started on the OGL state model.

Going to OGL after working in DX is like going back to the first C standard after working in C++ with the latest extensions. It's doable, and powerful, but very, very painful.
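
To make the state-model gripe concrete, here's a toy Python sketch (not real GL or D3D calls; all names are invented for illustration) contrasting the bind-to-edit global state machine OGL uses with the object-style state DX leans on:

```python
# Toy illustration of the two API styles discussed above.
# These are NOT real OpenGL or Direct3D calls; every name here is invented.

# "OpenGL-style": one global context, bind-to-edit
class GLStyleContext:
    def __init__(self):
        self.bound_texture = None   # global binding point
        self.textures = {}          # texture id -> settings

    def bind_texture(self, tex_id):
        self.textures.setdefault(tex_id, {})
        self.bound_texture = tex_id     # every later call targets this

    def tex_parameter(self, key, value):
        # mutates whatever happens to be bound right now
        self.textures[self.bound_texture][key] = value

# "DX-style": state lives in objects you hold directly
class Texture:
    def __init__(self):
        self.params = {}

    def set_parameter(self, key, value):
        self.params[key] = value        # no hidden global involved

gl = GLStyleContext()
gl.bind_texture(1)
gl.tex_parameter("filter", "linear")
gl.bind_texture(2)                      # easy to forget this happened...
gl.tex_parameter("wrap", "repeat")      # ...and silently edit texture 2 instead of 1

tex = Texture()
tex.set_parameter("filter", "linear")   # target is always explicit
```

The footgun in the GL-style half is that whatever happens to be bound gets mutated, which is exactly the kind of thing that bites you when coming over from DX.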
 


Those cards look epic :) I wouldn't worry about AMD too much; looking through the benchmarks, the new 980 is faster, but the 970 is about on par with a 290X (albeit using quite a bit less power). The point is AMD can reduce prices quite happily to compete for now. Remember that the 290 / 290X have been out for about a year, so I expect AMD has some new stuff cooking. Interestingly, looking at the efficiency chart, AMD's most efficient card is the R9 270 (GCN 1.0)?!

I think the main result of this is that prices are going to have to come down... which is excellent, as I'm looking to upgrade soon.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Not only on-topic but also interesting.
 

wh3resmycar

Distinguished




MANTLE... DX12... that was the post you responded to. Hmm, I wonder what kind of gaming we're talking about here? And apparently both only run on Windows. Hmmm, can anybody tell?

I've been saying this all along: learn comprehension, mate, and quit the pseudo-journalism.

Explain again how the 42% Linux install base that you mentioned has anything to do with the "gaming" that was in context?
 

wh3resmycar

Distinguished


Nvidia has a workaround with GameWorks, or at least that's what it was theoretically (I think) supposed to do.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Their focus on mobile is the correct path, even though they were often criticized for it by idiots who don't know anything about tech or the market.

Some time ago Nvidia engineers understood that efficiency is the new key parameter for everything, from phones to supercomputers, and they have been working hard on the design of new, efficient architectures. I enjoy how Maxwell, on the same 28nm node, brings more performance than its predecessor while dissipating less power. It also puts in their place the crowd of forum idiots who believed that efficiency is something of interest only for phones and battery-powered devices, and that efficient architectures would hurt the performance of HEDT.#

Regarding your question, the launch of K12/Zen will be accompanied by a new GCN architecture that is being developed under Koduri. That efficiency is key for the new AMD was also emphasized by Papermaster in the 25x20 talk.

It is too soon to compare both companies, but I see no fundamental reason why GCN couldn't be scaled up in efficiency. A first idea would be to change the number of cores per CU. Nvidia has done something similar by reducing the number of CUDA cores per SM: 192 --> 128.

# The same happened when Papermaster announced AMD's 25x20 goal. Lots of idiots on forums claimed that AMD was announcing that future products would be slower than current products.
 


By abstracting everything out, which, as we all know, costs performance.

Hence the catch-22: you want easy-to-manage code? Then you accept a 10%+ performance hit. But hey, you might actually launch a game without game-breaking bugs, and that's a rarity these days...
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


There are developers who prefer OGL over DX and vice versa. There are programmers who prefer C over C++ and vice versa. But subjective opinions about tools are a minor, irrelevant point, for the following reason.

If you are developing a game for a minority market such as Windows-based PC gaming, then you can choose between OGL and DX, and you will probably choose DX.

But if you are developing a game for the non-Windows gaming market, then DX is not an option, and DX12 is not going to change this basic fact.

I provided earlier some analyst/AMD predictions about the evolution of the gaming market. The PC gaming subset will represent only 6% of the gaming market in 2015--2016. Even if DX12 is the most marvelous thing Microsoft has ever brought us, it will be stuck with that tiny 6% of the gaming market. However, AMD has a very good opportunity if OGL-Next uses Mantle as its base

http://techreport.com/news/26922/amd-hopes-to-put-a-little-mantle-in-opengl-next

because it could open GCN hardware to a whole new market where AMD currently has zero share.
 

jdwii

Splendid


Man, now I do agree with wh3resmycar 100%: the original discussion was about SteamOS, which is purely PC gaming, WTF man. Then you come in with irrelevant information about Linux (the minority OS for AAA gaming and the major OS for non-PC gaming). Just wait and see, Juan: like the "A10 is equal to an i5" thing, and the claim that Intel in 2016 will create an iGPU stronger than any other video card, you will be disproven again and again with those statements. It's not like I want you banned, but you really do claim very weird things, like how Linux will cream everything Microsoft has on the gaming front within 2 years, or how GPUs will be gone in 2 years, and so on.
What scares me more is that he doesn't seem to learn when he is wrong; that is a creepy type of mindset, like if he cherry-picks an AAA game in the future and says "look, Microsoft is doomed once again, Sims 4 is now on a smartphone using OpenGL" (example). Or cherry-picks some synthetic benchmark in 2016 to prove Intel has the best GPU in the world (guessing that prediction of his will be pushed back a few more years after 2016).

I really can't wait to see K12, and I hope it's amazing, but it wouldn't surprise me if it isn't faster and better than Intel's x86 per core, for two reasons: 1) it's AMD, and they have limited resources; 2) ARM right now is so far behind Intel's x86 in per-core performance it's not even funny; it's only competing with the weakest parts. Not saying ARM could never pull it off, but, as ExtremeTech pointed out, the ARM vs x86 thing doesn't matter anymore; it's not even debatable if an unbiased person just reads the article.
 

It's like when they released the 680 and 670; it's a nice change every once in a while. It just makes you wonder how high their margins must have been on the 770s and such. I'm sure prices of everything will settle in a couple of weeks.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780
I don't understand how Maxwell is so efficient. Hawaii and GM204 are around the same die size on the same process, so I'm assuming a similar number of transistors.

I do feel like AMD and Nvidia have been slacking off with GPUs for quite some time, and it looks like Nvidia was the first to finally update something and they're going to benefit a lot from it.

Looking at the compression thing (I've been up since 4:45 and worked hard today, so I'm not up to date with all of this), it looks like Nvidia is doing something with how games are handled to use fewer resources. If you look at Tom's power consumption figures (http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-970-maxwell,3941-12.html) for GPGPU, GM204 doesn't look as fantastic as some of you are saying. GM204 pulls nearly 300W in the stress test and is within 10% of Hawaii for GPGPU.

There's something funky going on with Maxwell and gaming efficiency; it doesn't come down to the architecture alone, and it's got to be some combination of things with software. The fact that Tom's suddenly decided to change how they measure power consumption for graphics cards and then went "wow, this new card is amazing" should be a red flag.

There is no way Nvidia magically changed the architecture to be so efficient and use 180W or so when gaming. They are playing software tricks to keep the card from using as many transistors when gaming. Do people think there are some sort of magical architecture-only changes that make a ~400mm^2 chip much more efficient in gaming than a ~420mm^2 chip when they are both made in the exact same factories?

I'd love to be wrong and for someone to correct me (I'm way out of it right now), but there's something going on besides "magical Maxwell efficiency that applies to everything (sorry Juan ;-))", and the fact that a stress test shows such a discrepancy versus gaming tells me either the chip isn't being used entirely when gaming, or something is going on that lets it use fewer transistors when gaming.

But Tom's GPGPU efficiency test pretty much shoots what Juan said completely out of the water. In fact, Tahiti, which is ancient in GPU terms, is right in line with the GTX 970's power consumption. Unfortunately, the GPGPU performance numbers seem to be missing from Tom's review (unless I'm that out of it).

Nvidia built a really efficient and good gaming card, and it's going to be pretty great in laptops as long as you never use it for GPGPU. I do think they've found some sort of software and hardware tricks that let things get a lot more efficient for gaming. It is probably some trickle-over from Tegra, like Juan is saying. But it's not looking (to me) like it's a strictly hardware thing, as Juan is saying.
 


If you just read Tom's power consumption sections, you can see they speculate as to why the card gains so much gaming efficiency. It can be something very simple: Nvidia's power gating is very well implemented, comparable to how Intel can currently push CPUs down to a very low SDP while having a TDP that is multiple times higher. Nvidia just doesn't report TDP the same way, and the "TDP" of the 980 should really be called an SDP if you look at its 300W spikes.
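
Rough back-of-envelope on that SDP-vs-TDP point, with completely made-up watts and duty cycles (not measured 980 numbers): a well power-gated card can average far less over a gaming run than the short spikes a stress test holds it at.

```python
# Illustrative only: the wattages and duty cycles below are invented,
# not measured GTX 980 figures.

# (power draw in watts, fraction of frame time) for a hypothetical gaming load
gaming_phases = [
    (300.0, 0.10),   # brief peaks when every unit is lit up
    (190.0, 0.55),   # typical shading work
    (120.0, 0.35),   # units idle / power-gated while waiting on CPU, vsync, etc.
]

average_gaming_draw = sum(w * t for w, t in gaming_phases)   # SDP-like figure
peak_draw = max(w for w, _ in gaming_phases)                 # TDP-like figure

print(f"average (SDP-like): {average_gaming_draw:.0f} W")    # ~176 W
print(f"peak    (TDP-like): {peak_draw:.0f} W")              # 300 W
```

A compute stress test just pins the card at the top row the whole time, which is why the gaming and GPGPU numbers look like two different cards.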
 

wh3resmycar

Distinguished
What could be responsible for it is the power management technology they implemented, where they can shift power away from components that aren't needed for a specific task and move it to the graphics core, if the LinusTechTips preview is to be trusted.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
Jon Peddie analysis:

The primary contributor to Maxwell’s improved efficiency is the new architecture of its streaming multiprocessors (called SMMs) that achieves much higher power efficiency and 35% more performance per Cuda core on shader-limited workloads. Each SMM has four processing blocks, each with its own instruction buffer, scheduler, and 32 Cuda cores. The partitioning simplifies the design and scheduling logic, saving area and power, and reduces computation latency.

Each SMM is significantly smaller than the elements used in the prior Kepler chips, while still delivering about 90% of their performance. The smaller area enables Nvidia to pack more SMMs per GPU, helping Nvidia deliver twice the power efficiency in a year's time without the need to go beyond 28 nm.

Thus he is confirming my previous thoughts that the improved efficiency is a consequence of the new architecture and especially of the reduction in size of the SM processors. :D
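
For what it's worth, the quoted numbers check out on the back of an envelope (taking 192 CUDA cores per Kepler SMX, 128 per Maxwell SMM, and the ~90% per-SM figure from the quote):

```python
# Sanity check of the quoted SMM figures; all inputs come from the quote above.
kepler_cores_per_smx = 192
maxwell_cores_per_smm = 128
smm_relative_performance = 0.90   # "about 90% of their performance"

per_core_gain = (smm_relative_performance * kepler_cores_per_smx
                 / maxwell_cores_per_smm)
print(f"performance per CUDA core vs Kepler: {per_core_gain:.2f}x")  # ~1.35x
```

0.90 x 192 / 128 works out to about 1.35, i.e. the "35% more performance per Cuda core" Peddie cites.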
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Yeah, they've basically changed from TDP to SDP, which is fine if your focus is gaming. It may just be the early drivers too, because they could certainly cap the TDP for CUDA/OpenCL if they wanted to. They still list a 500W power supply requirement for the 970.

As for power efficiency, they cranked up the caches, lowered memory bandwidth requirements with better texture compression (Tonga did that too), and added a new type of AA (MFAA). Combined, it stacks up well, but the kicker is the price. Since when did Nvidia price something reasonably? ;)
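
For anyone wondering what "better texture compression" buys in bandwidth terms, here's a generic sketch of block-based delta color compression (invented parameters; neither Maxwell's nor Tonga's actual scheme is public): keep one reference pixel per block plus small per-pixel deltas, and fall back to raw storage when the deltas don't fit.

```python
# Generic delta color compression sketch with invented parameters.
# This is NOT Nvidia's or AMD's real scheme; it only illustrates why
# smooth image regions compress well and noisy ones don't.

def compressed_block_bits(block, delta_bits=4, channel_bits=8, channels=4):
    """Bits needed to store a block as one reference pixel + per-pixel deltas."""
    ref = block[0]
    raw_bits = len(block) * channels * channel_bits
    max_delta = max(abs(c - r) for px in block for c, r in zip(px, ref))
    if max_delta < (1 << (delta_bits - 1)):      # deltas fit -> compress
        return channels * channel_bits + (len(block) - 1) * channels * delta_bits
    return raw_bits                              # otherwise store the block raw

# An 8-pixel block from a smooth gradient (RGBA), typical of real frames:
smooth = [(100 + i, 50 + i, 200, 255) for i in range(8)]
# An 8-pixel block of noise, which won't compress:
noisy = [(i * 37 % 256, i * 91 % 256, i * 53 % 256, 255) for i in range(8)]

raw = 8 * 4 * 8
print(f"smooth block: {compressed_block_bits(smooth)} / {raw} bits")   # 144 / 256
print(f"noisy block:  {compressed_block_bits(noisy)} / {raw} bits")    # 256 / 256
```

Fewer bits per block written to and read from the framebuffer means less memory bandwidth for the same rendered frame, which is where the efficiency gain shows up.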
 


Beat me to it, haha. Yeah the 970 is the real winner here, just like the 670 was when the 680 was presented to the world.

For AMD, I expect the 390X to be nice: hopefully a winner.

In any case, I'm still waiting for AMD to tell me all about the new socket and how DDR4 is the best thing ever, haha.

Cheers!
 