AMD CPU speculation... and expert conjecture

Well... An MSRP of USD$700 sounds about right in terms of where it will sit. As much as it hurts to think about it, so be it.

I wonder how much it will go up when retailers say "but there are no cards!". I'm always wearing my tinfoil hat for that type of comment from them, haha.

Cheers!
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


The slides only say what will happen: high-performance dGPUs will be replaced by top-end SoCs. The why is explained in the APUsilicon article: 200--300W SoCs are much faster and more efficient.

http://apusilicon.com/high-performance-amd-apu/

AMD is not giving anything to Nvidia. In fact, Nvidia engineers will do the same as AMD's:

 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


http://blog.pgaddict.com/posts/postgresql-performance-with-gcc-clang-and-icc

Did you miss this one? GCC 4.9.1 is faster than ICC. Older versions of GCC were basically lacking a lot of features that the proprietary compilers had; that has been fixed in newer versions. The problem with old GCC is that it simply wasn't optimizing code the way the new releases do. That's (mostly) fixed, and GCC has auto-vectorization and goodies like that now.

Meanwhile, some applications do run better on ICC: http://stackoverflow.com/questions/1733627/anyone-here-has-benchmarked-intel-c-compiler-and-gcc

So it's a mixed bag; it's not so clear cut. But regardless of which compiler is better overall, it still shows that each one is better at some tasks on different CPUs.

AMD basically abandoned Open64 (it hasn't been updated in almost 2 years) and they just contribute to GCC now. All of this has happened since 2013.

Compilers do far more than just decide which instructions get executed. With GCC's command-line switches I have the ability to tell the compiler to do things like optimize for specific cache sizes, etc. And as Juan's link showed, compiler tuning can decide how much of the theoretical maximum IPC you actually get, by reducing cache misses and the like. Lots of options here:
https://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html
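
For instance, the kind of per-architecture tuning being talked about looks roughly like this (a toy sketch; the flag values are just made-up examples, not tuned recommendations):

```c
/* saxpy.c -- a toy loop that GCC can auto-vectorize and tune per CPU.
 *
 * Example compile lines (the flags exist in GCC; the values are illustrative):
 *   gcc -O3 -march=bdver2  -c saxpy.c      # schedule/vectorize for Piledriver
 *   gcc -O3 -march=haswell -c saxpy.c      # schedule/vectorize for Haswell
 *   gcc -O3 --param l1-cache-size=16 --param l2-cache-size=2048 -c saxpy.c
 *     (the cache-size params steer prefetching and blocking decisions)
 */
#include <stddef.h>

void saxpy(float a, const float *x, float *y, size_t n)
{
    /* With -O3 and a concrete -march=, GCC emits different SSE/AVX/FMA
     * forms of this loop depending on the target architecture. */
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

Same C source, different binaries; which binary you benchmark on which CPU changes the numbers.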

GCC itself has seen improvements of 15% or more simply by going from version 4.1 to 4.9. There's far more to it than just choosing which instructions get run. ICC, back when it had a massive lead over the competition in nearly everything, and in the places where it still does, doesn't get those gains just from running different instructions. It does so by creating code that works best with the target architecture.

Your benchmarks are outdated. GCC has changed a lot: https://www.p8952.info/ruby/2014/12/12/benchmarking-ruby-with-gcc-and-clang.html
The difference between GCC 4.4 and 4.9 is around 70%! That's massive! Like I said, GCC has come an extremely long way since the older versions, and the older versions were definitely missing optimization features.

LTO alone is a big change that earlier GCC versions completely lacked while other compilers had it.
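
To make the LTO point concrete, here is a minimal sketch of what it looks like with GCC (hypothetical file names and a toy function, just for illustration):

```c
/* Link-time optimization sketch: two tiny translation units.
 *
 *   gcc -O2 -flto -c util.c -o util.o
 *   gcc -O2 -flto -c main.c -o main.o
 *   gcc -O2 -flto util.o main.o -o app
 *
 * With -flto the optimizer sees both files at link time, so square()
 * can be inlined into main() even though it lives in another file.
 * GCC before 4.5 had no -flto at all. */

/* util.c */
int square(int x) { return x * x; }

/* main.c */
extern int square(int x);
int main(void) { return square(7); }
```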

Are you saying AMD is going to give up in the dGPU market completely or just for HPC? I can see them dropping it for HPC but for HEDT and semi-custom I'm not sure it makes that much sense for a while still. When we hit 7nm an x86 CPU is going to be so small that it's barely going to contribute to the overall TDP and die area anyways.

I can see dGPU going away when the x86 core is small enough that it barely takes up die space. But that's not going to be for a while. Even a Zen core on 14nm is rumored at a 95W TDP maximum with ~5 to ~8 mm² per core (not including caches and such). The Carrizo core is a tiny part of the die, but look:
[Image: Carrizo die shot]


Even with the CPU core being so small, there's a ton of other stuff taking up space on Carrizo. That's a ~245mm^2 die. Less than half is GPU, the rest is CPU and associated logic.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I cannot speak for others, but my estimate of the Zen core size (5-6 mm²) includes the L1 caches.
 


SQL query execution? REALLY? Come on man, you can do better than that. It's almost like you went out of your way to cherry-pick benchmarks to make your point.

Compilers do far more than just decide which instructions get executed. With GCC's command-line switches I have the ability to tell the compiler to do things like optimize for specific cache sizes, etc. And as Juan's link showed, compiler tuning can decide how much of the theoretical maximum IPC you actually get, by reducing cache misses and the like. Lots of options here:
https://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html

That's nothing new. Back in the Core 2 days, I remember doing manual cache management because cores 0/2 and 1/3 had access to the same cache. Now we're just having the compiler switches handle it instead, because no one likes coding that low level anymore.
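
For what it's worth, the modern way to do that kind of manual placement is just to pin threads; here's a minimal Linux-only sketch (the core number is made up, and which cores actually share a cache depends on the CPU):

```c
/* pin.c -- pin the calling thread to one logical core (Linux).
 * Build with: gcc -O2 pin.c -o pin -pthread */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static int pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    /* Restrict this thread to the given core; which cores share an
     * L2/L3 slice is machine-specific, so core 0 is just an example. */
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(void)
{
    if (pin_to_core(0) != 0)
        fprintf(stderr, "pthread_setaffinity_np failed\n");
    else
        printf("running pinned to core 0\n");
    return 0;
}
```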

And that's the point. If you don't optimize for the CPU arch, performance is going to SUCK across the board. If you have the compiler fine-tune for each specific architecture, it's a fair comparison, since you're comparing "best case" to "best case". What code is needed to get there is irrelevant.

But hey, I can write the SAME EXACT assembly code that will execute on both AMD and Intel, and I can guarantee the performance on one of them will be unexpectedly low, because of some CPU-architecture-specific effect that I didn't account for. Again, that's the entire purpose of having compilers in the first place.

Are you saying AMD is going to give up in the dGPU market completely or just for HPC? I can see them dropping it for HPC but for HEDT and semi-custom I'm not sure it makes that much sense for a while still. When we hit 7nm an x86 CPU is going to be so small that it's barely going to contribute to the overall TDP and die area anyways.

Except dGPU is a comparatively low-sales, high-design-cost business, and AMD has to compete directly against NVIDIA there. Or, they can go the SoC route, compete against Qualcomm and co., and maybe make a boatload because of mobile growth. For a company with limited financial resources, which approach makes more sense?
 

8350rocks

Distinguished


Except mobile is saturated, and they look poised to put Radeon graphics on MediaTek chips.

That kind of puts holes in that theory. Also, dGPU is not terribly low sales, and the margins are typically high. More people buy a new dGPU upgrade than buy a new PC... that really says something, does it not?
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Compete with a $12B company (Nvidia) or a $115B company (Qualcomm)? Nvidia, obviously.
Remember that Nvidia was trying to compete with Qualcomm and lost, having to fork off into the car market instead of phones/tablets.
 

8350rocks

Distinguished


Yes, Nvidia has been trying significantly longer, and every year their "next best thing" comes out and turns out to be a polished horse apple compared to the "next best thing" from Qualcomm and Samsung. Then they go back to putting it into their own proprietary device for that year, with one design win or fewer aside from that; all the while, that single design win (when they do get one) always seems to be a pity bid to get someone to take it, and it is never a significant design that does any volume....

However, AMD should enter the mobile market...right? right??

Yeah... not a good plan. Qualcomm has been tinkering with ARM for a ridiculous amount of time and has lots of really good IP for ARM and mobile in particular, like proprietary modem technologies, to name just one arena where they have a significant IP advantage.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Data shows otherwise, with dGPU sales in free fall:
[Image: JPR GPU shipment chart]

High margins on a small market don't sustain a company in the long run. That is why both Nvidia and AMD are migrating to new markets based on integrated graphics.



Qualcomm has announced it will make ARM microserver SoCs.
Nvidia has stated it will not enter the microserver market.
AMD makes ARM microserver SoCs.

In general, an analysis of AMD roadmaps and strategies shows that AMD will compete with Qualcomm, Broadcom, Applied Micro, Cavium, Intel, Nvidia...
 

genz

Distinguished


Hey, you don't have to beat Qualcomm, you only have to make a profit. Don't let the size of the companies make you think that it's impossible to profit in the market. I mean, it's not like AMD, a $2B company, hasn't been directly battling Intel, a $146B company, for the last three decades. Hell, ARM came from literally nowhere in that time.

Most important is making the best use of your assets. AMD has an efficiency disadvantage against Nvidia, but no desire to be as anti-competitive as Nvidia and make only SoCs with purely their own tech. If they can pull off an iGPU with the efficiency needed, it doesn't matter who they're competing with; they stand to gain in all realms.

What everyone here misses is that MediaTek is pretty much the AMD of mobile, and it doesn't need design wins with Samsung etc., because it occupies a space that pretty much nothing but legacy products do. MediaTek makes REALLY cheap chips with decent performance and low margins. AMD could jump into that low-priced device market and make a shedload competing with MediaTek, but they could make even more if they partner with MediaTek and take on an otherwise empty budget ARM market.
 
If AMD wants to enter mobile territory, they'll have to do it under Intel's shadow. All of the heavy lifting (talking with Google, making Apple use x86, etc.) has been done by Intel so far. That's a LOT of money and time invested in securing deals. Look how much they've been bleeding money with their low-power division. That's a testament to how badly Intel wants that piece of the pie, especially because power-hungry designs are not the cool thing anymore.

So, once Intel secures a place in the race, AMD will follow suit somehow. For now, they're just trying desperately not to fall too far behind, I'd say. In any case, AMD won't dive into that market on their own; that's for sure. I'm still wondering how Nvidia has managed to put 4 failed (from a sales perspective) SoCs into the market and still stay afloat and keep wanting to pursue it (unless they've bailed out).

Cheers!
 

8350rocks

Distinguished


It is falling from a ridiculously crazy number...sure...look at the graph, it has been falling for a few years now...and yet...there were still 400 million dGPUs purchased last year...

Hmm... 400 million units x 35% market share (historical average) = 140 million units, x 30% margins...

Sure seems like a raw deal there...they should totally not make 30% margins on 140 mil units this year...
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


First, AMD had less than 20% market share during 2014. Second, dGPU is a very small-margin business:

400 million x 17% = 68 million units
68 million x 5% margin = 3.4 million

which is one order of magnitude less than your numbers. However, we are not discussing the dGPU market of yesterday. We are discussing how irrelevant the dGPU market of the future will be.

Do you know why ATI merged with AMD? Because ATI knew that dGPU business wouldn't last forever...

ATI's goal is to continue to grow at a rate of 20% per year, but maintaining that growth rate becomes increasingly more difficult as an independent GPU manufacturer. The AMD acquisition will give ATI the ability to compete in areas that it hasn't before, while also giving the company the stable footing it needs to maintain aggressive growth.

The AMD/ATI acquisition doesn’t make a whole lot of sense on the discrete graphics side if you view the evolution of PC graphics as something that will continue to keep the CPU and the GPU separate. If you look at things from another angle, one that isn’t too far fetched we might add, the acquisition is extremely important.

Some game developers have been predicting for quite some time that CPUs and GPUs were on this crash course and would eventually be merged into a single device. The idea is that GPUs strive, with each generation, to become more general purpose and more programmable; in essence, with each GPU generation ATI and NVIDIA take one more step to being CPU manufacturers. Obviously the GPU is still geared towards running 3D games rather than Microsoft Word, but the idea is that at some point, the GPU will become general purpose enough that it may start encroaching into the territory of the CPU makers or better yet, it may become general purpose enough that AMD and Intel want to make their own.

It’s tough to say if and when this convergence between the CPU and GPU would happen, but if it did and you were in ATI’s position, you’d probably want to be allied with a CPU maker in order to have some hope of staying alive. The 3D revolution killed off basically all giants in the graphics industry and spawned new ones, two of which we’re talking about today. What ATI is hoping to gain from this acquisition is protection from being killed off if the CPU and GPU do go through a merger of sorts.

The "Smooth and efficient transition from dGPU to SoC" mentioned by AMD during recent meeting is something has been planned many years ago...
 

anxiousinfusion

Distinguished
Jul 1, 2011
1,035
0
19,360


Did you just ninja-edit the tail end of that graph?

[Image: graphics shipments chart]


EDIT: These were probably taken at different points throughout 2014 so ignore.
 

griptwister

Distinguished
Oct 7, 2012
1,437
0
19,460
My prediction is that the R9 390X 4GB (air-cooled) will be around $700, while the liquid-cooled option will have a $200 premium because of the 8GB of VRAM and the cooling unit.

GPUs are actually really exciting right now. I just hope it's not a flop. Lol, I really hope AMD is lowballing the benchmarks, because if the performance is higher, Nvidia is kind of screwed. And if what Juan said is true, they gained 10% of the market since that time? If my memory serves me correctly. I don't know, guys. AMD can pull off an amazing win.

Also, how about those CPUs? Intel raised the price on their 5820K. AMD lowered their price. Let's hope they have a trick up their sleeve. And Excavator, any news on that?
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


They are two different figures from JPR, with different colors. They look like snapshots taken at different times. Judging by the URLs of the images, yours seems to be the more recent one. Note, however, that the changes are due to a small increase in notebook sales. The desktop GPU numbers don't change and are in free fall in both.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Hum, I have seen dozens of third-party designs with Nvidia SoCs. And the latest Tegra SoCs are winning industry awards:

As the Best Mobile Processor, we chose Nvidia’s Tegra K1-64. Currently shipping in the Nexus 9 tablet, the K1-64 was one of the first merchant ARMv8 processors. Unlike competing products based on Cortex-A53 or Cortex-A57, it differentiates using an Nvidia-designed CPU known as Denver. Initial benchmarks on the Nexus 9 tablet show that the 2.3GHz CPU delivers about 35% better performance than a 1.9GHz Cortex-A57 when both are running 64-bit code. In fact, the K1-64 delivers better single-thread performance than any other mobile processor, including the Apple A8.

Analysts’ Choice Winners for 2014
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780
I think dGPU sales slowing down has more to do with most PC games being Xbox 360 ports designed to run on something around an X1800 or so, and customers not upgrading from 1080p.

Also, gamerk, you make a good point in regards to optimizing for architectures: writing the same assembly program and getting different performance. This sounds sort of crazy, but you can't measure something's IPC with software. You have to do it theoretically, by looking at the architecture and deciding what it's capable of under optimal conditions.

At the very least, I think we can agree that IPC is a completely useless metric when discussing CPU performance. So far, we've established:

1. Hand-written assembly programs (basically a list of instructions) won't perform the same on different CPUs.
2. Compilers can have large differences in performance (GCC 4.9 can be 70% faster than GCC 4.1)
3. Software will not be equally fair to different CPUs.

So, I think we have established that IPC is a useless metric, and of course the obvious solution is to look at what programs you will actually run and see how the CPU performs with that software. That's all you have to do. Extrapolating performance via IPC is a complete waste of time.

However I noticed that we have basically disproved someone's outlandish claims of IPC being some sort of superior metric to measure all CPU performance, and that person has already given up discussing IPC (because he lacks so much self-awareness that he can't even realize he is wrong) and is moving goal posts to saying that dGPU is going to disappear. All of this after basically losing completely to arguments that IPC measured by random pieces of software is a useless metric.

Also, that result was one of the first ones I got searching for "gcc icc benchmark". They are really hard to come by; it seems like ICC is really expensive and the people who pay for it don't want to bother running benchmarks. At the very least, the fact still stands that changing compilers can completely change a CPU's IPC if you treat IPC the way that Juan does.

Anyways, I'm sure he will continue to troll the forum by causing us to discuss things that are a complete waste of time. And when he is proven wrong, he will change topics again and speak with his ignorant, over-confident attitude that has existed since he completely blew his "Kaveri will be equal to 2500k here is my math I got them from a science paper xD!" predictions.

If anyone who is lurking wants a good laugh, he's in the SemiAccurate forums talking about how IPC is the end-all, be-all of CPU performance, and that his estimates, which I have yet to see be correct at all, leave Zen significantly behind a regular ARM chip.

But he won't even address this. He'll just start trolling the forum into talking about dGPU now. He's a troll, that's all there is to it. I wonder why he's not banned.

EDIT: So can we please stop derailing this thread with predictions about dGPU? I feel Juan still hasn't made his case properly and he is trying to completely ignore the holes in his argument and try and divert attention to the fact he may be wrong.

<Moderator Warning: Let's watch the personal digs and keep the conversation civil. The warning posted 2 pages back still stands>
 

jdwii

Splendid


Like I said before, it's using HBM memory; if it was even close I would call them pathetic. I'd like to see Nvidia use the same type of memory soon and then compare, but I suspect Nvidia won't have anything like that for a while.
 