AMD CPUs, SoC Rumors and Speculations Temp. thread 2

Status
Not open for further replies.
I wouldn't say that GCN was a bad decision; it's great at compute, even if it's a bit worse in games (in efficiency and other metrics).
BD was a bad decision, a very bad one, and AMD could be in a much better position had they evolved Phenom or redesigned it. They created a specialized chip for a broad general-purpose market, where general-purpose designs (Phenom, Core 2, i3/i5/i7) are way better.

But what they really, deeply failed at was marketing. Even as a mediocre overall CPU, the FX series was awesome for threaded work given its price. Games and most software run almost as well on an AMD CPU as on an Intel one. Also, their chips always offered more value (cheaper for the same performance), so they should have promoted them massively in price-sensitive countries like Brazil, India and China, eventually turning those into big markets for them.
Nvidia got a lot of people using CUDA because they created a sales force to help people use it. Had AMD done that with OpenCL or other compute libraries, they could have taken the throne.

What really saddens me is that AMD did have good products but ultimately failed to let people know it. They will have the opportunity of a fresh start with Zen, but from the ground up instead of building on previous successes and brands.
 


K10 used the STARS cores. Phenom II (Deneb) was considered K10.5 by many here because it was not only a die shrink but also a change to a lot of the core's back end, and was superior. Llano is based on K10.5, as is anything after Deneb, but they are all still based on the STARS core.
 


What doom and gloom? I only see people here stating facts and reacting to hype.
 


You made me doubt myself, so I'll lay it out how I remember it:

K10 was Phenom I at 65nm (Agena?) all the way through Phenom II at 45nm (Callisto the first and Thuban the last). K10.5 was the name usually given to Llano's uArch, the last K10-based one (I remember it as code-named "STARS", but I might be wrong on that), but I never read it applied to Deneb or even Thuban; they were just plain "K10".

But in any case, I don't think it's worth discussing further. The point I was making still stands in my mind: K10 was not a bad uArch; it was a good evolutionary step over the K8/K9 uArch (it *was* faster and more efficient, unlike BD relative to K10). AMD's problem was that Conroe caught them with their pants down, since it was so much better. Even with Intel's "ugly" dual-dual-core approach, it was better all around.

@salgado18:

I don't think GCN is a bad uArch. For the purposes AMD originally intended, it's good. It's a jack of all trades but master of none: it has good-to-decent performance and decent efficiency, and it fits well everywhere they wanted to slap it while abiding by HSA. If you trimmed the fat from it, focused it on compute, and created a non-HSA-compliant subset for the mid-range, efficiency and gaming performance would be a different story for AMD. Remember how much crap nVidia got over Fermi's efficiency and performance while AMD was still on VLIW4? That's the point I'm trying to make.

Cheers!

EDIT: Typos.
 
Yeah, [EDITED: GCN] isn't a bad architecture, it's just been around too long without major revisions. And it's starting to show, as even HBM couldn't overcome the relative lack of processing power compared to NVIDIA. Of course, new architectures require cash, which AMD is very short on at the moment.

Conspiracy theory: AMD knew they'd be stuck with GCN for many years, and came up with Mantle, which I noted very early on was joined at the hip with GCN, to try to overcome their architectural deficiencies. In short, Mantle was an attempt to stretch the lifespan of the GCN arch via software, so AMD wouldn't have to spend R&D funds they simply don't have on its eventual replacement.
 
Yuka, http://www.tomshardware.com/reviews/spider-weaves-web,1728-21.html

The first generation of K10 had a lot of issues to start with. Low clock speeds made it a hard choice for people to jump to it from an Athlon 64 X2, since they would lose performance in the majority of apps (remember, back then more than 2 cores were not always better in most real-world apps), and it was just slower than Intel's equivalent.

The K10 core design was not bad, but the first iteration was poorly implemented.
 


Have you read the reviews on Nano yet? :)

http://www.tomshardware.co.uk/amd-radeon-r9-nano,review-33301-12.html

The Nano matches Maxwell in perf/W by using more aggressive PowerTune settings and a lower clock speed. A good part of AMD's power problem is that GCN doesn't like speeds above 1GHz, which is where they've positioned most of their cards.

Really, AMD should be using wider GCN chips at lower clock speeds against nVidia to hit the GCN perf/W sweet spot. Instead they've tried to cut costs by pushing smaller GPUs to their absolute limit, which increases power consumption massively. Like many said, it would have been nice to see Tonga as the replacement for Pitcairn, for example.
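The wider-and-slower argument has a simple physical basis: dynamic power scales roughly as C·V²·f, and voltage has to rise with frequency near the top of the DVFS curve, so power grows super-linearly with clock while throughput grows only linearly. A toy sketch of the tradeoff (every constant here is illustrative, not a measured GCN figure):

```python
# Toy model: dynamic power ~ units * V^2 * f, with V rising roughly
# linearly with f near the top of the DVFS curve.
# All constants are illustrative, not real GCN measurements.

def gpu_power(units, freq_ghz, v_base=0.9, v_per_ghz=0.3):
    """Relative dynamic power for `units` shader units at `freq_ghz`."""
    v = v_base + v_per_ghz * freq_ghz  # crude V(f) model
    return units * (v ** 2) * freq_ghz

def gpu_perf(units, freq_ghz):
    """Throughput scales with units * clock (perfect scaling assumed)."""
    return units * freq_ghz

# Narrow-and-fast vs wide-and-slow, same nominal throughput:
configs = {"narrow/fast": (2048, 1.05), "wide/slow": (4096, 0.525)}

for name, (units, f) in configs.items():
    perf, watt = gpu_perf(units, f), gpu_power(units, f)
    print(f"{name}: perf={perf:.0f}, power={watt:.0f}, perf/W={perf/watt:.2f}")
```

Under this model both configurations deliver identical throughput, but the wide/slow one draws noticeably less power, which is exactly the sweet spot the post is arguing for.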

As for Mantle, I think AMD wanted to show off just what GCN can do; its async compute capability, for example, was obviously well ahead of its time (with the 290X punching well above its weight in Ashes). I agree that it's been stretched too far though. My main gripe is that while there have been new revisions, they've only covered part of the line-up; we don't actually know what a mid-range GCN 1.2 card would be like because they never made one.

At least with the new process tech they *have* to launch a full volley of new GPUs (surely?!).
 


It seems to be pretty spot on with what I was assuming: better than the 970 Mini, matching the 980 but nothing more. So if NVIDIA decides to launch a GTX 980 Mini, it would pretty much be a back-and-forth war.

So unless you plan to build in a mITX case that can only support short cards, it is an OK option, but not for normal desktops TBH.
 


Notice the 1GHz clock disadvantage? Also, like I said, Phenom I had a lot of issues because 65nm sucked, not because of the uArch. I have respect for K10, since it's the culmination of K7's original idea.

And I can't quote more than 1 post in a row, so I'll edit to add the second quote, haha.

EDIT:



Like I said in the article, it's a nice card, just not $650 nice. At $500 we're talking business. The value proposition is very poor no matter how you slice it. Still, it's nice to see AMD trying to do something that shows off GCN+HBM's strengths.

Also, one caveat: the power consumption and overall efficiency cannot be attributed to GCN alone. Remember HBM draws 25% less juice than the GDDR5 slapped onto the GTX 970; at least I think it was a 25% difference overall. Not taking the shine away, but keep that in mind.
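A back-of-envelope way to weigh that caveat (the board and memory wattages below are assumed for illustration; only the 25% saving comes from the post):

```python
# Back-of-envelope: how much board power does the memory swap alone save?
# board_power_w and gddr5_power_w are assumptions, not measured values.

board_power_w   = 175.0   # assumed total board power of a Nano-class card
gddr5_power_w   = 30.0    # assumed power of the GDDR5 subsystem it replaces
hbm_saving_frac = 0.25    # "HBM draws 25% less juice than GDDR5"

hbm_power_w = gddr5_power_w * (1 - hbm_saving_frac)
saving_w    = gddr5_power_w - hbm_power_w

print(f"HBM subsystem: {hbm_power_w:.1f} W "
      f"(saves {saving_w:.1f} W, {saving_w / board_power_w:.1%} of board power)")
```

Under those assumptions the memory swap saves only a few watts, a small slice of total board power, which suggests the rest of the Nano's efficiency edge has to come from elsewhere (clocks, PowerTune).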

Cheers!
 


Well, if we're talking value, the Fury Pro is a really nice buy, sitting close to the 980 Ti in performance while sitting closer to the 980 in price. Its power consumption isn't too bad either.

The Nano is a niche product I think mainly to prove a point. The standard Fury is the value option and imo it's pretty good. Fury X is overpriced but I guess that's down to the water cooler more than anything else.
 
Ah, that's why I was wondering how the Nano did so well in performance per watt: it's down to HBM, not to having an architecture on par with Maxwell in FPS per watt. Remember, guys, Nvidia will have HBM2 next year on top of their more efficient design, so AMD needs something to show next year as well; I truly hope it's not just a die shrink with HBM2.

 


There is a separate thread about GPUs, but since your comments are about the architecture I will reply here, because it also fits APUs/SoCs.

First, you are comparing (GCN+HBM) vs (Maxwell+GDDR5). Once Nvidia moves to HBM2 next year, the gap will increase again.

Second, AMD increased efficiency by reducing clocks, which is pretty easy, not by improving the microarchitecture, as Nvidia did.

Third, GCN is an inefficient gaming architecture, period. AMD cannot reduce clocks and compensate for the performance loss by using more GCN cores or wider cores, as you suggest. The dies are already big: Fiji is 600mm², which must be near TSMC's limit (650mm², if I recall correctly).
 
(i)
The firm behind the 20% purchase of AMD shares is probably preparing the sale of some division/part of AMD. That 20% accounts for about $300 million. For context: Nvidia spends more than $1200 million on annual R&D. So those $300 million received by AMD are welcome, but will not solve AMD's big problems.
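A quick sanity check of those two figures (both taken from the post itself) shows why the cash injection doesn't change the picture much:

```python
# Sanity check on the numbers in the post above.
stake_frac = 0.20        # share of AMD purchased
stake_usd  = 300e6       # price paid for that stake, per the post
nvidia_rd  = 1200e6      # Nvidia's annual R&D spend, per the post

# A 20% stake for $300M implies a valuation of stake / fraction.
implied_valuation = stake_usd / stake_frac
print(f"Implied AMD valuation: ${implied_valuation / 1e9:.1f}B")
print(f"Cash raised covers {stake_usd / nvidia_rd:.0%} of one year "
      f"of Nvidia-scale R&D")
```

In other words, the whole stake buys only a fraction of a single year of a competitor's R&D budget.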

(ii)
AMD's reorganization will be followed by further reorganizations and reductions in the coming months. The reason? AMD cannot be kept afloat at its current size.

(iii)
As expected, Zen is having problems. HPC customers choosing Intel and POWER9 CPUs was a hint:

http://www.digitimes.com/news/a20150910PD202.html

I guess it is a problem with maximum frequencies. Samsung's 14nm node is optimized for mobile, so GloFo is probably having problems tweaking it for >3GHz.

I suppose it explains why competitors chose TSMC 16FF+.
 


I know about the clock speed disadvantage, but that was also part of the issue. They couldn't get much higher than 3GHz on 65nm with that uArch. As I said, Phenom II corrected a lot, since it had a better process and improved the base K10.

Again, K10 was the first stumble. Imagine the hype that was built up for the first monolithic quad-core with an IMC. Now take that massive hype and deliver a product that failed to meet expectations, didn't clock high enough to even convince AMD owners to move on, and then had a firmware patch (the TLB erratum fix) that killed performance even more, and on some boards it was not optional. That is why I say it was a stumble, not a flop.
 

I've read people saying that Nvidia removed hardware schedulers to get Maxwell's efficiency up so high, but will this be Maxwell's Achilles heel in DX12? Does the boost GCN will get from DX12 not also mean it will get more efficient, as in doing more work for the same power?
 
@juan, agreed, HBM plays a part, but Fiji as a GPU does have some nice advances too (OK, these are technically in Tonga as well). The big reason the Nano is as efficient as it is comes down to AMD finally getting PowerTune to do its job (paraphrasing the TH article). This is essentially the same thing Nvidia did with Maxwell: the underlying design isn't that much more efficient than Kepler; it's the active power management that helps prevent energy being wasted.

This is where I think AMD gets poor press. If Nvidia does something, it's obviously great design and advanced tech; if AMD does the same thing, it's down to luck, or the competition will do it better. There's a bit more to the Nano than just a clock speed drop.
 


I can't really argue with your first point ( i ), but I can say this firm doesn't have the final say on what to do. They still have to convince the rest of the Board to go ahead with any big sale or spin-off. They will have leverage, since they're bringing the money in, but I just hope that is not the case.

And the third point ( iii ) is sad, really. Being "reportedly" ready for Q4 '16 is very late to the game. That means Q1 or Q2 '17 before you can actually buy the chip, right? Also, I don't think AMD has many options here. TSMC doesn't have much spare capacity as it stands, from what I remember, and there are simply no other fabs out there that come even close to Intel, so they have to bet on GloFo getting 14nm working soon.

Cheers!
 


This is why I laugh when people read articles about these companies and their processes. I said it before and I will say it again: if Intel has trouble with a process node, everyone will run into problems. Intel has been doing this long enough that they are typically the ones to watch for how a process will go.

It sucks, because that is going to be too late to do any sort of damage to Intel, unless Zen is just that good, which I won't hold my breath for.
 


Expected schedule
Zen design: finished
Zen tape-out: finished
Zen engineering samples: April 2016
Zen production candidate chips: July 2016
Zen production ready chips: September 2016
Zen launch: October 2016

The launch is for server CPUs only, with PC CPUs coming later and Zen-based APUs in 2017.
 