AMD CPUs, SoC Rumors and Speculations Temp. thread 2

@yuka that is true- AMD's latest top mobile APUs are pretty solid imo, but good luck finding them in anything other than bargain-basement laptops 🙁 Something like a nice thin Dell XPS with a top Carrizo in it would definitely interest me.
 
Facebook wants to use them APUs? That Mark be crazy I tell you! Haha.

Well, thanks for that de5_Roy; at least that gives me some hope that Dell or HP would have the marbles to try and do something nice with that. Who knows? Maybe the next-gen consoles will come out before 2017, sans Nintendo's. Uhm... what if Nintendo wants to use this as well? Is there any news on what Nintendo is doing with the NX?

Cheers!
 

AMD will likely be the SoC supplier, but don't expect anything close to Xbox One-level raw performance or configuration. From the rumors, I think Nintendo intends to sell on content alone, like it tried with the Wii U.
This is what I said when I read about the rumor, but the later info pointed towards a weaker SoC.
http://www.tomsguide.com/forum/id-2712614/nintendo-console-win.html
 
Well, now that Mr. Iwata is gone, I don't know what Nintendo will do. Project NX is under someone else's leadership and, from what I know, it has a "white flag" (free rein) to come up with anything. I don't think they have an unlimited budget, but one thing is very possible: AMD is going to supply an SoC for them. I really, really hope Nintendo is willing to use this or at least knows about it. It would be interesting to see Nintendo get back into the console business on par with Sony and MS. I think this is their best shot at doing it.

In any case, I really like the speculative talk about Facebook using this. I wonder what they'll do with it if that is the case. Data crunching with an APU sounds interesting. Especially compressed text searches in real time.

Cheers!
 
Years ago I saw the prototype AMD was using to design their HBM controller. It was 2 Xilinx FPGA slices around a single HBM stack. Xilinx doesn't just give bare die to anyone, so they were on really good terms. That patent shows they learned a lot more from that exercise than expected.

They're using the FPGA logic to build a really advanced, reconfigurable memory controller: offloading compression/decompression, encryption, endian translation, sorting, wear leveling, ECC, etc. into the controller itself. Cool stuff.
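Purely to illustrate the concept (this is a hypothetical sketch and does not reflect AMD's patent or any real hardware interface), the appeal of a near-memory offload engine is that the host only queues small command descriptors while the controller transforms the data right next to the HBM stack, so the bulk data never has to cross the host bus just to be compressed, encrypted or scrubbed:

```python
# Hypothetical sketch of a near-memory offload command interface.
# All names and operations are illustrative only; they do not reflect
# AMD's patent or any shipping hardware.
from dataclasses import dataclass
from enum import Enum, auto

class OffloadOp(Enum):
    COMPRESS = auto()
    DECOMPRESS = auto()
    ENCRYPT = auto()
    ENDIAN_SWAP = auto()
    SORT = auto()
    ECC_SCRUB = auto()

@dataclass
class OffloadCommand:
    op: OffloadOp
    src_addr: int   # address within the HBM stack
    dst_addr: int   # destination (may equal src_addr for in-place work)
    length: int     # bytes to process

def submit(queue: list, cmd: OffloadCommand) -> None:
    """Host side: just enqueue a small descriptor. The reconfigurable
    controller does the heavy lifting next to the memory, so the bulk
    data never crosses the host bus only to be transformed."""
    queue.append(cmd)

# Example: ask the controller to decompress a 64 KiB block in place.
commands: list = []
submit(commands, OffloadCommand(OffloadOp.DECOMPRESS, 0x10000000, 0x10000000, 64 * 1024))
```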
 


They would be foolish to go the weaker route again. With that time frame you're looking at the 16nm node, so there is no reason to skimp. Even a modest improvement over the PS4 would be practically free.
 

Unfortunately, that's exactly what Nintendo seemingly doesn't understand. I think it's trying to save money (given its current situation) during the R&D phase, as well as trying to sell the console at a profit using the gains from the 14-16nm process.
But I hope they at least match the PS4 in GPU specs.
 


You can't have the same companies doing the same thing; it doesn't work out that way. Nintendo tries to stand out: if they made a powerful console with a normal controller, sales would be worse than the N64 or GC. I've been following Nintendo longer than anything in the PC space, and Nintendo hasn't had the hardcore fanbase since the PS1. Of course this isn't just about how many TFLOPS a console can produce; it's about Nintendo's horrible treatment of 3rd parties. Now they make 3rd parties try to make their games stand out with odd controls, which takes extra time, and historically 3rd-party games don't sell well on Nintendo systems, so why should 3rd parties try so hard?

I really think the NX will probably bring the portable market and console market together. To be blunt, I don't expect AMD to make much on the system, and I think it could be a hit or a miss, nothing in between. If Nintendo doesn't get 3rd parties back, or show the gaming community (or, sadly, the casual market, which has now mostly moved on to smartphones) that they still matter beyond their own games, things might change dramatically in the company.

As for AMD making big $$$, forget it; they won't from Nintendo. Nintendo refuses to sell their consoles at a loss.
 


Paper looks great. Show me a product and I will believe it.

Considering that HMC (Intel's next memory standard) is to be 160GB/s (so I assume they will also be pushing a new interface instead of QPI/DMI), AMD needs something to compete with that.

http://wccftech.com/amd-r9-fury-unlocked-to-fury-x-new-cuinfo-tool/

This will be interesting. I am sure there will be plenty of people who are able to unlock, but the extra CUs cause artifacts, crashing, down-clocking or other issues.
 


I read through the thread- every Fury on there was unlockable to some extent. On most of them you could successfully unlock 4 of the 8 disabled CUs. One guy got the whole GPU working but had to downclock 50 MHz to keep it stable (8 extra CUs for a 50 MHz clock speed drop strikes me as a good trade). Looks like the Fiji yields must be fairly good for them to be unlocking so easily...
 
It remains to be seen, though. Unlocking could potentially mean leakage that damages the GPU core, or artifacts in games that later take more advantage of the available power.

I personally never would do that.

And I doubt the yields are that good. They have had inventory issues since day 1 which tells me that yields are low and that they could be binning the few somewhat bad ones.
 


Xbox One has something like that going on. I'd bet AMD took some "inspiration" from there.

And I would never, ever suggest anyone unlock a GPU or CPU without a fail-safe way to go back, or without making sure it can actually be unlocked. Although AMD does have the BIOS switch... I wonder if that will save someone from a bad flash.

Cheers!
 


There was one guy who had a bad flash; apparently you can get around it by slotting a second AMD GPU into the system and booting Windows in safe mode, then re-flashing the Fury card (as one of the Fury cards only has a single BIOS). So far no one has managed to brick a card permanently, though most people noted artifacts when fully unlocking the core.

My point is, I think AMD was being quite conservative in disabling 8 CUs, when most of the cards actually appear perfectly stable with only 4 disabled (of course it is the luck of the draw).
 


A true overclocker has got some impressive numbers. A 100% overclock on the memory even. Turning HBM1 into HBM2. 😉

http://forum.hwbot.org/showthread.php?p=400893#post400893

Yes LN2 is involved. 😍
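For a rough sense of why a 100% memory overclock gets called "turning HBM1 into HBM2", here's the back-of-envelope math, assuming the stock Fury X configuration of four 1024-bit stacks at 500 MHz (DDR):

```python
# Back-of-envelope HBM bandwidth math (assumed stock Fury X setup:
# 4 stacks, 1024-bit interface per stack, 500 MHz DDR = 1 Gbps per pin).
stacks = 4
bus_width_bits = 1024                    # per stack
gbps_per_pin = 0.5 * 2                   # 500 MHz, double data rate

per_stack_gbs = bus_width_bits * gbps_per_pin / 8   # 128 GB/s
total_gbs = per_stack_gbs * stacks                  # 512 GB/s
print(f"Stock HBM1:     {total_gbs:.0f} GB/s")

# A 100% memory overclock doubles the per-pin rate to ~2 Gbps, which is
# the rate HBM2 targets -- hence the "HBM1 into HBM2" joke.
print(f"With +100% OC:  {total_gbs * 2:.0f} GB/s")
```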
 
LN2 is nice, but it's like the Phenom IIs: 8 GHz on LN2, but on air/water not enough to beat an i5/i7.

I do see one thing with Zen: it will probably clock lower than BD. BD clocks higher thanks to its design and higher leakage; it is a high-clock/low-IPC design much like NetBurst was. But Zen will be a high-IPC part, so I think we can expect stock clocks closer to Phenom II/Intel's 8-core i7s and overclocks much like Intel's current lineup.

A theory of course.
 


The 2.0 version of the HMC spec increases BW to 480 GB/s. It is worth mentioning that HMC has been developed by ARM, IBM, Samsung, etc.; AMD is neither a developer nor an adopter. AMD is stuck with HBM, which is poor in every metric except cost.
 
Juan, HBM isn't 'poor' given the simple fact: HBM exists now, HMC is a way off (especially v2!).

There are two scenarios I see happening. 1: by the time HMC v2 is out we'll already be on HBM 3, which will invariably offer similar performance, or 2: if HMC is fundamentally superior, then when it's actually available AMD will switch to it after having had a good couple of years' use of HBM first. Either way HBM can't be viewed as a failure. I mean, if it's that bad, why is Nvidia using it in Pascal?
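To put rough numbers on that argument, here's a quick comparison using the HMC figures quoted earlier in the thread (160 GB/s, and 480 GB/s for the 2.0 spec) against what Fiji's four HBM1 stacks already deliver, plus what the same four-stack layout would give at an HBM2-class ~2 Gbps per pin (an assumption about a future part, not a product spec):

```python
# Rough bandwidth comparison. The HMC numbers are the ones quoted in this
# thread; the HBM2-class line assumes 4 stacks at ~2 Gbps/pin, which is
# an assumption, not an announced product.
def hbm_bandwidth_gbs(stacks, bus_bits_per_stack, gbps_per_pin):
    """Aggregate bandwidth in GB/s for a stacked-DRAM configuration."""
    return stacks * bus_bits_per_stack * gbps_per_pin / 8

configs = {
    "HMC (quoted above)":             160,
    "HMC 2.0 spec (quoted above)":    480,
    "HBM1, 4 stacks (Fiji)":          hbm_bandwidth_gbs(4, 1024, 1.0),  # 512
    "HBM2-class, 4 stacks (assumed)": hbm_bandwidth_gbs(4, 1024, 2.0),  # 1024
}

for name, gbs in configs.items():
    print(f"{name:<32} {gbs:6.0f} GB/s")
```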
 


Well, let's not forget RDRAM and XDR.

You don't need to be technically superior to win. Being cheap and effective is more important. AMD is doing well with HBM; it's one of the few things you can say they got 100% right on the first go. Well, other than the 4GB cap, haha.

Cheers!
 


Yeah, the 4GB cap is a pain, though that looks to be fixed when we get HBM2 next year. What I would say, though, is that HBM has achieved a lot of firsts (e.g. through-silicon vias, the interposer, the first commercial 3D-stacked chip I'm aware of, etc.).

I mean, the argument of 'future technology X is waaaay better than current technology Y' is kinda silly. It's tech, so that's always true. There are so many things in development (graphene-based chips, quantum computing, optical chips and a whole host of other way-out-there ideas all vying to replace silicon) that in *theory* will all wipe the floor with current designs. Would it be fair to Intel to say Sandy Bridge sucked when it came out on the basis of 'Haswell is gonna be so much better'?

With a few exceptions, almost all tech is built in small steps. HBM may not end up being the definitive long-term memory solution, but right now it covers a number of firsts and does provide significantly more bandwidth than what's otherwise available, so on that level it's already a successful design. It also already has future wins lined up, so it's going to have at least a bit of life; I expect to see it in the next couple of generations of GPUs from both sides.
 


No. HMC existed for a while before HBM. The HMC 2 spec was published last year. And HBM 3 doesn't even exist; thus, it is difficult to say where you got the idea that it "will invariably offer similar performance" for your scenario (1).

AMD is neither a developer nor an adopter of HMC. Once again AMD pushes the inferior technology, whereas others push the superior one: ARM, IBM, Intel, Fujitsu, Samsung, Cray, Google, Huawei, NEC...

I didn't say that HBM is a failure. I said that it is inferior to HMC in every metric except cost.

As for your question about Nvidia: HBM was developed as the JEDEC replacement for GDDR5, so it is natural that it will appear in GPUs from Nvidia.
 


How has 'HMC' existed for a while before HBM? On paper? I'm talking about a *physical product*. The HMC 2 spec may well be out, but where is there a shipping product that incorporates it? AMD has HBM on a product and shipping *now*. If I've missed HMC then please show me where it's in use, as I'd genuinely be interested. If, however, it's just a specification, then your point is moot, and in theory a possible 'HBM 3' could be out by the time HMC 2 is (seeing as we haven't seen HMC 1 yet!).

Edit 2: a comparison of Wide IO, HMC and HBM; it's interesting reading:
http://www.extremetech.com/computing/197720-beyond-ddr4-understand-the-differences-between-wide-io-hbm-and-hybrid-memory-cube/2

HMC and HBM share many similarities, and interestingly HBM is a JEDEC standard while HMC isn't. It hardly looks like HBM is a poorer version of anything; rather, HBM is GPU-focused and HMC is aimed more at CPUs... what am I missing here? This isn't AMD pushing inferior tech, this is AMD using the appropriate tech for their products.
 
Uh oh, AMD investors are starting to get skittish; down another 8%:

http://blogs.barrons.com/techtraderdaily/2015/08/10/whats-up-with-amd-3/

A couple points are worth mentioning:

McConnell asked Papermaster about the sharp drop-off in R&D levels at AMD, stating “when I talk to investors about AMD, there’s some concern — I mean, we’ve seen a decline by close to 40% versus levels we were at in the beginning of the decade.”

40% reduction in R&D spending.

Where we’ve been incredibly protective in maintained investment is in where we are banking the future of the Company. So it is on that next generation of CPUs starting with Zen. It is on successive generations of our graphics core next. Huge volume in what we have in not only in discrete graphics and our APU, but the game console wins are all on Graphics Core Next and we have a very strong roadmap for that Graphics Core Next IP going forward.

Discrete GPUs? Aren't those supposed to go away?
 


Few tests, but I did like their power analysis. Simple, but very straightforward.

Regarding the results, I also liked what I saw. Carrizo seems to deliver on the efficiency promise made by AMD. And smooth H.265 playback is a big deal; HTPC owners might want to take notice. I know I do 😛



Not disagreeing (again), but GCN does not imply Discrete GPUs 😛

Cheers!

EDIT: Discrete GPUs *only*; to expand the idea a bit more: they'll keep producing them as long as there is a market. I think that is a no-brainer.
 