AMD CPUs, SoC Rumors and Speculations Temp. thread 2

@8350 Polaris is a GloFo part on the joint GloFo/Samsung 14nm FinFET node (TSMC doesn't have a 14nm process). Vega is planned as a TSMC 16nm part.

AMD confirmed GloFo was the manufacturing partner in the launch slides and has more recently said they have also had some kit made by Samsung thanks to the joint process.
 


jdwii



I'd still argue that it's too early to tell, but based on these numbers I'd say it's worse than the Sandy Bridge to Ivy Bridge jump. I truly do suspect something is off. If not, then what would be causing the lackluster performance?
 
@jdwii I think it's scaling with memory bandwidth (the i7 is using quad-channel memory). That's the only thing I think can explain the results when you consider that the quad-core i5 and i7 with the same core are so much slower...
 

juanrga



I think cdrkf nailed it! According to the AoS developers the game is sensitive to memory bandwidth, which is also the reason why the 8000/9000 series performs badly in this game.

The 2 AGUs on Zen already pointed to the existence of some problem. Recall I had predicted 3 AGUs for Zen. The lack of a third AGU suggested a bottleneck in the cache/memory subsystem, and this ES leak points to AMD improving the cache/memory subsystem but still being very far behind Intel.
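
For anyone wondering what "sensitive to memory bandwidth" looks like in practice, here is a minimal STREAM-style triad in C (an illustrative sketch, not anything from the AoS engine): the loop moves far more bytes than it computes, so its throughput tracks DRAM channels and speed rather than core count or IPC, which is why a quad-channel i7 can pull ahead of otherwise similar dual-channel parts.

```c
/* Minimal STREAM-style triad (illustrative sketch only, not the game's code).
 * The arrays are far larger than any cache, so the loop spends its time
 * waiting on DRAM: throughput scales with memory channels, not core IPC.
 * Build with something like: gcc -O2 triad.c -o triad */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 25)   /* 32M doubles per array, ~256 MB each */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];      /* 2 flops per 24 bytes moved */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("triad bandwidth: %.1f GB/s (check value %.1f)\n",
           3.0 * N * sizeof(double) / 1e9 / secs, a[N / 2]);

    free(a); free(b); free(c);
    return 0;
}
```

Run it with one thread and then several copies in parallel: the combined number flattens out at the platform's memory bandwidth long before you run out of cores, which is the behaviour being described for AoS.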
 


Only when Intel moved to an IMC, yes. Which I always found funny, since AMD had an IMC longer than Intel, so they should have had more experience.
 

con635


Experience isn't everything. I think people would be more impressed with AMD if they could grasp the fact that Intel's R&D spending is $12 billion vs AMD's 'paltry' $1 billion, and that $1B has to cover competing with both Intel and Nvidia! AMD beating Intel hands down anytime soon, or ever under the current circumstances, is akin to Jamaica winning the FIFA World Cup. I'm starting to agree with others when they say AMD's IP would be better used at a bigger company like Samsung :/
edit: in fact the most exciting news this year is Intel licensing AMD GPU tech for their iGP!
 


This is the thing though: AMD have had to be very agile and progressive with their designs to get ahead of Intel (on the occasions they have managed to), while Intel plays it safer in comparison. A couple of times AMD's riskier strategy has paid off, and any good advance they make Intel then implements in its own design, but with significantly more time and R&D behind it, and as a result surpasses them again.

AMD's Phenom was ahead of its time (though sadly it didn't have the clock speeds needed to outright lead Intel). The first-gen i7 was almost a carbon copy of Phenom's top-level design, with Intel's better core and Hyper-Threading added on. I honestly think AMD were *really* unlucky that the software of the time didn't highlight the inherent bottleneck in Core 2 Quad (two dual-core dies in one package that had to communicate over a slow bus). There just wasn't the software to fully tax the four cores, and Intel's higher per-core performance and better clocks put them firmly in the lead. Phenom was a better design in my opinion (just not implemented very well).

I think the fact is that in a straight fight on 'equal' terms (i.e. no special tech advantage) AMD are going to be behind Intel. What's really hurt them lately is that many of their recent gambles just haven't paid off.
 


You forget that the $12 billion in R&D was not just for their CPUs. It includes their R&D for process technology (which AMD has not had for a few years after selling off and spinning out GlobalFoundries), NAND technology, memory technology, Thunderbolt, wireless charging, Ethernet technologies (they were the major hands behind 10GbE), and wireless networking. That is not just for their CPUs. Intel has a major hand in most technologies people use.

It is why, when people say they buy AMD for moral reasons so as not to give money to Intel, they still give money to Intel, because again Intel has a hand in much more than AMD does; another example is USB.



I would say that since it had higher per-core performance, Core 2 was better. Barcelona (Phenom) wasn't bad, but Intel's Core 2 just had better IPC and thermal design.

I am curious as to how you consider the first-gen i7 to be a copy of the top-level design of Barcelona. Nehalem, the first-gen Core i series, was basically Core 2 in a single quad-core die with an IMC.

Also, how could it be a copy of Barcelona? Barcelona launched in September 2007 for servers and November 2007 for desktop. Nehalem launched in November 2008. That would mean that Intel either copied a design, threw in their own tech, and tested and started ramping up a new CPU in less than a year, OR that they somehow got their hands on AMD's designs years in advance and copied the ideas.

I say neither. We all knew that an IMC was an inevitable design evolution, and Intel needed it in the server market because Barcelona was better there thanks to the IMC, which gave it better performance vs Kentsfield and Yorkfield.

As for "playing it safe", Intel does but time has proven that it works. Intel took a risk with many ideas. NetBurst was one. Turns out a Coppermine base tech was superior. Then there was IA64, which was much more of a risk than anything AMD has done. The only risky tech AMD has done that I can remember was Bulldozer with its module based design. The IMC was not a risk, it was smart.
 


Well, on the top level Nehalem was very, very similar to Phenom II: monolithic quad core, IMC, level 3 cache... My point is not that Intel copied AMD precisely, but more that Phenom I was really the first 'modern' multi-core CPU. AMD were ahead of Intel in terms of the design; it just didn't work out.

@gamerK316, things have very much changed. Quad core is the standard now, and there is a lot of software which really requires a quad core to function well. What hasn't happened, however, is software scaling to any number of cores (besides a few very specific tasks), so there is still an optimum core count. That is likely to count against Zen, as you really don't need 8 cores / 16 threads for many applications yet.
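
To put a rough number on why an "optimum core count" exists, here is a tiny Amdahl's-law calculator in C (the parallel fractions are illustrative guesses, not measurements of any real game or application): unless the parallel fraction of the work is very high, going from 4 to 8 cores buys surprisingly little.

```c
/* Amdahl's-law sketch: speedup = 1 / ((1 - p) + p / n), where p is the
 * parallel fraction of the workload and n the core count. Illustrative
 * values of p only; shows the diminishing returns past ~4 cores. */
#include <stdio.h>

static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    const double fractions[] = { 0.50, 0.75, 0.90, 0.95 };
    const int cores[] = { 2, 4, 8, 16 };

    printf("parallel-frac   2c     4c     8c     16c\n");
    for (size_t i = 0; i < sizeof fractions / sizeof *fractions; i++) {
        printf("    %.2f      ", fractions[i]);
        for (size_t j = 0; j < sizeof cores / sizeof *cores; j++)
            printf("%5.2fx ", amdahl(fractions[i], cores[j]));
        printf("\n");
    }
    return 0;
}
```

With p = 0.75, for example, 4 cores give about 2.3x and 8 cores only about 2.9x: exactly the kind of diminishing return that makes 8C/16T hard to justify for most current software.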
 
If you go by the basic top-level design, sure. But then again, there were plenty of CPUs before AMD's that had those features:

IMC - Intel Timna was well before K8

L3 cache - Intel Pentium 4 EE (I had one)

However, in terms of the back end I don't think Phenom was more modern, because even to this day Core 2 Quad is still faster per clock than even Phenom II.

However, that is not to discount AMD. They are good at marketing their products when the time is right. I am sure that if IA64 had been more mature and easier to adopt when 64-bit arrived, we might all be using it instead of x86-64.
 

juanrga



AMD's most successful designs weren't their own. And they were only ahead on a couple of occasions many years ago, when technology was simpler, the market was bigger, and the R&D and resources gap between AMD and Intel was 10x smaller.
 


x86-64 is another example of a worse technical implementation winning out due to market forces. Intel was right to want to kill x86 in its entirety; it's a horrid CPU architecture, and there's a reason why most other arches (especially POWER-based ones) offer more IPC, which as you all know is a bit of a thing right now.
 

juanrga



Right. If you spend the money on R&D you may or may not obtain results, but if you don't spend the money then you certainly don't get results. Moreover, AMD is not only limited by a huge lack of economic resources; there are technical limits as well.

Today the x86 architecture has hit both IPC and frequency walls. The older claims that Zen was going to be the new K8 and kill everything Intel had, including Skylake, were plain nonsense. Even spending the same resources, AMD was going to hit the same technical walls. Funnily enough we academics have known about the existence of those walls for a while. HP and Intel engineers also knew, and that is why they used the transition to 64 bits to try to abandon the non-scalable x86 ISA for a new, scalable ISA. They failed, but at least they tried. Maybe with a couple more iterations and enough resources they could have got something new to work. AMD on the other hand played it safe (and cheap) and invented a 64-bit extension of the 32-bit ISA. AMD essentially signed its own death sentence then, but they didn't know it. Being an extension rather than a new, separate ISA, the AMD64 ISA inherits all the legacy and weird stuff of the 32-bit one, which means 64-bit x86 processors are bigger, more power hungry, more costly, and take more time to verify than other 64-bit chips. This is the reason why Intel has failed in mobile even when they had a clear process advantage over ARM competitors (22nm FinFET vs 28nm planar): 64-bit x86 processors cannot compete with ARM chips due to the "x86 tax".

Being an extension of x86-32, the AMD64 ISA has the same non-scalability problem. Both are serial ISAs, and the processors cannot extract enough parallelism from the code to speed up execution. Academics expected the limit to be somewhere around 10-wide microarchitectures. Current Haswell/Broadwell cores are 8-wide, but have difficulty sustaining even 4 instructions per clock, whereas alternative microarchitectures built around non-serial ISAs can execute 20 instructions per clock or more.
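
The "cannot extract enough parallelism" point is about dependency chains: if each instruction needs the result of the previous one, issue width stops mattering. Here is a minimal C sketch of the effect (toy loops of my own, not taken from any real benchmark): the first loop is one long floating-point dependency chain, while the second splits the same work into four independent chains that an out-of-order core can overlap.

```c
/* Dependency-chain sketch: same number of additions, very different IPC.
 * Build with: gcc -O2 chains.c -o chains (no -ffast-math, so the compiler
 * keeps the floating-point order and the serial chain stays serial). */
#include <stdio.h>
#include <time.h>

#define ITERS 200000000ULL

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* one serial chain: every add must wait for the previous result,
 * so throughput is bounded by add latency, not by core width */
static double serial_sum(double x)
{
    double s = 0.0;
    for (unsigned long long i = 0; i < ITERS; i++)
        s += x;
    return s;
}

/* four independent chains: the out-of-order scheduler overlaps them,
 * so several adds can be in flight per cycle */
static double split_sum(double x)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    for (unsigned long long i = 0; i < ITERS; i += 4) {
        s0 += x; s1 += x; s2 += x; s3 += x;
    }
    return s0 + s1 + s2 + s3;
}

int main(void)
{
    double t0 = now();
    double a = serial_sum(1.0);
    double t1 = now();
    double b = split_sum(1.0);
    double t2 = now();
    printf("serial chain: %.2fs, four chains: %.2fs (results %.0f %.0f)\n",
           t1 - t0, t2 - t1, a, b);
    return 0;
}
```

On common out-of-order cores the four-chain version runs roughly three to four times faster for the same number of additions (the exact ratio depends on the machine), which is the gap between theoretical width and sustained IPC being described above.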

AMD64 is considered today one of the worst ISAs in use. The birth of AMD64 was an example of what we call in my country "pan para hoy y hambre para mañana", which translates more or less to "bread for today, hunger for tomorrow".
 


I wouldn't put too much emphasis on the lack of Zen+ on there; that is a short-term roadmap, and it looks like Zen is a little later than they originally planned, which also suggests the follow-up Zen+ is going to be a little later as well (i.e. it will be a 2018 product, which this doesn't cover).
 

jdwii



Well, people on this forum already know that, but that doesn't mean they aren't competitors, which means the latest fabrication processes from both companies and Intel will be compared. IMO, and probably in the view of the industry as a whole, Intel has the best process, followed by TSMC, with GlobalFoundries last. Also, it's just my opinion on the matter, but I personally think GlobalFoundries is a joke (most here probably know that's what I think), ever since Bulldozer and the constant delays over not being able to deliver the frequencies AMD wanted.

Now, in 2016, I wonder how AMD's latest design would perform on TSMC, and I also wonder if we would see a true high-end card from AMD instead of this mid-range stuff. I haven't been impressed with overclocking on the 400 series either.

It has to be about money or something, but I can't understand why they keep going to GlobalFoundries.
 


I would assume GF might be cheaper or is giving AMD a better price. That is it. So far their 14nm has not proven to be as efficient as TSMC's 16nm. I am aware of the tech Nvidia has that gives their GPUs a bit of a power advantage, but that would not explain the power difference between the exact same chip in the iPhone 6s series.

That is the only thing about Zen I find worrying. I am sure the IPC will increase; much like NetBurst, the only way to go is up. However, AMD is moving back to a short-pipeline design like Intel has, and like they themselves used to have, which alone means lower clocks. Add in the current information we have about GF's 14nm and it looks like Zen will either have lower clocks or higher power draw.
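
The "lower clocks or higher power draw" trade-off falls out of the usual dynamic-power approximation P ≈ C·V²·f: chasing higher frequency on a process that needs more voltage to hold it costs power quadratically. A rough sketch with entirely hypothetical frequency/voltage points (not measured GF or TSMC data):

```c
/* Dynamic-power sketch, P ≈ C·V²·f, with made-up illustrative numbers only.
 * Shows how demanding more clock out of a process that needs extra voltage
 * blows up power draw, hence "lower clocks or higher power draw". */
#include <stdio.h>

int main(void)
{
    const double base_f = 3.0, base_v = 1.00;   /* hypothetical baseline: 3.0 GHz at 1.00 V */

    const double f[] = { 3.0, 3.5, 4.0 };       /* GHz targets */
    const double v[] = { 1.05, 1.15, 1.30 };    /* hypothetical voltage needed for each */

    for (int i = 0; i < 3; i++) {
        double rel = (v[i] * v[i] * f[i]) / (base_v * base_v * base_f);
        printf("%.1f GHz @ %.2f V -> %.0f%% of baseline dynamic power\n",
               f[i], v[i], rel * 100.0);
    }
    return 0;
}
```

With those made-up points, holding 4.0 GHz costs well over twice the baseline power, which is why a TDP-constrained server part has to give up clocks instead.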

While that is not detrimental on the desktop by any means, except for low-power designs like HTPCs and AiOs, it will be in the server market, where systems are designed around TDP. I am sure they will gain some market share if their performance can match Intel, or at least gets close and beats them on price, but I don't think they will get as much as they could if they had both the performance and the thermals.

Not much longer and we should know more.
 

Vogner16



That's clear, jimmy. Pascal is essentially an overclocked Maxwell, and TSMC's 16nm allows it to clock high and use far less power.

I think the reason AMD went with GloFo for GPUs is the APU business. Remember how all the Steamroller and Excavator APUs were 28nm? Remember how that seemed odd while Intel was on 22nm? What was GCN made on.....

We know Zen is 14nm, and what do you know, Polaris is 14nm!
 

juanrga



The answer is "WSA". When AMD sold its foundry business, they had to sign an agreement with Global Foundries that obligates AMD to fabricate a given volume of chips on Globalfoundries.

The WSA was renegotiated recently, and now AMD uses GlobalFoundries' 14nm for CPUs, GPUs, and APUs:

http://www.amd.com/en-us/press-releases/Pages/amd-amends-wafer-2014apr1.aspx

If I am not mistaken, the WSA expires in 2028.
 