Discussion: AMD Ryzen

Page 25 - Seeking answers? Join the Tom's Hardware community: where nearly two million members share solutions and discuss the latest tech.
Status
Not open for further replies.


No, Intel's R&D is higher so they have nothing to worry about IMO.
 


Historically speaking...

Intel has always had a much larger R&D budget...even when K7-K8 were out and killing Intel across the board...Intel was burning 10 times the cash AMD was on trying to figure things out.

So...just because Intel has more R&D money does not mean anything.
 


Precisely...it was a tech demo.

That was not meant to be any kind of point of reference for a specific performance comparison.

That was essentially proof of concept.
 


And to be fair, AMD's success with K8 was largely due to two things:

1. NetBurst. Had Intel kept Coppermine and enhanced it instead, they wouldn't have fallen as far behind.

2. DEC's work on the Alpha, which led to AMD getting an IMC, helped.

That said, it was not a 100% sweep across the board, as Intel did still have a few things it performed better at, mostly due to instruction sets.

R&D also does mean a lot. How many times has AMD been ahead of Intel in process technology? How many CPU lines has AMD had that have completely destroyed Intel?

K8 was a great uArch, but it was the perfect storm. If Intel hadn't launched NetBurst and had gone another route, it could have been a different story. Just an example: the Pentium M was the first CPU based on the enhanced Coppermine uArch. If that had launched instead of NetBurst, AMD would not have been as far ahead.
 


Yes, but at that time Intel wasn't spending WAY more than AMD, just more. Intel is simply tossing money around like crazy to stay number 1.
 


Nothing new though, although there is some info that may be wrong. It is WCCFTech after all.



I wouldn't say that but rather that costs have increased and continue to increase exponentially with new process tech now. Back then it wasn't as much because shrinking silicon was much easier. Now it is getting harder and harder so costs go up as they have to try more methods to get it to work properly.
 


Research into FABRICATION could put Intel far ahead; that is really it. And in spite of their budget, they don't seem to be making enough inroads in that direction to put them devastatingly ahead of AMD; progress just slows down more and more, and will continue to.

As for the rest, what a laugh. Their 'innovation' and 'research' since the first Core Duo came out have barely increased anything in real terms in that time.

Sure, you can show a 20-core Xeon with 25,000 units in some artificial benchmark and a 10-core consumer model not far behind, but in the real world they are not really that much better, and that is comparing against AMD chips on an ancient fabrication process. Not to mention the absolutely incredible gap in performance per price!

I have in the past always been an Intel fan, but when I recently started to think of an upgrade to my aging i7 920 I was just blown away by how little I'd get for the same dollar amount. I'd basically have to pay more than I paid before, mostly for 'on paper' performance...as if all the games use all your cores and all the advanced SIMD instructions. Ha.

Phenom was a great design, for example, but it doesn't benchmark well. AMD has always been way better than Intel when it comes to chip design; their only problem is they don't have their own fabs anymore, and they have spent a very long time on Zen (which looks to have paid off).

There's only so much efficiency you can get, so many instructions per clock. For a generic application, the branching work that Intel spends so much time on can be helpful, but it is useless for complicated calculations (i.e., as in a game). So mainly you are just paying more money for a benchmark number.

Just like earlier in the thread, people were surprised to see the Zen results for Blender. Well, that is based on real calculation; that is the real issue. Now that they have redesigned and gotten things where they should be in efficiency, it will be extremely hard for Intel to pull ahead much. They might stay ahead in benchmarks by playing the same game, but by playing that game they also make it impossible to make big real-world gains, and they are approaching the limits of what they can do in that direction anyway.

And a big redesign like that, who will do it? No one there has the experience anymore. They may have way more bodies, but it is like the McDonald's of tech companies at this point. Simply hiring a bunch of bodies from China and India isn't going to make you able to design some new super high-tech innovative chip. If it could, they would already have done it 30 years ago.

 
AM4 socket and back side of an AM4 CPU. It has 24 PCIe lanes, but the CPU has dedicated lanes that don't steal bandwidth from other devices.
AMD-AM4-Socket.jpg

AMD-Zen-Bristol-Ridge-Chip-Backside-840x539.jpg


http://wccftech.com/amd-am4-socket-zen-bristol-bridge-soc-package-pictured/
 

I find it's pretty easy to fix the pins with a credit card. When cleaning, always do so while the CPU is in the socket, right after removing the heatsink.
 


The issue is that on the consumer end, performance increases are not as large. In other areas they are. While we may not utilize AVX2, servers can, and it does introduce a massive boost.

While an i7 920 is still a decent CPU, it is easily outpaced by a current i5 or i7. Not by a ton, like say a 920 vs a Pentium 4, but still enough to notice.



The biggest benefit to LGA is being able to put more pins in the same area, that is why Intel moved to it.

That said, I have also repaired a few LGA sockets. Just like with pins, it depends on how severe the damage is.
 


I agree wholeheartedly. And I would also add that Intel misread the mobile market and has been playing catchup ever since. They've poured a ton of money into Atom, Core M, and integrated graphics. Their R&D budget is spread out quite a bit these days.
 


A repetition of what has been known for a while, plus the usual WCCFTech invention/hype/nonsense. An example? This:

The new motherboards will also be compatible with AMD’s upcoming Raven Ridge APUs. Which feature up to four Zen cores, an integrated GCN GPU and High Bandwidth Memory.

There is no HBM on the Raven Ridge APUs. That is precisely the reason why they have only 12 GPU cores.
 

Looks the same as the pins I reported from an earlier source. I had to edit it because the image wouldn't work. Third post from the top; it's about an Athlon X4 950, but I'm unsure which CPU the WCCFTech photo shows. Most likely one of the other CPUs AMD launched on the 5th.
https://gigglehd.com/gg/files/attach/images/13773/514/311/ee1f5ff64f009b85b13f891389aeacf5.jpg
 


It is false that "Intel was burning 10 times the cash AMD was" back then:

ycharts_chart_AMD_vs_INTC_zpsccd1f993-1.png


And I will not even try to explain why AMD's K7/K8 success was more a consequence of Intel's mistakes than of AMD's right guesses.
 


What is "incredible"?

(i) CPU performance is not a linear function of cost. A CPU with 40% higher IPC doesn't cost 40% more dollars to design and fabricate, because much more than 40% extra transistors are needed. In general:

IPC^2 ~ Area ~ Cost

(ii) Leading fabrication processes are expensive; a pure 14nm FinFET node is not the same as a mature 28nm planar node. Precisely the reason why foundries like Samsung/GlobalFoundries and TSMC are introducing hybrid nodes with 14/16nm BEOL and 20nm FEOL is to reduce costs (at the expense of reduced performance compared to pure 14/16nm nodes).

(iii) AMD has a huge debt. Due to lack of competitiveness, they have been selling CPUs at low prices, and this put the division in the red.

Using (i)--(iii), I think that Zen will be expensive. I expect octo-core Zen to be priced somewhere around six-core Broadwell.
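The scaling in (i) can be made concrete with some quick arithmetic (a sketch of that quadratic rule of thumb only; the 40% figure is the one from the post, everything else is illustrative, not real die data):

```python
# Rough illustration of the IPC^2 ~ Area ~ Cost relationship from point (i).
# The function and its numbers are for arithmetic only, not real die data.

def relative_area(ipc_gain):
    """Area (and hence, roughly, cost) scales with the square of the IPC gain."""
    return ipc_gain ** 2

# A core with 40% higher IPC (1.40x), as in the example above:
ipc = 1.40
area = relative_area(ipc)
print(f"{ipc:.2f}x IPC -> ~{area:.2f}x area/cost")
```

So a 40% IPC improvement implies nearly double the area and cost, which is the point being made: performance gains get disproportionately expensive.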



The only thing that has surprised me is seeing how a huge part of the Internet has accepted the marketing demo uncritically; what is more, the same people seem to ignore that AMD also used Blender to advertise Bulldozer. Before launch, AMD used Blender to claim it was faster than Sandy Bridge. Third-party reviews of Bulldozer showed a different picture.

I know Blender is a best-case scenario for Zen: not only does it use one-half of the width of the SIMD units on Broadwell, but the Broadwell chip was underclocked for the demo and put into a "similar configuration" to Zen, with AMD refusing to tell us what that meant.
 


Paragraph Edited Out.

Perhaps you could be so kind as to point me to one of your original posts that isn't a denigration of something posted by someone else, because I have yet to see one.

Keep this discussion civil. Personal attacks are completely off limits.
 


I have made a comment about WCCFTech and why they are again posting stuff they fabricate out of nothing, like when they said that Carrizo APUs on 28nm were coming with HBM inside. WCCFTech has zero credibility.

Most readers here will agree with me about my opinion of them.
 
I have to say that the Internet's acceptance of the demo is mostly because Blender is not a synthetic workload; it's actually a good representation of what the prosumer market does. So even if Blender is a best-case scenario for Zen, it means Zen will still be good for the prosumer editing market.
 


1. It doesn't mean anything. The initial test clock speed has nothing to do with the final clocks. In the very worst case they can use the same die size on a smaller process and get better clocks. They would never do that, but my point is it's ridiculous to think that you will go down in process yet be unable to keep the same clocks. They can, always. They just don't always choose to, because they want to cram more and more cores (and chip yields) out of the same die size to make more money.

2. They understand what you don't seem to: that outside of a fantasy world, almost no software uses a bunch of vector-extension stuff. Most games don't even use multithreading to any appreciable extent; just a few have done it at all recently (and mostly badly). Intel also keeps you from running those instructions in many circumstances anyway, because if you do it on all cores at once it will melt the CPU.

3. Intel is a bloated pig of a company, incredibly large, inefficient, and wasteful. That is the long and short of why they charge such outrageous prices: basically because they can, and because they have to in order to keep feeding their costs (which have nothing to do with output). What do they do with it, though? Why is it that other companies with maybe 1/10th the budget thrown at the problem are able to come up with fab processes not far behind Intel's? I have to laugh when I hear people talking about the best engineers in the world. Not even 30 years ago, let alone today, when most of the company is straight out of Bombay tech.

 


1. It means a lot. It means the silicon they have now cannot reach higher clocks. It is not true that they can use "a smaller process and get better clock ratings." The only FinFET process at GloFo is 14nm. The 7nm node will not be ready until 2019 or so, and even if GloFo had a smaller node now, AMD would have to port the design to the new node, make new masks, make new samples, do new verification... all of which means dollars and further delays.

You claim "it's ridiculous to think that you will go down in process yet be unable to keep the same clocks," but that is exactly what happened in the transition from Richland (32nm PDSOI) to Kaveri (28nm bulk): a reduction in official clocks and a huge reduction in overclocking headroom. Moreover, the 14nm process node for Zen (14LPP) is optimized for efficiency, not performance, not to mention that AMD confirmed that Zen is implemented using the density-optimized version of the libraries.

What you claim about "cram more and more cores (and chip yields) out of the same die size to make more money" is exactly the opposite of what AMD has done with Zen. Zen is a much more complex core than Piledriver, Steamroller, or Excavator. If they wanted only lots of cores, they could port Excavator to 14LPP and have about twice the number of cores they get with Zen.

2. Vector extensions are used a lot. That is the reason why the Zen core has two vector units of 128 bits each. AMD has also considered it relevant to provide full support for 256-bit software since Excavator; Zen does this by fusing the pair of 128-bit vector units into a single 256-bit vector unit.

3. You ask, "Why is it that other companies that have like 1/10th the budget thrown at issues are able to come up with fab processes not far behind Intel?" This is another plainly wrong statement. TSMC and Samsung are very important companies with lots of money and resources. In fact, Samsung is about 5x bigger than Intel, both in dollars and in number of employees. GlobalFoundries is using the 14nm fab process developed by Samsung.
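The fused execution described in point 2 can be sketched conceptually (a plain-Python illustration only; the function names and the four-lane split are assumptions for the sketch, not AMD's actual microarchitecture):

```python
# Conceptual sketch of point 2: a 256-bit vector operation executed as
# two 128-bit halves on a pair of 128-bit units.
# Pure-Python illustration; names and lane widths are assumptions.

def add_128(a, b):
    """One 128-bit vector unit: adds four 32-bit lanes element-wise."""
    assert len(a) == len(b) == 4
    return [x + y for x, y in zip(a, b)]

def add_256_fused(a, b):
    """A 256-bit add issued across the fused pair of 128-bit units."""
    assert len(a) == len(b) == 8
    lo = add_128(a[:4], b[:4])   # first 128-bit unit handles the low lanes
    hi = add_128(a[4:], b[4:])   # second unit handles the high lanes
    return lo + hi

print(add_256_fused(list(range(8)), [10] * 8))  # -> [10, 11, ..., 17]
```

The point of the sketch is that full 256-bit throughput is available without a dedicated 256-bit datapath, at the cost of occupying both 128-bit units for the duration of the operation.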
 