AMD CPU speculation... and expert conjecture

Page 638

juanrga



I addressed your point by mentioning how a single memory pool avoids the continuous copying/moving of data between two memory pools in an old dCPU/dGPU architecture.

Not only do game developers like unified memory pools; a unified memory system was one of the main requirements the game developers made to Sony for the PS4.
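
(To make the copy overhead concrete, here is a minimal sketch using the CUDA runtime API as a stand-in; the PS4's actual APIs differ, and all names and sizes here are illustrative assumptions.)

```cpp
// Minimal sketch of split vs. unified memory pools, using the CUDA
// runtime API as a stand-in (the console APIs differ). Error handling
// is omitted for brevity.
#include <cuda_runtime.h>
#include <cstddef>

const std::size_t N = 1 << 20;  // element count, arbitrary

// Discrete CPU/GPU: two pools, so data is copied across the bus both ways.
void split_pools(float* host_data) {
    float* dev_data = nullptr;
    cudaMalloc(&dev_data, N * sizeof(float));
    // The engine pays for this transfer every time the GPU needs the data...
    cudaMemcpy(dev_data, host_data, N * sizeof(float), cudaMemcpyHostToDevice);
    // ... kernel launches here ...
    // ... and again whenever the CPU needs the results back.
    cudaMemcpy(host_data, dev_data, N * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev_data);
}

// Unified pool: one allocation visible to both CPU and GPU, no explicit copies.
void unified_pool() {
    float* shared = nullptr;
    cudaMallocManaged(&shared, N * sizeof(float));
    // CPU writes and GPU kernels read the very same pointer.
    cudaFree(shared);
}
```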
 

juanrga

I recall perfectly the first time I mentioned that 64-bit ARM processors would beat the best x86 designs. My claims were taken as heresy, and then a crow started posting nonsense such as "ARM cannot scale up", "ARM is only an experiment for AMD", and "ARM will never catch x86 in performance", with the subsequent insults and other personal attacks.

Since then we have seen A57-based Opterons beating Jaguar-based Opterons, and an Nvidia Denver core beating a Haswell core allowed to consume two times more power, but the same crow kept denying the benchmarks and even pretended that the Denver-based K1 was a quad-core...

Nvidia launched ARM+Tesla hardware for HPC and promised that X-Gene ARM SoCs could be competitive with Intel Xeons. "That is all marketing," the crow told me, together with the usual insults and personal attacks.

A vendor showed its ARM SoCs beating Intel servers in a live test at ARM TechCon. Sandia National Laboratories runs huge clusters of Intel-based servers but has been testing HP's ARM-based system since March. They have found not only that the ARM systems are competitive against Intel x86 servers, but that they scale better on their scientific applications, because the ARM designs are better balanced than the x86 designs.

[Image: ARM-vs-x86-scaling.jpg]


Applied Micro has also shown its servers in action compared to an Intel Xeon consuming about the same power:

[Image: XGene-vs-Intel-server-performance.jpg]


I said that ARM is more efficient, and I recall the crow of ARM-haters and AMD-haters telling me that ARM's efficiency in mobile would disappear when scaling up to servers. I said that the efficiency advantage would shrink but would still be significant; that is, ARM servers would offer similar performance while consuming much less power than x86 servers. They denied it and attacked me with more insults, but every benchmark/demo has shown that I was right and that ARM servers are more efficient thanks to the ISA.
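
(A toy calculation of the metric under dispute; the scores and wattages are invented for illustration, not real benchmark data.)

```cpp
// Toy illustration of raw performance vs. performance-per-watt.
// The scores and wattages below are invented, not benchmark results.
#include <initializer_list>
#include <iostream>

struct Server {
    const char* name;
    double score;  // raw benchmark score, higher is better
    double watts;  // average power draw
};

int main() {
    Server x86 {"x86 box", 1000.0, 200.0};  // hypothetical figures
    Server arm {"ARM box",  850.0, 100.0};  // hypothetical figures

    for (const Server& s : {x86, arm})
        std::cout << s.name << ": raw " << s.score
                  << ", perf/W " << s.score / s.watts << '\n';
    // Output: the x86 box wins on raw score (1000 vs 850), the ARM box
    // wins on perf/W (8.5 vs 5.0). Both sides of this argument can point
    // at the same table and claim victory.
}
```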

Of course, I am not saying that the first wave of ARM server SoCs will beat any x86 server on any benchmark. No. There are some benchmarks where x86 will be better, and people will use x86 instead of ARM there. However, ARM SoCs will outperform x86 in every benchmark in the long run.

What is interesting is that, as I predicted some time ago, the first wave of 64-bit ARM products is competitive against Intel x86 because ARM scales better. And this is only the first wave of 64-bit ARM products. I am now waiting for the really strong players: X-Gene 3, Vulcan, and especially K12.

I will update my expectations for K12 soon. I still maintain that K12 will be 20--30% more efficient than Zen, but I won't change the IPC estimate.
 

8350rocks

Distinguished
Juan, show me one benchmark that requires floating-point compute performance, or raw integer compute, in a core-per-core comparison where ARM beats x86 in raw compute.

I do not care one red cent about PPW. I am talking raw compute.

EDIT: LOL @ competitive with x86, in what? AnTuTu? Pleeeeaaaaaasssssssseeeeeeeeeeee!?! Let us look at something like non-GPU-accelerated rendering, or perhaps non-GPU-accelerated encryption, or we could look at compression/decompression.

You can pick the benchmark from those categories...
 

blackkstar



Do you have any quantitative data to back that up? Also, I think you are jumping to conclusions a bit. Even if what you say is true and ARM is beating the cat cores, the cat cores are not high-performance x86 cores. At this point you might as well compare ARM to the original Pentium and then go "ARM beats x86 in raw performance!"

I would like to see some quantitative evidence of the highest-performance ARMs beating the highest-performance x86 CPUs in a diverse range of benchmarks. And they should be of the same generation and general release date. None of this "the ARM coming out in 2016 will beat Haswell, therefore x86 is finished" stuff you like to pull.

And as FX said, ppw is an important metric in some markets and not in others. If you judge only by ppw, you're completely ignoring all the clients that do not care about ppw at all and only care about raw performance.

I am talking about situations like GPUs. Gamers grab high-performance devices with high TDP and power consumption because ppw is irrelevant to them. If ppw were important in every single market, everyone would buy a GTX 750 Ti or an R9 285, because raw performance wouldn't be their first priority.

Your biggest blind spot when it comes to devices is that there are situations where you're paying an employee a significant amount of money to do something, far more than the cost of electricity. If you opt for something with less raw performance and better ppw in those markets, you're going to waste more money paying employees than you will save in electricity. For any job that involves compiling, rendering, transcoding, or similar time-consuming work, ppw goes out the window.

http://xkcd.com/303/
http://phloatingman.wordpress.com/2010/11/08/video-rendering-xkcd-compiling-remix/

I realize those are joke pictures, but in the end you do have situations like that, and there are business suits crunching numbers and concluding that saving electricity with more efficient parts is not as cost-effective as buying more performance at the cost of efficiency, because it reduces the time you pay employees to stand around doing nothing while the computer is busy.
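
(A back-of-the-envelope version of that number crunching; every figure is an invented assumption, purely to show how the break-even works.)

```cpp
// Back-of-the-envelope break-even: wages vs. electricity while an
// employee waits on a long job. Every number is an invented assumption.
#include <iostream>

int main() {
    const double wage_per_hour = 50.0;  // assumed fully-loaded cost, $/h
    const double kwh_price     = 0.12;  // assumed electricity price, $/kWh

    // Efficient box: 100 W, finishes the render in 3 h.
    // Fast box:      300 W, finishes the same render in 2 h.
    const double cost_efficient = 3.0 * (wage_per_hour + 0.100 * kwh_price);
    const double cost_fast      = 2.0 * (wage_per_hour + 0.300 * kwh_price);

    std::cout << "efficient box: $" << cost_efficient << '\n';  // ~$150.04
    std::cout << "fast box:      $" << cost_fast      << '\n';  // ~$100.07
    // The hungrier machine burns a few extra cents of power but saves
    // roughly $50 of wages on this one job alone.
}
```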

I know your area of study is more focused on HPC and situations where ppw is important. But you basically declare ppw the defining metric for a processor and then declare everything that doesn't agree with you invalid.

If you understand those other markets, you will understand why AMD is going to keep x86 cores as well as move to ARM. There are places where ARM thrives on ppw, and there are places where being 10% faster means saving hours per week.
 

noob2222

"I recall perfectly the first time I mentioned that 64bit ARM processors would beat the best x86 designs."

Wow ... the best 64-bit CUSTOM ARM CPU beats the lowest x86 CPU in one benchmark, utilizing a 1 GHz clock speed advantage and pretending that the 4200U is a quad-core CPU. And excuse the fk out of me, since Nvidia doesn't have their K1 "Denver" posted on their page. I didn't say Denver, I said K1 as stated by Nvidia.

Seriously, why are the K1 and the K1 "Denver" from Nvidia 2 entirely different products? "And according to nvidia k1 is a quad core at 2.3 ghz. http://www.nvidia.com/object/tegra-k1-processor.html "

I didn't say K1 Denver, did I? I didn't bother to look up that Nvidia was using the same naming scheme. Unlike you, I'll admit that, but I'm sure you will bring it up any time you try to address what I said, kinda like you keeping a bookmark on the one person on this forum who paid you a compliment, which you have reposted about a dozen times already.

"Of course, I am not saying that any of the forthcoming ARM SoCs will beat any x86 on any benchmark. No. There are some benchmarks where x86 will be better and people will use them instead of ARM. However, with time (in the long run) ARM SoCs will outperform x86 in any benchmark."

Who knows, 50 years from now, when Intel is out of business, ARM may win in every benchmark out there.

Can we get an "ERMAGO ARM OWNS ALL" sticky and leave juan there? This has nothing to do with Steamroller other than someone's delusions that if x86 stops advancing at all, then ARM will win one day. AMD will not own the ARM market share. AMD is going from competing against Intel only to competing with an entire industry of ARM producers: Qualcomm, Nvidia, Samsung, Applied Micro, Apple, etc. This doesn't end with AMD >> all other ARM vendors.

ARM breaks software compatibility. It's going to take years just to get the necessary software to enter the market full bore vs x86, and even then, as stated with Windows XP vs Windex, software compatibility drives the market. End of discussion.

Call me a crow again, and then try to pretend you don't insult people.

P.S. They always try to pretend that ARM > ALL ... and when they lose, they change their statement to ARM PPW > ALL PPW.
 


And you again ignore my point about a single memory pool increasing platform costs by going off on a tangent.
 

Will Samsung be an alternative to GloFo or TSMC, or to both? I thought Samsung would be a logical choice by extension, since GloFo announced the partnership with Samsung on the 14nm FinFET process (thus admitting its failure with 14nm-XM).
 

jdwii



These are all valid points, but then people shouldn't freak out when security breaches happen. It's as simple as that.
 

jdwii



Really, the 2600K or FX-8350 is good enough to run most single-GPU setups in future games. You might have to wait until Intel even makes it worth upgrading, at 14-10nm.
 

blackkstar



I assume it's no coincidence that this is when we're supposed to see K12 and Zen? All products manufactured on the same process. Makes a lot of sense for spending R&D money efficiently.
 

juanrga



Not a surprise. We have known the 14/16nm FinFET foundry roadmap for months, since FinFET production was announced for late 2015. And we also know that K12 and Zen are coming in the first half of 2016.
 

h2323



It was only a few weeks ago that we got clarification from Rory Read, in an interview with Bloomberg, that the new architectures were on FinFETs; then around the same time an article quoted Keller on the same info. Before that it was all just speculation, regardless of the fab roadmaps.
 

juanrga

Time to update my old beliefs/guesses for K12, since this is a speculation and conjecture thread, is it not?

I expect a relatively small core, Jaguar-style, but with improved IPC and frequency, and the following features/performance:

- 6--7 mm²
- ~40% IPC gain from the core
- 3.0--3.5 GHz
- dual-core CMP module
- 4-issue
- 2x 128-bit SIMD/FP units

Other thoughts (a quick composition of the two IPC figures follows below):

- ~20% IPC gain from an improved LLC subsystem
- Q1 2016 release
- 20--30% efficiency advantage over Zen due to the ARM ISA
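
(A quick sketch of how the two IPC guesses above might compose; whether they multiply rather than overlap is an assumption.)

```cpp
// Composing the two IPC guesses above. Whether the core and LLC gains
// multiply cleanly (rather than overlap) is itself an assumption.
#include <iostream>

int main() {
    const double core_gain = 1.40;  // ~40% IPC from the core
    const double llc_gain  = 1.20;  // ~20% IPC from the LLC subsystem

    std::cout << "combined IPC gain: +"
              << (core_gain * llc_gain - 1.0) * 100.0 << "%\n";
    // ~+68% if the gains multiply; +60% if they merely add.
}
```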

If someone does the math, he/she can see I am assuming a K12 core will be about as complex (in transistor count) as a Steamroller CMT module.

In short: I am expecting a K12 core that will be at Haswell level in scalar performance, at Ivy Bridge level in vector performance, and with efficiency superior to Skylake. I expect Zen to be pretty similar, but with the disadvantage of the x86 ISA.

Disclaimer: no warranty, I can update my opinion without notice ;)
 

juanrga



I knew it before Rory Read announced it. This is why I have been saying here that AMD was migrating to FinFETs for K12/Zen, whereas you needed to wait for official confirmation, and whereas the 'experts' here tried to convince people that I was wrong because SOI was so gooooooood that even Intel was secretly preparing to abandon FinFETs for magical SOI. Their posts included dozens of useless links to SOI consortium propaganda and the usual insults against me. But now we are in that lovely time of the year when I laugh again at another of the failed predictions of those 'experts' :lol:
 

noob2222

Weren't you ranting that it was going to be GF's superior technology, and didn't you insult me for suggesting over a year ago that AMD should start fabbing at Samsung, along with someone else saying Samsung is way behind GF?
 

8350rocks

Juan...

Honestly, these are the hang-ups with SOI at the moment, in spite of the fact that they would prefer to be on SOI:

1. IBM's fabs are the only fabs doing any volume production of 22nm planar FD-SOI silicon.

2. They like the properties of the 14nm node better than the 20-22nm node and fully intend to skip that process node entirely.

3. By using Samsung, they can easily port the process to another foundry in a short amount of time if necessary, and can also share production with partner foundries if necessary.

4. FinFET is a new direction they have not gone in, and they are curious to see exactly how the new uarch tapes out. They can fall back on FD-SOI FinFETs once the sub-10nm node is announced and FD-SOI is virtually required to go any smaller (which, btw, is almost universally acknowledged in the substrate industry at this time...).
 

noob2222

I heard part of the reason GF canceled 28nm SOI is that they couldn't get yields above 10% on the risk wafers.

Not to mention that Samsung is going SOI, since it's such a useless technology.

http://www.techdesignforums.com/blog/2014/05/14/samsung-agrees-make-28nm-fd-soi/
 


That is rarely a problem.

I'm serious on this point: those ancient boxes are rarely connected to the internet. Usually they are on a separate LAN segment from everyone else and sit behind what's called a "controlled interface", basically a super locked-down firewall between them and the rest of your LAN, not to mention being disallowed from going through your WAN firewall. Whenever a company needs to use such a risky device, they do a security analysis and come up with a set of potential risks and actions to mitigate those risks; just randomly connecting something like that will get someone fired. The IT director / CTO will lose his/her sh!t over that; too much valuable IP / data would be exposed, which puts the company in a dangerous financial position. The only folks who really make dumb a$$ mistakes like that are the small businesses without a dedicated IT department and no security folks. As much as I can't stand working with those oily bastards, they do provide a very important service to the company.

Also, the vast majority of "hacks" are on internet-facing webservers, not internal DC / DNS / Exchange / SharePoint services. External-facing webservers, by definition, need to be accessible to everyone on the internet; this puts them in an extremely dangerous position, in that the number of attack vectors is many times higher than for your internal services. Your perimeter firewall will / should drop packets destined for internal systems, as there is zero reason for them to communicate directly. That same firewall needs to let packets destined for the webservers pass, relying only on SPI and heuristics, and you pray you catch the bad stuff before it does too much damage.
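
(A toy model of that perimeter policy; the addresses and the single rule are invented for illustration and are nothing like a production rule set.)

```cpp
// Toy model of the perimeter policy described above: inbound traffic may
// only reach the DMZ webserver; anything aimed at internal hosts is
// dropped. Addresses and the rule are invented for illustration.
#include <cstdint>
#include <iostream>
#include <string>

struct Packet {
    std::string dst_ip;
    std::uint16_t dst_port;
};

// Default-deny: the only pass rule is HTTP/HTTPS to the DMZ webserver.
bool perimeter_allows(const Packet& p) {
    if (p.dst_ip == "203.0.113.10" && (p.dst_port == 80 || p.dst_port == 443))
        return true;
    // Packets for internal DC / DNS / Exchange / SharePoint boxes never
    // match a pass rule, so they die at the perimeter.
    return false;
}

int main() {
    std::cout << perimeter_allows({"203.0.113.10", 443}) << '\n';  // 1: pass
    std::cout << perimeter_allows({"10.0.0.5", 445})     << '\n';  // 0: drop
}
```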
 

blackkstar

I thought GloFo was just copying Samsung's process at this point and had cancelled what they had going?

But I think all of this fab stuff is going to tell us pretty blatantly why we didn't see a lot of the AMD CPU products we were expecting. And some of you are going to be disappointed that it actually was due to lack of process rather than lack of demand.
 

I doubt AMD will be fabbing GPUs at Samsung now. The talks are more likely about CPUs.
 

jdwii



What, 2 months now and we should be seeing the dGPU going away, according to juan, thanks to Intel's superior iGPU? It's going to be amazing, almost as cool as the A10 = i5 2500K. Or even the people mocked for saying ARM can't scale up. Also, it's quite amazing how professionals claim ARM vs x86 means nothing anymore; it's about efficient designs.
 