AMD CPU speculation... and expert conjecture


8350rocks

Distinguished


SMT can be CMT
 

juanrga

Distinguished
BANNED
Mar 19, 2013


As usual I will ignore your insults and your new attempt to discuss stuff that I never said. I enjoyed you calling me an AMD hater, when other people have noted that you only post anti-AMD stuff in this thread.

FYI, AMD is explaining why it is replacing the Jaguar cores with A57 cores:

The "Seattle" processor is expected to offer 2-4X the performance of AMD's recently announced AMD Opteron X-Series processor with significant improvement in compute-per-watt.

http://ir.amd.com/phoenix.zhtml?c=74093&p=irol-newsArticle_print&ID=1830578&highlight=

AMD has also given performance numbers for the new Opteron. Where the Jaguar core scores 7, the A57 core scores 10, which is 43% more performance per core. Considering that the new chip scales up to 16 cores, the performance gap over the old Jaguar architecture is substantial.
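
A quick check of that figure, assuming the two per-core scores are directly comparable: 10 / 7 ≈ 1.43, i.e. roughly 43% more performance per core.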

You can hold the silly idea that the performance claimed by AMD applies only to one special benchmark, and that in the rest of the benchmarks and real workloads the Jaguar core will be 204596.78 times faster than the A57, or some nonsense like that. You can believe in Santa and that the Earth is flat. I know the architecture and already predicted such numbers for the A57 last year.

Also, during the last conference, AMD's Keller explained the advantages of ARMv8 and why the architecture allows for more performance than x86. The link to the video was given.

AMD claims its new K12 is a "high-performance" core, but you omit those words.

The ExtremeTech article is right about IPC and its numbers agree with those obtained from the AT benchmark. Yes, that benchmark that you gave when you believed the frequency was 2.3GHz, but that you now avoid because it contained a typo and the frequency was actually 3.2GHz.
 

juanrga

Distinguished
BANNED
Mar 19, 2013


I am sure he is not doing it.



Yeah, because the first computer in the world was x86 and all the software since then only runs on x86. For your information, HPC code and server code have been ported from x86 to ARM... This is nothing new: those codebases were previously ported from Alpha/MIPS/Power/... to x86, and before that they were ported to Alpha/MIPS/Power/...

Apple has ported code to two different architectures and will port again when it abandons x86.

Big companies such as Microsoft, which have spent billions on x86 software, have ported software such as Office to ARM. Server OSes and applications have been ported to ARM. Thus there is no problem here.
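
To make the "porting is largely a recompile" point concrete, here is a minimal sketch. The file name and the toolchain invocation in the comment are illustrative assumptions (the aarch64-linux-gnu-gcc cross compiler shipped by common Linux distributions), not anything the companies above have described:

/* hello_portable.c -- a trivially portable C program (hypothetical example).
 * Nothing in it depends on the CPU architecture, so "porting" it is just a
 * recompile with a different toolchain, e.g. on a typical Linux box:
 *
 *   gcc -O2 -o hello_x86 hello_portable.c                     (native x86-64 build)
 *   aarch64-linux-gnu-gcc -O2 -o hello_arm64 hello_portable.c (AArch64 cross build)
 */
#include <stdio.h>

int main(void)
{
#if defined(__aarch64__)
    const char *arch = "AArch64 (ARMv8)";
#elif defined(__x86_64__)
    const char *arch = "x86-64";
#else
    const char *arch = "another architecture";
#endif
    printf("Hello from %s\n", arch);   /* same source, different ISA */
    return 0;
}

Code like this, which avoids ISA-specific assembly or intrinsics, ports with nothing more than a recompile; the ISA-specific parts of a large codebase are what actually take engineering effort.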
 

juanrga

Distinguished
BANNED
Mar 19, 2013


SMT != CMT != CMP

[image: SMT vs CMT vs CMP comparison diagram]


And as found by Anand

CMT seems to be quite a bit more efficient than SMT.

http://www.anandtech.com/show/5279/the-opteron-6276-a-closer-look/6
 

juanrga

Distinguished
BANNED
Mar 19, 2013


I am pretty sure that the claim was a bit different, especially since the 295x didn't exist then, :sarcastic: but in any case don't forget that next year Intel will be releasing a 'CPU' with a total performance of 3 TFLOPS. The 295x only has 1433 GFLOPS.

For your information, 3 TFLOPS = 3000 GFLOPS > 1433 GFLOPS.

If you believe that means the CPU will be 2.09 times faster than the 295x, the answer is no. Because the CPU avoids the bottleneck generated by the PCIe interconnect of the discrete card, it will be about 4--8 times faster in real workloads.
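
For reference, the 2.09x figure is just the raw ratio of the two numbers quoted above, assuming they are directly comparable: 3000 GFLOPS / 1433 GFLOPS ≈ 2.09. The larger 4--8x claim rests on the PCIe-bottleneck argument, not on the FLOPS ratio.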

Check the slide shown at SC13 (Supercomputing 2013). Can you see the words "today" and "tomorrow"?

[image: Xeon Phi "Knights Landing" GPU/CPU form factor slide]

Don't forget the Nvidia Research quote as well. They expect discrete GPUs to disappear in the long run. I can reproduce the quote again in case you need it.
 

h2323

Distinguished
Sep 7, 2011


I don't think the posts were unfair. Your very well thought-out and informative posts can be suffocating to other participants, but keep them coming, good info. Lots of people are defending AMD from ARM silliness, Nvidia blindness, and Intel lust; it's coming from all sides.

 

juanrga

Distinguished
BANNED
Mar 19, 2013


If you read what AMD's Feldman said about the new server strategy, he said explicitly that AMD is releasing the new Warsaw CPU (x86) only for customers that will be slow to migrate to ARM.

Amazon, Facebook, Google... all of them are already porting software away from x86.

Personally, I agreed some time ago that x86 will remain as a legacy platform for some businesses and people.
 

juanrga

Distinguished
BANNED
Mar 19, 2013


I disagree and I am not alone. His first post received a couple of replies mentioning that he had been a bit unfair and that he had left out people. Do I need to repeat the names given?

P.S.: I liked the link about radiation. Your post was much more informative and useful than the "Aliens!" post from the expert with 'contacts'.
 

etayorius

Honorable
Jan 17, 2013
Can't AMD and ARM come up with some sort of emulation layer in the CPU so that ARM can run x86 code? I know it will not run as efficiently as native x86, but as long as they make it as fast as possible... say at most 30% slower on ARM, then I am in; you can easily gain that performance back in 1-2 generations.

I don't care which CPU runs my old x86 software, as long as it runs on whatever CPU I am using.

I think I read somewhere that IBM was able to run x86 code on one of their not-so-successful hybrid CPUs; it took a slight penalty when running x86, but it was great nonetheless. I can't recall if it was PowerPC or something else... I hope ARM and AMD manage to do this.
 

colinp

Honorable
Jun 27, 2012


Mobile Kaveri? Something less interesting?

Anyway, it's good to see that AMD is starting to "get" advertising, after its cringe-worthy efforts in the past.

Going back off topic: Juan, I've basically started ignoring about 80% of what you write, for the various reasons outlined below, so if I miss a few of your links then forgive me.

  • Sheer volume of posts
  • The size of your posts
  • The boring repetitiveness of your posts
  • The tone of your posts
 

noob2222

Distinguished
Nov 19, 2007


Let me give you a history lesson:

http://semiaccurate.com/2013/05/28/kyoto-becomes-the-amd-opteron-x-series/

[image: AMD Opteron X-Series versus Intel Atom comparison]


Now that AMD's marketing team is trying to push ARM as the new thing, the Opteron X's single-thread performance is only 7.
 

$hawn

Distinguished
Oct 28, 2009
Excuse me please, I need to rant.

AMD is making a huge mistake by not supporting Windows 8 with its latest 14.6+ drivers. In first-world countries, upgrading to Windows 8.1 is probably not an issue at all.
However, I think it's time companies thought about their customers in Asian countries as well. I live in India, and the 3GB online upgrade to Win 8.1 is not economically feasible for me. My entire month's data usage limit is itself 3GB!! The very idea of performing this online update on a 2Mbps connection is ridiculous.

I do have friends who could possibly download a 3GB update over a few days, if Microsoft would provide an offline Win 8.1 upgrade DVD for Windows 8 OEM versions. Unfortunately, those bird-brained people at Microsoft do not.

I wish I had purchased an nVidia GPU. Wouldn’t have to deal with this headache! Feeling betrayed :(
 
Software can be quite easily recompiled to run on a different uarch when needed. A large chunk of typically-x86 code has already been moved over (primarily relating to servers / Linux). Windows RT also shares a code base with Windows 8.x for x86; the problem is legacy software support. That, however, is going to go away over time IMO. How many of you still use software from the 90's? There are still some large organisations using custom software that is pretty old, but I think it will eventually get updated, at which point the underlying platform becomes irrelevant.

No, it won't.

First off, what OS are you going to run on? Not WinRT (which has failed in predictable fashion). So you immediately have more than a recompile; you have, at minimum, a rewrite for a new OS and a new CPU architecture, which essentially means re-developing all your software from scratch. That is not attractive for businesses unless they can be guaranteed to recoup the development costs in sales. Never mind needing to support two significantly different versions of the product over the long term, which means more employees and more costs.

I mean, sheesh, my workplace still has a Windows 3.1 PC running (and has spent a couple of grand keeping it alive) because no one is willing to take the months needed to get a critical piece of software rewritten for Win95 (never mind XP; Vista/7 isn't even considered here). You think an x86/Windows to ARM/Linux(?) port is going to be any more attractive? Heck, we only shut off our last VAX/VMS workstations early this year, and those have been obsolete for at least 15 years.



That costs die space, and therefore money. Intel tried this with Itanium, and x86 performance was lower than running native x86 code; that's one reason Itanium never took off. You say you're fine running your programs on whatever chip, but are you still OK if you take a 25% performance hit compared to what an x86 processor at the same price point would give you?
 

juanrga

Distinguished
BANNED
Mar 19, 2013


I am sorry to hear that, but not only is Microsoft forcing users to update from 8 to 8.1, it is also forcing users to update to the latest version of 8.1, leaving the rest of its users unsupported and abandoned:

http://blog.rjssoftware.com/more-knowledge-more-security/leighs-security-tips/windows-8-8-1-users-potentially-left-stranded-unsupported-microsoft/

http://windowsitpro.com/windows-81/windows-81-update-1-mandatory-update-future-security-update-offerings

http://www.fool.com/investing/general/2014/04/25/why-microsoft-is-already-dropping-support-for-wind.aspx
 

etayorius

Honorable
Jan 17, 2013


There will be pros and cons to this; flexibility is the key point here... One issue could be taking a small performance hit, but that's no big deal: new generations of ARM seem to arrive much sooner, and with much bigger performance gains over previous generations, than x86. So yeah, I would buy such an ARM CPU with the flexibility to run x86; I would rather lose some performance than not be able to run x86 at all, any time.

Whatever happens, I am not giving up my old x86 software EVER.
 


And how would that performance be made up? Itanium couldn't do it. The translation process really can't be sped up much without throwing a bunch of dedicated HW into the mix, which again means space, power and money come into play. Based solely on the cost of doing the conversion, I'd estimate a permanent 20% penalty that will never be significantly improved. And that assumes the ARM chip has all the HW assets a typical x86 chip has. But if the ARM chip in question has to brute-force AVX2 instructions because it lacks the HW support, well, the penalties could be much, much higher. Memory access differences between the architectures could also be a problem.
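
To illustrate where that kind of overhead comes from, here is a toy sketch: a minimal interpreter for an invented three-instruction "guest" ISA, not how a real binary translator is built. Every guest instruction has to be fetched, decoded and dispatched in host software, so even a trivial guest operation costs many host instructions.

/* Toy sketch of software-emulation overhead: a tiny interpreter for a
 * made-up guest ISA. Each guest instruction costs a fetch, a decode (the
 * switch) and a dispatch in host code -- many host instructions for work
 * that native hardware does in one. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

enum { OP_LOADI, OP_ADD, OP_HALT };              /* invented guest opcodes */

struct insn { uint8_t op, dst, src; int32_t imm; };

static void run(const struct insn *code)
{
    int32_t reg[4] = {0};                        /* guest register file */
    for (size_t pc = 0;; pc++) {                 /* fetch */
        const struct insn *i = &code[pc];
        switch (i->op) {                         /* decode + dispatch */
        case OP_LOADI: reg[i->dst] = i->imm;        break;
        case OP_ADD:   reg[i->dst] += reg[i->src];  break;
        case OP_HALT:  printf("r0 = %d\n", reg[0]); return;
        }
    }
}

int main(void)
{
    /* guest program: r0 = 40; r1 = 2; r0 += r1 */
    const struct insn prog[] = {
        { OP_LOADI, 0, 0, 40 },
        { OP_LOADI, 1, 0, 2  },
        { OP_ADD,   0, 1, 0  },
        { OP_HALT,  0, 0, 0  },
    };
    run(prog);                                   /* prints "r0 = 42" */
    return 0;
}

Real translators cache translated blocks to amortise the decode cost, but the hardware-feature mismatches mentioned above (AVX2, memory access behaviour) are exactly where the remaining penalty comes from.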

Never mind that no one outside Intel really knows how their out-of-order scheduler actually works, so unless someone can reverse-engineer that, you'd have to emulate instructions in order, which will cost a TON of extra performance on top of everything else.

And of course, if the ARM chip remains weaker than x86 (as I expect it will), then a 30% hit on ARM performance doing x86 emulation could lead to larger than 30% performance reductions, since you have to factor in ARM's performance relative to x86 as a starting point.

So no, X86 emulation on ARM isn't happening. End discussion.
 

8350rocks

Distinguished


Look, CMP is multiple chips...

SMT is simply Simultaneous Multi-Threading, which means, by definition, any processing method that allows multiple threads of code to be processed simultaneously.

Now, CMT does, in fact, satisfy that definition. It may not be the same way Intel does it; however, CMT is a subset of, or a specific method of accomplishing, SMT.

It is called reading; you obviously think you are better than everyone else, yet you are not able to do it and have not learned the reality that you are not.
 


CMT and HTT are both forms of SMT. The end. Why is this even under debate? The fact that CMT and HTT are implemented totally differently is irrelevant; both satisfy the definition of SMT.
 

etayorius

Honorable
Jan 17, 2013
End of discussion? Like you think you can just shut people up because...? Oh, it will happen if ARM wants everyone to dump x86. As much as I hate Juanrga and his ARM topic, I've got to say that ARM IS RELEVANT in this thread, like it or not, and I will give you a few key points:

AMD gets an ARM license, working on an x86/ARM hybrid and the new K12 with x64
Samsung gets a license and releases their Exynos
Apple starts testing their ARM-based Macs
Nvidia got a license and has been working on their Tegra APUs
Intel gets an ARM license?!
Everyone is on board with ARM licenses...

When will it happen? I HAVE NO CLUE, and not very soon. Don't get me wrong, I really dislike these constant ARM rants over and over, but x86 on ARM will happen if ARM wants to steal the x86 cake for itself; it won't fail to happen just because you say "it can't be done"... and you have been seriously wrong with your latest predictions lately, such as with Mantle and TressFX and DX12.

AMD just announced they will show a new version of TressFX to developers alongside Microsoft... GameWorks has something similar to, or even better than, TressFX, but guess what? TressFX is open source and both AMD and Nvidia can use it on their GPUs with DirectCompute 5. So just make a quick guess which API/engine developers are going to start using from 2016 onward: Mantle and TressFX, which are open and could work on Nvidia hardware, or GameWorks, which is proprietary, with closed libraries, and only runs on Nvidia? All I can see from Nvidia since the announcement of Mantle is massive butthurt crying from them. They seriously missed the train this generation with the consoles, and you can really tell by their actions and answers, like the miracle drivers that claimed more performance than Mantle and turned out to be a damn lie that only worked in SLI situations... Even with GameWorks on Watch Dogs, Radeon cards seem to be smoother and with less microstutter while having much less FPS than the GeForce cards (talking about the 600 series vs the HD 7000 series); that was unexpected.

So yeah, x86 on ARM will happen... everyone is going to ARM... heck, I think even the ARM/x86 hybrid from AMD may do this... oh boy.
 

Cazalan

Distinguished
Sep 4, 2011


This thread sure has got quicker to read since I figured out how to get AdBlock to remove certain users' posts.

I don't see any ISAs disappearing besides IA64. Samsung joined OpenPOWER, Nvidia joined OpenPOWER. You've got Qualcomm and others getting back into MIPS. China has a state-sponsored MIPS chip. If you want competition, having one ISA is a major no-no. Heck, there's a new x86 maker, Rockchip, to throw into the mix.

Intel is even working on a new CPU line, maybe trimming the fat from the x86 ISA. http://semiaccurate.com/2014/05/29/intel-new-line-cpus-works/

Hopefully software translation will get easier as companies become more ISA-agnostic. The crux, though, is usually the hardware drivers, which, depending on the size of your company, you may not have control over. At least that's been the biggest problem for the company I work for. The vendor may have great support for Windows, but the Linux drivers are junk. Tablets/phones have had it easy because all you plug into them are simple USB devices. Come to the PC world, where there are thousands of PCIe devices that we can plug in, and it's not so simple anymore.
 


Because executables are compiled into native CPU code. That is why GPUs are automatically second-class citizens: unless you have CPUs and GPUs running the exact same ISA, GPUs will ALWAYS be secondary to the CPU. Even in HSA, you still need the GPU drivers to handle the workload, which means CPU intervention to get the GPU to do anything.

Secondly, on the desktop, x86 isn't going anywhere. Everything is being done in x86. There's no major OS that supports ARM (Linux does it, kinda, but it's very immature compared to Linux on x86), and there is no software to run. Every CPU maker could put out ARM chips (remember Intel's StrongARM?), but no one would buy them, simply because there's nothing to do with them.

Your mindset is essentially "if we build it, they will develop for it" or "one killer app will get people to buy it". Ask Nintendo how well that's working for them.

Consumers follow the product. The products they want are on x86, so they go x86. It's that simple.

Seriously, this is PowerPC all over again. I had this same exact discussion almost two decades ago, and guess what? x86 is still here, and PowerPC on desktops is an afterthought.
 
Intel is even working on a new CPU line, maybe trimming the fat from the x86 ISA. http://semiaccurate.com/2014/05/29/intel-new-line-cpus-works/

Summary? I wouldn't be shocked if they tried removing the older parts of x86, such as 16-bit processing, now that Windows no longer supports it, at least on some subset of chips. More likely, Intel is moving on from Core and building something new.
 