AMD CPU speculation... and expert conjecture


8350rocks

Distinguished


Take a 1 billion transistor chip, with memory controllers and all the other stuff x86 has to make it work as efficiently as it does, turn it up to 3.5-4 GHz, and see if the power consumption isn't in line with what it is for Intel or AMD at those frequencies.

ARM's linchpin strength was its simplicity. If you up the transistor count and complexity, you're going to increase power consumption, which only validates my point that it doesn't scale upward like you think it will.

It scales well in parallel, meaning large clusters can do things well together; if you keep the design simple and don't bloat it with all the things that make x86 best at what it does, it will use less power. If you try to compete with the x86 ISA, which is very mature, you're going to lose that race. It would take ARM 20+ years to reach that capability and compete in a meaningful way.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I have been asking how the Z3770 was able to run at full turbo during the tests. Being given more headroom than a phone did help, but this is an important fact that Anand omitted to mention in his _preview_:

These preview benchmarks were taken in a room controlled by Intel which was incredibly cold; it's described as "literally it was a refrigerator in there".

LOL

I know several of Intel's benchmarking tactics, but this one is new to me. Noted.
 

8350rocks

Distinguished


They've been doing that for years...many benchmarks from Intel are run in a room with an ambient of 15-18°C (59-64°F). They are wily, manipulative, and evil; however, they are not stupid.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Evidently ARM will be good for some things and bad at others, just like anything else.

With all due respect, I doubt that you have enough knowledge of the ISAs to tell me anything that I will trust; only a couple of posts ago you didn't know what CISC means, and I doubt that you knew that modern Intel/AMD x86 hardware really executes RISC-like uops before it was said here. Otherwise, you would not be trying to tell us that CISC cannot be replicated by RISC.

But this attitude against ARM is not very different from me telling you for weeks (months?) that no Steamroller 8-core FX chip is coming and you claiming the contrary. Yesterday, when you PMed me about a supposed AM4 socket mobo that you believed you had found online, I answered you with total respect, saying that you had merely found a typo on a website. I wonder who here is falling on deaf ears.



I already addressed this before, showing with real-world data for a __single core__ why your claim that ARM doesn't scale well upward is wrong.

I find your comment about going parallel particularly interesting. It is both Intel and AMD who are trying to compete in performance with Apple's dual-core ARM design using x86 quad-cores. Also, Nvidia's high-performance ARM chip will probably (if leaks are right) be an 8-core chip, whereas AMD's Warsaw parts are 16-core chips.

In fact, when you start saying pure nonsense such as "It would take ARM 20+ years to reach" x86, you don't sound very different from that other guy here who claims something similar about AMD vs Intel.
 

8350rocks

Distinguished
Interesting read:

http://www.extremetech.com/extreme/167168-the-chip-that-changed-the-world-amds-64-bit-fx-51-ten-years-later/2

David vs. Goliath: It’s a nice story

One thing we know more about now than we did then is just how much pressure Intel brought to bear on everyone, behind the scenes. There were always off-the-record conversations with nervous motherboard vendors about why their AMD product samples shipped in plain white boxes, or why the motherboards lacked brand names. When SuperMicro introduced an Opteron motherboard in 2005, the company refused to acknowledge its existence. Intel’s own compilers refused to run SSE or SSE2 code on compatible AMD processors; applications would check for the “GenuineIntel” string when running these programs rather than simply checking to see if SSE2 was supported on the processor. That’s a particularly low blow considering AMD paid Intel for licenses.

In its 500-plus-page findings of fact, the European Union laid out repeated demonstrations of how Intel used predatory rebate practices to keep companies from carrying more than a certain percentage of AMD hardware. The basic scheme worked like this: If an Intel chip normally cost $100, but you bought 90% Intel processors, Intel would cut you a $25 rebate check per chip at the end of the quarter. If, however, you sold 85% Intel processors, you got nothing. A company that sold 100,000 chips in a quarter and kept 90% Intel volume could expect a $22.5 million rebate check.

In order to compete with Intel’s rebates, AMD had to offer an equivalent price savings, but on a vastly smaller number of chips. In one situation, AMD offered to give HP a million processors, for free, if it would use them to build systems. HP responded that it couldn’t afford to do so, because the total value of a million free processors was smaller than the value of Intel’s rebates. Whether or not this would have been found to be a violation of US antitrust law is a matter of conjecture; AMD and Intel settled their differences out of court. But Intel’s systemic sabotage of its rival undermined AMD’s ability to maximize its own profits during the 2003-2006 window.
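A quick sketch of why that threshold structure acts as a cliff (the inputs below are just the article's illustrative ones fed through the scheme; the exact dollar totals are not the point):

# Toy model of the all-or-nothing threshold rebate described above.
def intel_rebate(total_chips, intel_share, rebate_per_chip=25.0, threshold=0.90):
    """Quarterly rebate an OEM receives; nothing at all below the threshold."""
    intel_chips = total_chips * intel_share
    return intel_chips * rebate_per_chip if intel_share >= threshold else 0.0

quarter_volume = 100_000
for share in (0.95, 0.90, 0.85):
    print(f"Intel share {share:.0%}: rebate ${intel_rebate(quarter_volume, share):,.0f}")

# At 90% share the OEM collects the full rebate; at 85% it collects nothing.
# Shifting even 5% of volume to AMD therefore costs the OEM the entire rebate
# check, which is exactly the pressure the HP example above describes.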

 
So this is how it rolls: AMD did exactly the same as Nvidia with the new cards. They added a few shaders, changed memory and core clocks, improved power gating and power-saving features, slapped it all together, and there you have it: everything up to the R9 280X is basically a modified Tahiti, Pitcairn, Cape Verde, Bonaire, etc. The R9 290X, though, is new; it is AMD's Titan, so to speak, and yes, there will be no AMD dual GPU this round.

The win factor lies in the fact that AMD has smaller dies across the board than Nvidia, so basically similar (or slightly more or less) performance, but with less expense and circuitry. Fus ro dah.


AMD will also have new drivers to fix Eyefinity and higher-res bugs, and to further improve frame pacing.

In other news, it's likely that Nvidia will not release a 790, as the Titan and R9 290X show that dual GPUs have hit their sell-by date: too much power, too much heat, too many scaling issues, and you need a mini nuclear power plant to run them.
 

8350rocks

Distinguished


I know what RISC vs. CISC is...ARM is RISC and x86 is CISC. I may have confused Complete with Complex...however, the end result is the same.

RISC runs differently, and yes, x86 CPUs have had underlying RISC architecture characteristics for a long time, since the Pentium days really; however, as was pointed out by Palladin earlier, x86 allows MUCH larger instructions to be converted into internal micro-ops to be run on the CPU itself.
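To make that decode step concrete, here's a toy sketch of what "cracking" a complex instruction into RISC-like micro-ops looks like (the crack() function and the tuple format are invented for illustration; real x86 decoders are far more involved):

# Toy model of one CISC-style instruction being cracked into RISC-like micro-ops.
def crack(instruction):
    """Map one architectural instruction to a list of simple micro-ops."""
    op, dst, src = instruction                    # e.g. ("add", "mem[0x100]", "rax")
    uops = []
    if dst.startswith("mem"):
        uops.append(("load",  "tmp", dst))        # read the memory operand
        uops.append((op,      "tmp", src))        # do the ALU work on registers only
        uops.append(("store", dst,  "tmp"))       # write the result back to memory
    else:
        uops.append((op, dst, src))               # register-only op stays a single uop
    return uops

print(crack(("add", "mem[0x100]", "rax")))        # memory-operand add -> 3 micro-ops
print(crack(("add", "rbx", "rax")))               # register add -> 1 micro-op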

The added complexity of x86 was my point. That is easily its greatest strength while also being an inherent weakness.

ARM's strength and weakness both lie in its simplicity. That's why it excels in low-power devices and simple things like microservers. It's also the primary reason it's not a terribly valid desktop uarch option.

In order to make ARM a serious x86 competitor, you would have to add several layers of complexity. That complexity would drive up transistor counts and die complexity, requiring quad-channel memory controllers and many other things that draw more power. Once you've done that, you have an x86 competitor...that no longer consumes power like a mobile/low-power solution. Because the added complexity draws more power, your consumption numbers spike upward dramatically.

The x86 ISA has been dealing with this in its architecture since the 586 (K6-2) days. ARM has not had the benefit of time spent tweaking its architecture for such added complexities, and it takes a *LONG* time to get that stuff right. That is part of what we see in the current AMD and Intel uarchs: Intel has a far more *REFINED* uarch, because it hasn't changed dramatically since the P4 days, whereas AMD's uarch is only 2-3 years old at this point, and not nearly as well refined and tuned.

So, what you're talking about with ARM taking over DT, or even a large share of notebooks, etc., will not come about for quite some time.

The reasons for this are simple:

1.) Most of the consumer DT world runs on Windows; like it or not, M$ still has some clout. They don't want to redesign their OS entirely for ARM, and WART is a terrible execution.

2.) With no major OS player taking ARM seriously any time soon, hardware advances will come slowly because it's not in high demand. The only way ARM becomes a big player is if some large player in the PC world backs it and pushes hard. AMD making microservers using ARM is not that push into the consumer sector you expect. It's a gimmick to say "see, we can do low power better than Intel", nothing more.

3.) Without a major OS player backing ARM, the development of consumer software will be slow. Open-source groups may do something, but how has that worked out for Linux so far? Outside of Android, it's still not terribly popular as an OS in the PC world, considering roughly 3% of the world is likely running it on a DT PC. I think Linux should see more use than it does, but in the consumer space, M$ is still king.

4.) As ARM adds complexity, it will add power consumption, and the more you need the ISA to do, the more convoluted the hardware, middleware, and software become to do those things. So, as the hardware adds complexity, the power draw increases. Once you get ARM running at a 50W+ TDP, x86 becomes a clear winner, which is what would happen with a billion-transistor ARM chip running at 4 GHz.

So, you may not like what I am saying, and you may disagree entirely. However, your declaration that ARM will rule the desktop any time soon is a mere pipe dream...much like Acorn's was back when it first started. That's why ARM is a company that designs cores and licenses them, not one making chips and selling PCs.

That's my $0.02
 

8350rocks

Distinguished


Nice...so we get new high end stuff, and the new 9870 is basically, now, a full blown Tahiti XT with some tweaks...

Interesting.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


Only one occasion ... it includes ALL of A7 vs Atom. Yes, AT claimed the Atom ran at 2.4 GHz, then following that said that its scheduler is fked up. Do you even know what that means? I'll try to put it in words anyone can understand, but you will still not grasp the concept.

It ran 4 threads on anywhere between 1 and 4 cores. When you run 4 threads on 1 core, that's the most inefficient way possible, as the main thread is going to be constantly put in a wait state. This is clearly stated with the "there was some definite stutter" remark. But I guess it was my imagination that AT stated that.
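For anyone who wants to see that effect, here's a rough, Linux-only sketch (a made-up busy-loop workload; it assumes the machine actually has at least four cores, and the timings are illustrative only):

# Pin four CPU-bound workers either to one core or to four cores and compare.
import multiprocessing as mp
import os, time

def worker(core_set):
    os.sched_setaffinity(0, core_set)      # restrict this process to the given cores
    n = 0
    for i in range(10_000_000):            # purely CPU-bound busy work
        n += i * i
    return n

def run(core_set):
    start = time.perf_counter()
    with mp.Pool(4) as pool:
        pool.map(worker, [core_set] * 4)
    return time.perf_counter() - start

if __name__ == "__main__":
    print("4 workers on 1 core :", run({0}))           # all four fight over core 0
    print("4 workers on 4 cores:", run({0, 1, 2, 3}))  # one worker per core

The one-core run takes roughly four times as long, because at any moment three of the four workers are sitting in a wait state, which is where the stutter comes from.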

Aside from that, how are we supposed to believe any of the crap you're pulling out of your rear orifice? You provide no links, make up phony benchmarks, change your stories, brag about other forum posts, claim inside NDA information from AMD, and never say the same thing twice except "ERMAGO ARM OWNS ALL, WHY NO PERSON LISERN TO MA."

Give us a reason to listen if you want to be taken seriously. Parroting marketing information isn't "proven facts".

Come up with something besides "ERMAGO YOU IMARINARY."
 


Wait, first Paladin agrees with me, now I agree with you?

...

ABANDON FORUMS!
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790

Agreed entirely. But it is fair to also quote the neighbouring paragraph:

That’s not to say AMD didn’t bear responsibility for its own situation. It’s not Intel’s fault that AMD paid twice what it should have for ATI, or that the company’s Fusion timeline had to be repeatedly restarted. AMD made poor design decisions with K10 and Bulldozer, and those aren’t Intel’s fault, either. However, it’s fair to say that if AMD had been making several hundred million dollars more per quarter than it actually earned from 2003-2006, these situations might have played out differently. So it goes.



Thus, the words "Well I was not referring to the AT preview" didn't impede your rant on the AT preview. Why am I not surprised?
 
https://archive.foolz.us/v/thread/211165675

Second, it's customizable. The console myth -- that they're viable because the hardware is optimized -- is just that: a myth. They're viable because Windows is unoptimized. Game engines on Linux are faster and more responsive than on Windows. That's not unexpected, though. You can find benchmarks that prove that Linux is better; they've been around forever. Things like Handbrake, the little screen capture test up above, and so on. Where it gets really interesting is when you swap out whatever the fuck you want for something else. Change the CPU scheduler for one that favors extremely low latencies, and suddenly the entire thing is 5-20% more responsive. We could change the way the kernel sends data to the GPU one day, and the users wouldn't notice the update. It's crazy. Then, consider this: it's the same thing consoles do. This sort of optimization is what console fanboys brag about all the time -- that, because of the aggressive optimizations, console hardware is better than PC hardware.

BRING THE SALT!

Cheers! xD
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


The problem is not only the word you used; you seem to believe that CISC can do more things than RISC, because you seem to believe that CISC is more "complete".



And as pointed out before, those larger instructions will take more time to execute. I am still not sure you understand the difference between CISC and RISC. You seem to believe that CISC is faster, when both are complementary approaches to reducing the execution time of a program.



As mentioned before, CISC had an evident advantage in the era when memory was scarce. That time is gone.



There isn't anything in ARM incompatible with desktop or high-performance servers.



As mentioned before, quad-channel ARM chips already exist. Also, who told you that a high-performance ARM chip will consume the energy of a phone?



It goes in just the opposite direction. The easy way for ARM would have been to widen the 32-bit arch, more or less like when x86_64 was conceived as an extension/superset of x86_32. However, ARM took a different route and spent many years designing a clean and elegant ISA which is not an extension of the 32-bit one, learning from the mistakes and successes of 20 years of 64-bit ISA history.

As a consequence the ARM64 ISA is modern; it is not an ISA from the year 2000, as you seem to believe.

In fact, you keep writing about evolving by "adding complexities", when the new ARM64 actually eliminates several things from the previous ARM32; for instance, all the load/store-multiple instructions have been removed from the ISA.
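To make the load/store-multiple point concrete, here is a minimal sketch (pseudo-assembly strings only; lower_ldm() is an invented helper, not a real tool) of how one ARM32-style LDM that fills a register list gets expressed as simple AArch64 pair loads:

# Expand one ARM32-style "load multiple" into AArch64-style LDP/LDR instructions.
def lower_ldm(base, registers):
    out, offset = [], 0
    regs = list(registers)
    while len(regs) >= 2:
        r1, r2 = regs.pop(0), regs.pop(0)
        out.append(f"ldp {r1}, {r2}, [{base}, #{offset}]")   # load a pair of 64-bit regs
        offset += 16                                         # two 8-byte registers per pair
    if regs:
        out.append(f"ldr {regs[0]}, [{base}, #{offset}]")    # odd register left over
    return out

print(lower_ldm("x0", ["x1", "x2", "x3", "x4", "x5"]))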



As mentioned before, the scenario is not ARM+Windows but ARM+Linux.



I don't know what you mean by a major OS player, but Google and Microsoft are taking ARM very seriously.

Who told you that the market will be pushed by AMD alone? What I said was that AMD+Nvidia+Apple+Dell+HP+Samsung+··· will do it.



ARM and Linux currently hold the biggest piece of the overall market share. There are more people using Android than using Windows.

Android is not designed for the desktop; therefore your argument fails. Also, existing ARM chips were not designed for the desktop; therefore your argument fails again.



As explained above, ARM64 goes precisely in the direction of simplifying things, at both the software and the hardware level. For instance, the new ARM64 has been designed to be easier to implement in modern process technology than the old ARM32.

It was also shown to you before, with real data, that with each new generation ARM is more powerful without sacrificing efficiency. The new A57 core is much faster than the A15 core but maintains the same power consumption level.

It is funny that you claim x86 is a clear winner above 50W TDP, because AMD's Seattle must be above that value and, still, AMD is replacing x86 with it, telling us that all its future servers will be based around ARM, with x86 server chips maintained only for customers slow to make the transition.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


When Valve ported the first game from Windows to Linux, they found a significant increase in performance. What surprised them most was that the game had been optimized for Windows over years, whereas no such level of optimization had been done on the Linux port.

And additional performance advantages will come from SteamOS being optimized for gaming.

In one sense, this is not very different from why supercomputers use Linux instead of Windows 7 (sarcasm).
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Where do you get all that from this roadmap? There are 2 tiers of product above the ARM device. AMD isn't going to abandon x86 on the low end either. They would be handing over all Win7/Win8/Win9 client sales to Intel.

pr-server-roadmap.jpg


 


what the hell is a 9870? :D



 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
Cazalan. As explained before in this thread, Seattle replaces Jaguar-based servers. Warsaw is released only for legacy workloads (that is why AMD didn't bother to use Steamroller), and Berlin will be aimed at pushing HSA further (because Seattle is not HSA yet).

The idea that all future server products will be based around ARM follows from AMD's secret slide, which I reproduced. I add it again:

why-arm-will-win.jpg
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


I welcome the day when I can switch to Linux for Gaming. It's just taking a REALLY LONG TIME. I will probably try SteamOS on my spare 128GB SSD but there's not much I would actually play.

The number of titles is increasing rapidly, but they're mostly crap Android/iPhone-class games.

http://steamforlinux.com/?q=en

Dota 2 is a huge one for Linux. That's one of the most-played games on Steam.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


ANSWER THE QUESTION: what fake source are you referring to, or did you make that one up too?
 

8350rocks

Distinguished


You're starting to annoy me. CISC can do more things than RISC can because of the SIZE of the instructions you can use. 64-bit ARM cannot process 256-bit floating point instructions. It's also not designed for complex instructions that take more than 1 clock cycle!

And as pointed out before, those larger instructions will take more time to execute. I am still not sure you understand the difference between CISC and RISC. You seem to believe that CISC is faster, when both are complementary approaches to reducing the execution time of a program.

CISC isn't necessarily faster...and OF COURSE those instructions will take more time to execute, as they cannot be done in a single clock cycle...some of them take several clock cycles to run. This is an inherent limitation in RISC: you cannot get as complex in your instructions. Also, it depends on the instructions you're running...if we're talking 128-bit or 256-bit instructions...then the CISC x86 ISA is *FAR* better for that than ARM. It's not even a contest at that point.
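For the record, that "complementary approaches" point boils down to the usual execution-time identity; a worked sketch with made-up numbers, not measurements:

# time = instructions * CPI * cycle_time; RISC trades a higher instruction
# count for a lower CPI, CISC does the opposite, and the product is what matters.
cycle_time = 1 / 3.0e9                      # assume a 3 GHz clock for both
risc = dict(instructions=1.2e9, cpi=1.0)    # more, simpler instructions
cisc = dict(instructions=1.0e9, cpi=1.2)    # fewer, more complex instructions
for name, m in (("RISC-ish", risc), ("CISC-ish", cisc)):
    t = m["instructions"] * m["cpi"] * cycle_time
    print(f"{name}: {t * 1e3:.1f} ms")      # both come out to 400.0 ms here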


As mentioned before, CISC had an evident advantage in the era when memory was scarce. That time is gone.

It *STILL* has clear advantages...I am beginning to wonder if you're hafijur's evil twin...just on the ARM bandwagon. Clearly you haven't read anything about the differences between x86 and ARM.

Do you know the difference between RISC and CISC? Other than what Wikipedia tells you is different?


There isn't anything in ARM incompatible with desktop or high-performance servers.

Except that it doesn't have the capability to process long, complex instruction strings like x86...other than that issue, you're right.


As mentioned before, quad-channel ARM chips already exist. Also, who told you that a high-performance ARM chip will consume the energy of a phone?

There are *NO* hard numbers on these chips for power consumption at the level of complexity you're discussing, and I guarantee you, the TDP is going to be quite a bit higher than you think.

It goes in just the opposite direction. The easy way for ARM would have been to widen the 32-bit arch, more or less like when x86_64 was conceived as an extension/superset of x86_32. However, ARM took a different route and spent many years designing a clean and elegant ISA which is not an extension of the 32-bit one, learning from the mistakes and successes of 20 years of 64-bit ISA history.

As a consequence the ARM64 ISA is modern; it is not an ISA from the year 2000, as you seem to believe.

In fact, you keep writing about evolving by "adding complexities", when the new ARM64 actually eliminates several things from the previous ARM32; for instance, all the load/store-multiple instructions have been removed from the ISA.

The age of ARM's 64-bit ISA is irrelevant; my point is...ARM IS NOT REPLACING X86 ANYTIME SOON.

I don't know what you mean by a major OS player, but Google and Microsoft are taking ARM very seriously.

Who told you that the market will be pushed by AMD alone? What I said was that AMD+Nvidia+Apple+Dell+HP+Samsung+··· will do it.

If by seriously, you mean WART...LOL!!!! Also, Android runs on ARM on mobile, but it's not a desktop OS, nor is it even close.

ARM and Linux currently hold the biggest piece of the overall market share. There are more people using Android than using Windows.

Android is not designed for the desktop; therefore your argument fails. Also, existing ARM chips were not designed for the desktop; therefore your argument fails again.

Right, but when you eliminate tablets and phones, who wins? We're talking DT PC; your argument fails because of false equivalence. Phone/tablet != DT PC.

As explained above, ARM64 goes precisely in the direction of simplifying things, at both the software and the hardware level. For instance, the new ARM64 has been designed to be easier to implement in modern process technology than the old ARM32.

And...? Linux has a better scheduler than Windows nearly across the board...it isn't dominating DT PC, and it's had a better kernel than Windows for YEARS.

It was also shown to you before, with real data, that with each new generation ARM is more powerful without sacrificing efficiency. The new A57 core is much faster than the A15 core but maintains the same power consumption level.

What real-world numbers have you shown? There isn't an A57 out yet...if you're talking about the A7 from Apple...then this discussion is over, as you're yet again extrapolating DT performance from *MOBILE* chips, which is a fool's errand in itself...

It is funny that you claim x86 is a clear winner above 50W TDP, because AMD's Seattle must be above that value and, still, AMD is replacing x86 with it, telling us that all its future servers will be based around ARM, with x86 server chips maintained only for customers slow to make the transition.

If you're already burning that much power, why wouldn't you take the backwards compatibility and the capability to run more complex instructions? There's no argument against x86 at that point...at all!

Don't act like I have no clue what I am talking about...you clearly reveal that you are not aware of all the issues at hand.

I want to hear *YOUR* explanation of the difference between RISC and CISC. I understand the difference...but the true question is...*DO YOU*?

 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


2015/2016/2017 is not "the long run"; that is near term. Also, that's completely ignoring the effect that Intel and Microsoft have on the marketplace. Combined, that is a 400-billion-dollar giant. The industry isn't going to just roll over and let ARM assimilate everything. Then there is China's state-sponsored Godson/Loongson chip, which is MIPS based.

Also, there's a glaring error in that slide. Customers don't give a darn about what the chips are. Most customers don't even have a clue. They just want products that run faster, better, and cheaper than before. As we've seen with Bay Trail and Quark, Intel is willing to compete on price as well. If Microsoft lowered their OS price to $39 they could see a serious uptick in sales. Android is technically free, but it's heavily subsidized by Google. That may force Microsoft to make a similar move.

With the PS4/XBone being x86, that's at least another 6 years where x86 will dominate gaming. Are there any Android/iOS games generating the 1 billion dollars GTA V just launched at?

Regarding Seattle, we don't know what it has for GPU capabilities at all. It could be just A57 cores, which is all AMD has licensed. They're not an ARMv8 architecture licensee like Apple is.

Also, regarding profits, only Apple and Samsung are making a lot of money with their ARM-based products. The rest of the industry is struggling even with cheap, abundant ARM chips. It's not all roses for either camp.
 

8350rocks

Distinguished
*sigh* With all the stuff AMD crammed into their new card...I am going to have to buy a new GPU sooner than I wanted.

Hasn't been shown yet, but I am calling it here first:

R9-280 was $299

I wager R9-290 is $399 and R9-290X is $499
 



I bet you a can of beer 290X will be $549, and 290 will be $400-450, like the 7970/7950 setup in late 2011.
 

8350rocks

Distinguished


LOL...it could very well be $549...

EDIT: MANTLE is coming!!! That's going to take graphics to a whole other level! BF4 is the pilot game!!
 