AMD CPU speculation... and expert conjecture


so far only intel has that kind of design: haswell, ivy bridge (cherry-picked skus), and atoms (sans the igpu).
amd has brazos and jaguar, but nothing from its high-performance line (apus, cpus) that scales down to tablet level.
nvidia has tegra, but that doesn't scale up to high performance.
imo, among this year's contenders only amd has a balanced cpu+igpu solution, but intel has the better full-blown soc design (wifi, security, camera, modem, etc.).

damn right. performance per watt is a b.s. metric that no one cares about, and desktop and retail cpu sales are what drive revenues. :whistle:

edit:
looks like richland..
http://www.techpowerup.com/183635/msi-gx70-gaming-series-notebook-combines-latest-amd-hardware-for-crysis-3.html
 

jdwii

Splendid



After fixing some crappy quotation issue where it was quoting me, I must say it's still a huge disappointment. Anyway, who even wants Intel in a laptop unless it's used for high-end gaming or high-end work? So I guess, talking from a PURE power-efficiency level, we don't know yet, but I honestly thought a tock was more about performance and a tick was about increasing efficiency, no?

Thank you, de5_Roy. Performance per watt is another way of saying a 15% increase in performance per watt, which could mean just a 5% improvement in performance and a 10% improvement in efficiency. Not a bad metric by any means, but that's not what a tock was about, at least I never thought it was. I thought it was Intel trying to get a bigger speed boost, and after all the hype around its graphics performance, well, enough said.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


AMD lost its exclusivity contract with Cray now that Intel has bought part of Cray. So Opterons will continue to be used in a possible upgrade of the Titan supercomputer, but not in the future Cray supercomputer, which will use Intel Xeons.

I don't think x86 Opterons/Xeons are the future. ARM offers better performance/watt than both, and AMD plans to dominate the server market by introducing future ARM-based Opterons.

I think AMD's plan to introduce 64-bit ARM could put them in a situation similar to when they introduced 64-bit x86 to the market :)

The Mont-Blanc project is building the most powerful supercomputer ever and they are using ARM chips! I could easily imagine AMD providing ARM Opterons in future Crays.
 

mayankleoboy1

Distinguished
Aug 11, 2010
2,497
0
19,810


A recent benchmark by Phoronix tested this hypothesis.
They found that Ivy Bridge offers better perf/watt than ARM. I fully expect Haswell to improve on this significantly.



Edit: http://www.phoronix.com/scan.php?page=article&item=phoronix_effimass_cluster&num=16
This was done in June 2012, with Ivy Bridge vs. the Cortex-A9.

Is the A15 better than the A9 in perf/watt? IIRC, Anandtech in their Exynos 5 Dual review said it offered thrice the perf for twice the power (or was it twice the perf for thrice the power?).

Even ARM is finding that increased perf comes with a power disadvantage. Even in 2013, the top smartphones use cores that sit between the A9 and A15 (Apple Swift and Qualcomm Krait). The one device that uses the A15 (Nexus 10) has poor battery life.
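Whichever way round that Anandtech figure actually was makes all the difference; a quick back-of-the-envelope check (with the A9 simply normalized to 1x, purely illustrative numbers) shows one reading is a perf/watt gain and the other is a regression:

```python
# Rough check of the two readings of the Exynos 5 Dual claim above.
# The A9 baseline is normalized to 1x perf at 1x power; these are
# illustrative placeholders, not measured figures.
a9_perf, a9_power = 1.0, 1.0

# Reading 1: "thrice the perf for twice the power"
ppw_reading_1 = (3 * a9_perf) / (2 * a9_power)   # 1.5x the A9's perf/watt

# Reading 2: "twice the perf for thrice the power"
ppw_reading_2 = (2 * a9_perf) / (3 * a9_power)   # ~0.67x, i.e. a regression

print(ppw_reading_1, ppw_reading_2)              # 1.5 0.666...
```

So which reading is right decides whether the A15 is a perf/watt win or a loss versus the A9.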
 


We talked a little about this a while ago, and yeah, the A15 is a big step in perf/power over the A9. But use the Samsung Exynos for that comparison; they seem to have better ramps, and their SoCs perform better than the other A15 parts out there.

Cheers!
 
A15s aren't that much better for performance/watt, judging from how Tegra 4 is looking. To scale to supercomputer performance, the number of cores and nodes required means x86 will command a huge lead. ARM will be good in microservers and file servers, but for HPC I'm guessing x86 and whatever is left of POWER will be the CPU choices, along with GPGPU.
 

Blandge

Distinguished
Aug 25, 2011
316
0
18,810
After fixing some crappy quotation issue where it was quoting me
Yeah, there's something weird happening with the forums.

I must say it's still a huge disappointment. Anyway, who even wants Intel in a laptop unless it's used for high-end gaming or high-end work?
A lot of people. Intel doesn't have 80% market share because they are only good for "high end gaming or high-end work." For the vast majority of users, even HD 3000 graphics is sufficient for how they use their computers.

So I guess, talking from a PURE power-efficiency level, we don't know yet, but I honestly thought a tock was more about performance and a tick was about increasing efficiency, no?

Thank you, de5_Roy. Performance per watt is another way of saying a 15% increase in performance per watt, which could mean just a 5% improvement in performance and a 10% improvement in efficiency. Not a bad metric by any means, but that's not what a tock was about, at least I never thought it was. I thought it was Intel trying to get a bigger speed boost, and after all the hype around its graphics performance, well, enough said.

No. This is completely false. There's no single thing that ticks or tocks are "about". Intel sets design goals for their process nodes (ticks) and for their microarchitectures (tocks). In the past it was generally a push towards higher performance, but when Intel (along with the whole computing industry) came to realize that the generation-over-generation rate of power increase was unsustainable, they started hitting the brakes with Sandy Bridge and Ivy Bridge. Eventually it became apparent that mobility was the next big thing, so Intel began to push aggressively to lower PLATFORM (not just CPU) power consumption, and now we're finally seeing the realization of this paradigm shift with Haswell and beyond.

Additionally, Intel's graphics are on a completely different cadence than their CPUs. Intel Iris (Haswell) adds more compute resources, but there is no major overhaul of the graphics engine. The last major microarchitectural advancement of the graphics unit was in Ivy Bridge, which was a tick, so using that as evidence that tocks are somehow always about performance is simply incorrect.

Also, your math is wrong. You say 15% perf/watt increase is equivalent to 5% increase in perf and 10% increase in efficiency (I think you mean 10% decrease in power, because perf/watt could be considered a measure of efficiency).

Let's assume we have a system that runs at 100W and scores 100 on some benchmark. Perf/watt = 100/100. Now increase performance by 5% and decrease power by 10%. New perf/watt = 105/90, which is a 16.7% increase in performance per watt.
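Just to sanity-check that arithmetic, here's the same made-up 100 W / score-100 example worked through (illustrative numbers only):

```python
# Sanity check of the perf/watt arithmetic above, using the same
# made-up baseline: 100 W and a benchmark score of 100.
baseline_perf, baseline_power = 100.0, 100.0

new_perf = baseline_perf * 1.05     # +5% performance
new_power = baseline_power * 0.90   # -10% power

old_ppw = baseline_perf / baseline_power    # 1.000
new_ppw = new_perf / new_power              # ~1.167

print(f"perf/watt gain: {(new_ppw / old_ppw - 1) * 100:.1f}%")   # -> 16.7%
```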
 

Blandge

Distinguished
Aug 25, 2011
316
0
18,810


Or do they have a monopoly because they have 80% of the market?

But I understand the point you are trying to make. Beyond the merits of technical specs, Intel also has a stronger brand, better marketing, and enough volume to keep the channel full. My point is that Intel's CPUs are not ONLY good for high-end workloads, as jdwii said. That's an exaggeration. You get better CPU performance with adequate GPU performance. The opposite can be said about AMD APUs.
 

8350rocks

Distinguished
Yes, the IBM POWER architecture and the new SeaMicro system from AMD (using Opterons) are the two most efficient performance-per-watt servers at this time, with AMD being the most efficient and POWER second. Look at the SeaMicro SM15000 Opteron solutions.

Ironically, it seems as though AMD's worst performance-per-watt product is the desktop FX series. Not that I mind; the difference isn't huge, but I can imagine the power consumption coming into a similar range as the other products sooner rather than later.
 

Blandge

Distinguished
Aug 25, 2011
316
0
18,810


Also this for HPC: http://www.green500.org/lists/green201211

 
They were the big dog before AMD was even created. They had tight control of the market and have more capital than AMD to invest in all areas. They sold more P4s than AMD ever sold Athlons. They make more money. They can pay for more R&D, they can pay for more advertising, and they can pay to keep AMD out.
 

Mitrovah

Honorable
Feb 15, 2013
144
0
10,680
The constant revolving door of news about the AMD APUs makes me wonder: should I wait for the Kaveri APU instead of hopping on the Richland bandwagon when it's released in June?
 

Blandge

Distinguished
Aug 25, 2011
316
0
18,810


AMD was founded less than a year after Intel. Source: Wikipedia
 

I wasn't aware that AMD was founded before they started making x86 processors. But the point is: Intel had the 8080, and AMD was only there to make sure IBM had its supply. Intel has been the major player and has controlled pretty much the entire market since AMD started making x86 CPUs. AMD is 1/50 the size of Intel, and that has pretty much been the case since the '80s. Intel only keeps AMD around for the same reason Microsoft kept Apple alive when Apple was about to keel over and die.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810
Really need to look up the definition of a monopoly. There's a whole generation of kids that couldn't give a rat's arse about the home PC. They're too busy on their tablets, smartphones and handheld gaming systems.

Everything I needed a PC for in college can be done on a cell phone now. There are specific market segments that Intel serves better, but that exclusivity shrinks every day.

A lot of ultra-high-end computing/networking is actually done on FPGAs, which can cost in the tens of thousands each. They make the Intel Xeon 7xxx family look cheap.

If Intel really had a monopoly there wouldn't be $199 netbooks.
 

8350rocks

Distinguished


http://www.amd.com/us/press-releases/Pages/amd-seamicro-sm15000-2013apr11.aspx

AMD’s SeaMicro SM15000 system is the highest-density, most energy-efficient server in the market. In 10 rack units, it links 512 compute cores, 160 gigabits of I/O networking, more than five petabytes of storage with a 1.28 terabyte high-performance supercompute fabric, called Freedom™ Fabric. The SM15000 server eliminates top-of-rack switches, terminal servers, hundreds of cables and thousands of unnecessary components for a more efficient and simple operational environment.



http://www.seamicro.com/SM15000

3.2 kW average power consumption

You're talking about a supercomputer measured in gigaflops... and bragging about 44 kW of power consumption for one benchmark...?

Titan runs at 17.59 petaflops...

Green HPC w/ Intel Xeons = 112,900 GFLOPS (consumed ~45 kW, or 45,000 W)
Titan w/ AMD Opterons = 17,590,000 GFLOPS (consumes 8 MW, or 8,000,000 W)

Now, Titan is a GPGPU HPC (HSA-based); your "green" HPC is not... want to bet the GPGPUs affect power consumption more than the Opterons that run alongside them?
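For what it's worth, putting the figures quoted above side by side as GFLOPS per watt (these are the numbers as posted here; the official Top500/Green500 figures may differ slightly):

```python
# Back-of-the-envelope GFLOPS/W from the figures quoted in this post;
# official Top500/Green500 numbers may differ slightly.
systems = {
    "'Green' HPC w/ Intel Xeons": (112_900, 45_000),           # GFLOPS, watts
    "Titan w/ AMD Opterons + GPUs": (17_590_000, 8_000_000),
}

for name, (gflops, watts) in systems.items():
    print(f"{name}: {gflops / watts:.2f} GFLOPS/W")

# 'Green' HPC w/ Intel Xeons: 2.51 GFLOPS/W
# Titan w/ AMD Opterons + GPUs: 2.20 GFLOPS/W
# Roughly the same ballpark, even though Titan is ~150x larger.
```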

 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


If you can wait 6-8 months, sure. Richland started shipping in January and it's still not on the shelves.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


The A15 is OK, but the true fight between ARM and Intel won't happen until the A57 is out. The playing field will be more level when both are 64-bit.

 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


The Phoronix review was testing a cluster against a single chip, and they add: "While this do-it-yourself ARM cluster configuration is not the most effective setup".

As stated before, there is a European project to build the fastest and most efficient supercomputer using ARM chips, because nothing from Intel could compete in performance per watt.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


A worthy objective, but you can't be the fastest and the most efficient at the same time. We're already at the point where you have to do more extreme and costly things to get performance up, like using all SSDs instead of hard drives, or MRAM cache instead of DRAM.

Look at the work Google has done to make their datacenters more efficient. It had nothing to do with the actual computers, but with the placement of the hot and cold ducts, automating how many AC units ramp up at a time based on load, or even using deep-sea ocean water for heat exchange.

There are many factors beyond CPU choice. If the supercomputer application favors GPGPU, is ARM anywhere near as efficient as an NVIDIA/AMD/Intel compute card?
 

mayankleoboy1

Distinguished
Aug 11, 2010
2,497
0
19,810


Sure, fastest, but is it the most efficient at the same time?
What is "efficiency" anyway? Perf/watt, perf/dollar, perf/joule, (perf/joule) * (perf/dollar)?

ARM HPC will work best in highly, embarrassingly parallel workloads. At that point, it's better to use a modern GPU, because like x86, ARM is just a CPU whose perf depends on one fat core. Why use one fat core when you can use 10 thin cores?
It is better to use a Mali/PowerVR/Adreno GPU than an ARM CPU (except these mobile solutions aren't compute-friendly yet, which is why HPC uses desktop GPUs).


because nothing from Intel could compete in performance per watt.

Except for the link I just provided above, the fact that the number of A15 cores in smartphones/tablets is astonishingly small, and that the Xeon Phi, which is pure x86, is used in the top Green HPC.
Also, do you mean "Intel" or "x86 in general" by this statement?
 