AMD CPU speculation... and expert conjecture


hcl123

Honorable
Mar 18, 2013
425
0
10,780


Conjectures!..

Intel is not doomed on the desktop, they just want *MOAR* of it for themselves and less for others. Their mainstream position is to do AIO DTs like laptops, mostly sold under the Intel brand... you will buy an all-Intel box, not just a processor + mobo: made by them, branded by them, possibly with their own endorsed Linux distro alongside windoze, and with little possibility of upgrading. DIY at Intel is to fade away for this kind of entry-level system; for performance there will still be the "extreme" CPUs and offerings (more expensive -> more lucrative)... if you don't like it, you stay away from them or put up with what they offer... see it as a strong-arm contest over the market... Intel wants to be the Samsung of the DT, from fab to chips to chipsets, all made by them and sold directly by them. (In a way it's a good strategy in a shrinking market: every system sold with an Intel-branded CPU is all profit for them... remember this industry is about *money*, not bickering contests.)

About clocks and overclocks... it all depends on the fab process. If AMD transitions to an all-"bulk" approach then they are caught in the same trend as Intel: they must focus on low power, perhaps do their AIO thing like Intel... and the BD design loses some of its advantage. Don't expect to see any chip on "bulk" above 4GHz, or at least not much above.

We don't know any of this yet, except that the first Kaveri SKUs this year are made on 28nm GloFo bulk... but if the positioning follows past launches, those are to be "mobile chips" (low power anyway). Among the fab conjectures, one thing that surprised me was IBM's 22nm PD-SOI... it must be PD-SOI for that eDRAM... which opens the possibility that the 28SHP that appeared in some GloFo charts is not dead, and is PD-SOI. Better, since the 28nm FD-SOI is to be a half-node shrink, it's possible the same litho front-end could be applied to 28nm PD-SOI; it's all "planar" tech, AMD already has good expertise with PD-SOI and most of the techniques are already developed, meaning a possible ~40% shrink compared with 32nm (a PD FX 8350 at ~200mm²) and better power overall.
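A back-of-envelope check of that shrink claim, using the publicly quoted ~315mm² size of the 32nm Orochi/Vishera die; the ~40% figure is the conjecture above, not a confirmed GloFo number:

```python
# Back-of-envelope die-area estimate for a hypothetical 28nm PD-SOI Piledriver port.
ORIGINAL_DIE_MM2 = 315.0   # 32nm Orochi/Vishera die (publicly quoted figure)
ASSUMED_SHRINK = 0.40      # ~40% smaller, per the speculation above

shrunk_die = ORIGINAL_DIE_MM2 * (1.0 - ASSUMED_SHRINK)
print(f"Estimated 28nm die: ~{shrunk_die:.0f} mm^2")  # ~189 mm^2, close to the ~200 mm^2 quoted
```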

Another fab development, which GloFo was rumored to be concentrating on, and which is one of IBM's main advantages in making >600mm² chips at 4GHz, is the interconnect. IBM has by far the best ultra-low-k interconnect on the market, with a dielectric merit value of 2. Elsewhere the current value is 2.5 or more; if AMD and GloFo could manage 2.2, this would open very good possibilities for high-clock designs with lower power, since 30% or more of a chip's power is wasted in the interconnect.
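To put rough numbers on that: interconnect switching power scales roughly with wire capacitance, which scales roughly with the dielectric constant k. A minimal sketch, assuming the ~30% figure above and the k values 2.5 vs 2.2:

```python
# Rough estimate of total chip power saved by a lower-k interconnect dielectric.
# Assumes interconnect dynamic power ~ wire capacitance ~ dielectric constant k,
# and that ~30% of total chip power goes into the interconnect (the post's figure).
k_current, k_target = 2.5, 2.2          # effective dielectric constants
interconnect_share = 0.30               # fraction of total chip power in the wires

cap_reduction = 1.0 - (k_target / k_current)        # ~12% less wire capacitance
total_saving = interconnect_share * cap_reduction   # ~3.6% of total chip power
print(f"Wire capacitance reduced by ~{cap_reduction:.0%}")
print(f"Total chip power reduced by ~{total_saving:.1%}")
```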

Putting it all together, 28nm PD-SOI with a ~40% shrink and ultra-low-k interconnects, AMD could have Centurion speeds on normal DT offerings... quite a bit above 4GHz... Centurion could go to 6GHz or above on Turbo, depending on whether the SR design is tweaked for more clock with more pipeline stages or not. Besides, it would enable pushing those big caches further, and probably a chip for DT/server will have 5 modules, or 4 modules but with 2 FlexFPUs each, yet stay below 300mm² and clock easily above 4GHz while staying under 125W.

Under those conditions, AMD would have the pure performance crown (they already do with Centurion http://www.xtremesystems.org/forums/showthread.php?286815-AMD-FX-9590-in-FlanK3rs-hands-and-in-review ).

So as a matter of fact they don't even need SR; a tweaked PD and a new fab process would be enough... perhaps that is why Warsaw is PD. Ultra-low-k and a few power-oriented tweaks, like Cyclos RCM applied to this *server* chip (RCM is not a good choice for parts intended to OC, which is not the case for server), would be enough to get quite better perf/W than now, yet maintain today's clocks if not raise them a little... it's all conjecture, but I tend to believe something along those lines: Warsaw is the Orochi server die re-done in a power-tweaked mode, AM3+ is EOL, but there will be a Steamroller FX/server part with fewer HTT links, fewer RAS features, 5 modules / 10 cores to battle IB/Hasfail-E, OC prone, more about DT and 1P server, in a new socket, by 2H 2014... and if the conjecture is any good, and if it's not too much to ask, it will have the possibility of "combo HTX+PCIe" slots with HTT 4.0, ready for hUMA/HSA with discrete parts. (edit)

IF AMD transitions to an all-bulk approach and clocks come down to get noticeably better perf/W... then the BD design is gonna start to look not that good (scalability potential aside, where it kicks all kinds of arse)... and their market share is going to shrink fast... But for Kaveri OTOH, the best option, even for DT, is FD-SOI... since FD-SOI also seems clearly the best option for a GPU. IF Hawaii were FD-SOI (I KNOW it's not) (as a matter of fact less expensive to fab than bulk), Nvidia's dear leader would have a heart attack lol... 1500MHz in a GPU of that size and power would be easy...

The transition to SR MCM server parts comes only in 2015.



 

8350rocks

Distinguished


According to recent studies:

High Performance Desktop is on the rise...

People are paying more for more performance, and that segment of desktop is expanding. It is driven almost entirely by PC Gamers.

In fact, this is the reason Intel is not killing off the Extreme series: even though it's a low-volume seller, they intend to eventually shift DIY PC builders to *that* platform for anything other than a weak SoC. The margins are higher, and the price gouging much greater, on the Extreme series stuff.

If AMD abandons that expanding market, then they're far more foolish than I would have ever guessed. They could easily win it back with a good FX steamroller with 8 cores performing on par with a 4770k. Hell, if they even got close to Ivy Bridge in single thread capability, people would buy it for the multitasking and threaded workloads.

In order to compete in that segment AMD needs to do just 2 things well:

1.) Get performance in the same ballpark. It doesn't have to be better, but if it got close to IB single thread performance people would be all over it. Especially considering the lame duck that is hasfail.

2.) They would need to be price competitive there...(I foresee this being easily done) as long as you could buy an AMD product for a bit less than an Intel product with similar performance, they would sell lots of them.

ARM will not replace high end x86 DT for gaming and other x86 workloads. Not likely in my lifetime anyway.

ARM for Micro Servers? Sure, you have software support.

ARM for HE DT? No...Crysis 3 will not play on an A57.

/end to silliness about ARM overtaking x86.

 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


HAHAHHAHAHAHHAHHAA!!!!!

[attached benchmark charts: 58179.png, 58180.png]


Let's see how great this "Bay Trail" is, since someone believes it's a decent processor. Not even remotely close to an Ivy Bridge CPU, but turbo lets it beat the 1.5GHz Jaguar (though not the 2.0GHz).

And Intel Bay Trail is a very good chip.

[attached benchmark chart: 58070.png]


Ya ... someone lied to you.

again: "The IPC of the A7 is on par with Ivy Bridge parts!"

HAHAHAHA!!!!HH!HAAA

So what about the state of Android x86 development?

Depending on where you were in the Android UI, there was some definite stutter, but I’m told this is a result of an issue with Dalvik not allocating threads to cores properly that Intel is still tuning,

Ya, apparently it's unfinished, so comparing Android x86 to ARM isn't a straight-across comparison yet.

But ARM finally getting close to the Intel Atom puts it just above VIA chips and nowhere near true x86 performance.

And of course, as with all ARM CPU reviews, no power numbers. We are just supposed to assume that it's massively more efficient.

P.S. TOMS QUIT SCREWING WITH THE FORUMS, edit screen is bunked again, worse than before.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780
Thank you 8350rocks, it seems I was misunderstood but you got my back.

6-core and 8-core CPUs are about 2.6% of all gamers on the Steam Hardware Survey.

AMD's GPUs are their most profitable business.

The 7970 makes up 0.7% of the market on the Steam Hardware Survey.
The 7950 makes up 1.1% of the market on the Steam Hardware Survey.

That is less than 6 core and 8 core marketshare combined. Yet AMD makes money off of 1.8% of the market with Tahiti GPU and they are not abandoning that market. In fact, AMD is making Hawaii an even bigger die.

If AMD could maintain the die-to-profit ratio of their GPUs with their CPUs, their CPU business would be just fine. A 4m/8c SR FX at ~300mm² that performed well enough to be sold at $399 or so, with disabled-module chips going for $249 or $299, would let AMD hit good price points per die size, and they should make decent profits off of those CPUs.
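As a rough illustration of that price-per-die-area argument, here's a minimal sketch; the Tahiti die size and launch prices are approximate public figures (street prices in 2013 were lower), and the SR FX numbers are the speculation above:

```python
# Rough price-per-mm^2 comparison (illustrative only; SR FX figures are speculative).
TAHITI_MM2 = 352.0        # approximate Tahiti die size
SR_FX_MM2 = 300.0         # hypothetical 4m/8c Steamroller FX die from the post

parts = {
    "HD 7970 (Tahiti, launch price)":  (549, TAHITI_MM2),
    "HD 7950 (Tahiti, launch price)":  (449, TAHITI_MM2),
    "SR FX full die (speculative)":    (399, SR_FX_MM2),
    "SR FX salvage die (speculative)": (249, SR_FX_MM2),
}

for name, (price_usd, area_mm2) in parts.items():
    print(f"{name:33s} ~${price_usd / area_mm2:.2f} per mm^2")
```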

High-end desktop sales are doing well. The data gets fuzzy because it's hard to find data on who is building computers versus who is buying Dell or Alienware gaming desktops. Looking at Dell, Lenovo, etc. desktop shipments and then declaring "DT is DYING" implies that DT sales are down because no one wants to buy a DT, as opposed to people building DTs for other people.

I do not know a single person who buys pre-built desktops anymore. I build DTs for everyone who needs one. Unlike in the past, when Dell would offer good deals for buying in bulk so it'd be the same price or cheaper than building yourself, that isn't the case anymore. If you buy a pre-built gaming rig that's not high end, you'll end up with a 4770k or 4670k and a mid-range GPU like a GT 650. A better gaming rig is a lower-end CPU with a 7950 or the Nvidia equivalent. Nearly everyone who can read a bar graph and looks up hardware reviews knows this.

Also, people are not skeptical enough about the data we get regarding desktop sales. Intel wants to sell disposable devices that are obsolete in a year. They really, really want into the tablet and phone market, as well as laptops. Those are all toys which you cannot upgrade and must throw out to "upgrade". Compare that with an Intel gaming desktop: it can be upgraded, and even if you're on Nehalem or something, your upgrade is more than likely going to be a new graphics card, which Intel gets no money from. With laptops, mobile, etc., Intel instead gets money for selling a new CPU when someone's laptop GPU is not strong enough.

It is quite simple. The CEOs of these companies are not stupid. They have been selling upgradeable, serviceable products for years and Apple shows up, sells un-upgradeable junk which goes into the trash after a year or two, and Apple stock and company value crushes Intel, AMD, IBM, basically every company that makes serious products that people like.

Disposable products are where the money is. Why do you think the new Mac Pro is the way it is? You can't really upgrade it, it's meant to be bought, tossed, and replaced instead of upgraded. And that's how Intel, Apple, etc make more money.

This is what Intel wants, and they will do what they can to make things look like they're going that way. I feel that some of you are underestimating the sway Intel has over consumers' opinions. This is the company that has made TDP one of the most important factors to some people for high-end gaming desktops. If they can get people to think that 100W of power usage and 55W of extra heat is a big enough factor to make a product horrible, I see no reason why they can't influence popular opinion into thinking all of DT is dying, to help further their push for disposable junk products which result in higher profits.
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780
Interesting... what is the "system" of those Silvermonts?

The Apple one is for sure a smartphone... are there any smartphones or tablets with BailTrail already?

[EDIT: BTW, is there an Android or a full iOS AArch64 system yet?... if anything runs 32-bit (subsystem etc.) on the A57, it will run considerably slower since it's kind of "emulated" -> ARMv8 is a totally new ISA]

Comparing a mini AIO system with a smartphone or a Jaguar tablet is more like oranges to apples lol

No need for more rants... the usual suspect, the usual crap... though that bailtrail is a very good evolution, the most relevant thing is that an A57 (tested in a phone form factor, with phone constraints) is quite a bit more performant than a Jaguar tablet, and so more than probably more performant than BailTrail too... just wait for Qualcomm and Samsung to have a say...

Not to mention STMicro with its FD-SOI things, which on the A9 means 2.6GHz for a quad in a phone form factor and constraints... a tweaked A57 quad ("supposedly" the A57 needs quite a bit less power for the same performance as the A15) for a mini-AIO-like box would be 3GHz or above (compare with 1.49GHz lol) at around 2-3W... more than triple the performance of the 2.6GHz A9... no, I'm not exaggerating, only we don't know of any plans from STMicro, or any intentions from others to adopt FD-SOI... perhaps Qualcomm could have it... because one thing is clear, the 28nm (TSMC? Samsung?) "planar bulk" of Apple is showing its age against Intel's 22nm FinFET... FD-SOI could more than reverse that... and also the FP power and the Mali GPU are too small and constrained for Octane or the Firefox multimedia features to shine (Tegra 5 on FD-SOI would kick all kinds of bail arse)

But good to see more *good* competition... now the only thing missing is AArch64 on DT, for Intel to have full company, since in server they are already *very* worried... that is why the 22nm Silvermont.
 

8350rocks

Distinguished


+1 Intel is all for doing the consumer a disservice.
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


Arguably AMD already does those 2 points as is (at least in my corner I can buy an FX8350 for 20 or so less than any i5, and the performance is quite a bit better). If they get the single-thread performance on top of IB or HSW (which is not even 10% better), meaning not much, another 15 to 20% better, and maintain or improve the clock advantage of SOI, which could be >30% at the power point of a performance DT (~125/130W -> Intel agrees with me, just look at the power of their 6-core parts), then a beating is in sight, more severe if we consider multitasking and multithreading (which are clearly the much more important features).
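Just to make the compounding explicit, here's a minimal sketch treating those figures (a hoped-for 15-20% per-clock gain plus a ~30% clock advantage) purely as assumed inputs; single-thread performance is modeled simply as per-clock throughput times clock:

```python
# Toy single-thread model: performance ~ per-clock throughput x clock speed.
# Both inputs below are the post's speculative figures, not measured data.
clock_advantage = 0.30                 # assumed SOI clock headroom at ~125W

for ipc_gain in (0.15, 0.20):          # hoped-for per-clock improvement
    combined = (1 + ipc_gain) * (1 + clock_advantage)
    print(f"{ipc_gain:.0%} per-clock gain x {clock_advantage:.0%} clock advantage "
          f"-> {combined:.2f}x combined single-thread factor")
```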

No, the most important thing of all is exactly HSA... which in extreme cases could give 10x (1000%) or more better performance than a pure CPU solution... this is more important than uarch, and than fab process... the killer feature for a high-performance DT from AMD would be exactly to have combo HTX+PCIe slots, ready for discrete parts with full hUMA/HSA capabilities.




Do you wanna bet ? lol

At least looking at the ARMv8 ISA and its technical features, it seems they are eyeing exactly that possibility... DT... more than server, which misses HTM (hardware transactional memory), perhaps a feature left to the discretion of uarch licensees. Another feature missing that could be addressed in a very small upgrade is better "vectors"; 256-bit in packed format would be more than enough for a CPU...

About games and multimedia etc... I also think ARM is thinking differently from you... there could be games for HSA, that is, "compute"-centered, which means there could be games for ARM, since that "compute" orientation is exactly the trend and ARM is HSA also.

Whether it will ever overtake x86 is another story... it depends on how fast and how far that software ecosystem grows (edit)... but considering all the points that go beyond design and CPU uarch, it's the first time in a long while that I see a good possibility of that happening in the medium to long term (replacing x86 as the dominant ISA)... which doesn't mean x86 will disappear or cease to be a good solution for many markets. That is the problem with internet fanboyism and the like, it's always all or nothing; x86 won't disappear for the foreseeable future, even if it stops being the dominant ISA.

 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


+1

he he he he !... careful you might yet end up CEO of one of those companies.

Yet I don't think it will turn out the way Intel intends for the DT... the easily gullible masses are more about mobile in any case... in DT they would find better-informed, much harder to twist masses. (edit)

Yet I think they know this, and it doesn't invalidate the merit of the un-upgradeable junk; *enterprise* is a good escape route, since those buy lots of over-priced junk to start with, as long as the contracts include guarantees with support services (edit), and their paradigm is clearly client/server centered... those customers also value "brand" over technical analysis, which is why I think "we" will get Intel AIO boxes: it facilitates contracts, it facilitates support, and enterprise rarely upgrades, much more often they replace (easier since it's a client/server environment).

 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


The CPU of the Kaveri APU is at the i5 level. That is high-performance level. And the HSA APU, as a whole, is above the i7 level. That is also high-performance.

I doubt Intel makes lots of money from the eXtreme series; it is more a marketing tactic, like the Titan is for Nvidia. If AMD releases an enthusiast CPU line (which I doubt), it will be priced at the eXtreme series level. Regarding graphics cards, AMD already said that its next gen (about to be presented) will not be focusing on the high-end top-level market, but one level below that.

As you know, AMD won the consoles contract for Sony and Microsoft. Do you know that MIPS and PowerPC were soon eliminated from the race, but ARM was in close competition with x86?

Then it came down to ARM versus X86 architecture. I am told there was a technical “bake-off”, where prototype silicon was tested against each other across a myriad of application-based and synthetic benchmarks. At the end of the bake-off, ARM was deemed as not having the right kind of horsepower and that its 64-bit architecture wasn’t ready soon enough. 64-bit was important as it maximized memory addressability, and the next gen console needed to run multiple apps, operating systems and hypervisors. ARM-based architectures will soon get as powerful as AMD’s Jaguar cores, but not when Sony or Microsoft needed them for their new consoles.

That future is here. ARM64 is ready and the A57 core provides the same performance as a Jaguar core but with increased efficiency. That is why AMD is replacing their own Jaguar servers with the new ARM servers.

Moreover, custom ARM cores will be much faster than the A57. Game developers such as J. Carmack have already shown their public support for Nvidia's Project Denver. You can read about the project here. It is aimed at replacing x86 in desktops, servers, and supercomputers.

And many in the game industry are convinced that the next-gen consoles (PS5...) will all be ARM based.

Here some recent PC sales data

http://www.upi.com/Science_News/Technology/2013/04/10/PC-sales-in-free-fall/UPI-82731365637852/

Here AMD stating again that their focus is mobile and game consoles

http://www.thehindubusinessline.com/industry-and-economy/info-tech/amd-focuses-on-mobile-devices-gaming-markets/article4978016.ece



LOL You are very seriously misguided.

First, desktops will not be using phone-rated ARM chips, but desktop-rated ARM chips. Apple will not be using a 1W dual-core 1.6GHz phone chip in a desktop.

Do you understand the difference between a Temash chip and an FX-9590? Your hahaha approach is as misguided as if you had looked at the raw performance of a Temash chip (a tablet-rated chip) and concluded that the x86 architecture is not powerful enough for a high-end PC and that no PC would be using a 220W FX-9590.

Second, the article where you got some of the figures from also says:

In multithreaded integer workloads, the Z3770 gets dangerously close to Ivy Bridge levels of performance.

"Dangerously close" is the correct term because this is a phone-level rated chip. This means that you can have about 40%-80% of the performance of a modern laptop in a... smartphone.

Third, the Z3770 only consumes 1W-2.5W in those tests. Yes, this is not a typo, it is single-digit power consumption, whereas the Ivy Bridge chips consumed about 9x more. This means the new Silvermont design is about 4x-8x more efficient than Ivy Bridge.
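A quick sanity check of that efficiency claim, treating the article's rough figures (40%-80% of the dual-core Ivy Bridge's performance at roughly 1/9 of its power) purely as assumed inputs:

```python
# Perf/W sanity check for the Bay Trail vs. Ivy Bridge claim above.
# Inputs are the article's rough figures, taken here as assumptions.
power_ratio = 9.0                  # Ivy Bridge drew roughly 9x the power in those tests

for perf in (0.40, 0.80):          # Z3770 performance as a fraction of the i5-3317U
    perf_per_watt_gain = perf * power_ratio
    print(f"{perf:.0%} of the performance at 1/{power_ratio:.0f} the power "
          f"-> ~{perf_per_watt_gain:.1f}x better perf/W")
```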

Some say "wow" you say hahahaha
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


32-bit software still runs about 30% faster than on the previous A15. Of course, the real improvement in performance is obtained with native 64-bit software.



ARM is praising FD-SOI. I think that some 3GHz ARM chips have been tested. A hypothetical quad-core A57 @3.5GHz would offer about the same performance as 4 Steamroller cores @4GHz. Nvidia has promised its custom cores are faster than an A57. There are rumours that its chip is rated @2.6GHz.
 

8350rocks

Distinguished


You are missing something with this rant about ARM.

ARM is RISC, x86 is CISC.

PowerPC is RISC, and the only reason Apple abandoned the PowerPC CPU architecture was because it was RISC and could not perform many of the things they wanted their OS and software to do. Remember when a slower-clocked Mac would blow a much faster Windows machine out of the water? In those days they ran the PowerPC architecture... not x86. The last Mac OS from Apple that supports PowerPC is OS X Leopard, IIRC (roughly 6-8 years old now).

ARM may be potentially faster at what it can do... however, the x86 Jaguar cores, and particularly the upcoming Puma cores, are capable of doing more because they are Complex Instruction Set (CISC) chips... whereas ARM is a Reduced Instruction Set (RISC) chip.

That's partly why ARM software has not taken off like a rocket... otherwise, if ARM were faster, more efficient, and flat-out better across the board... there would have already been widespread adoption of it a long time ago. However, x86 has superior FMAC performance, and does many things better than ARM (it also does things ARM cannot yet do). That doesn't mean instruction sets cannot be added to ARM to get it there, but that will take time. Plus, as we see in x86, once instruction sets come out it takes a few years for them to gain widespread adoption in software, even with a development community as massive as x86 has. ARM has probably 1/100th the support x86 has for commercially available software for public end users.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Sorry, but it is not a RISC vs CISC thing. If it were only that, we would also be considering MIPS, SPARC, PowerPC,...

Apple didn't abandon PowerPC because it was RISC, but because of disagreements over the development of the chip. Although someone from IBM claims that it was about the money.

http://news.cnet.com/8301-13924_3-10264290-64.html

Similar story with the consoles. It isn't RISC vs CISC. Sony and Microsoft abandoned PowerPC and neglected MIPS, but seriously considered ARM until the last minute. In the end ARM didn't make the cut, but it was close. Today ARM would make the cut.

As I already said to you, and AMD also did... the A57 offers the same raw performance as Jaguar. This performance is measured in real workloads (e.g. how many seconds it takes to compute this or that). It is not measured by how many low-level instructions one design or the other runs to finish the task. That is why ARM has been selected for building an exascale supercomputer, whereas x86 was rejected.

ARM has not taken over the world because it was only 32-bit and only designed for phones/tablets and similar devices (printers, modems...). And guess what? ARM has taken that part of the world almost entirely.

ARM spent so many years designing the 64-bit version because they didn't simply take the easy way (the AMD/Intel way) of offering a 64-bit-wide version of the 32-bit architecture. No. They redesigned the architecture from top to bottom, cleaning up the entire design and optimizing still further what was already optimal. The new ARM64 is beyond x86_64. That is why Intel's best attempt (Silvermont) was humiliated by Apple in five minutes.

The new A57 core is the first ARM core specifically designed for servers, and guess what? It is taking servers. The design is so good that AMD is replacing its own Jaguar-based servers with ARM servers. AMD claims that ARM will win over x86:

[attached AMD slide: why-arm-will-win.jpg]


And Nvidia claims ARM will take desktops, servers, and supercomputers:

http://blogs.nvidia.com/blog/2011/01/05/project-denver-processor-to-usher-in-new-era-of-computing/

I add: "and next gen consoles". I promise you that Apple will abandon x86 for ARM.

I don't know what you mean by ARM software, but there is no problem with porting existing desktop software from x86 to ARM:

http://en.wikipedia.org/wiki/Debian#Stable_ports
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
For all those who still believe that ARM is only about phones, I believe the Nvidia blog is a good place to start:

NVIDIA's project Denver will usher in a new era for computing by extending the performance range of the ARM instruction-set architecture, enabling the ARM architecture to cover a larger portion of the computing space. Coupled with an NVIDIA GPU, it will provide the heterogeneous computing platform of the future by combining a standard architecture with awesome performance and energy efficiency.

ARM is already the standard architecture for mobile devices. Project Denver extends the range of ARM systems upward to PCs, data center servers, and supercomputers. ARM’s modern architecture, open business model, and vibrant eco-system have led to its pervasiveness in cell phones, tablets, and other embedded devices. Denver is the catalyst that will enable these same factors to propel ARM to become pervasive in higher-end systems.

Denver frees PCs, workstations and servers from the hegemony and inefficiency of the x86 architecture. For several years, makers of high-end computing platforms have had no choice about instruction-set architecture. The only option was the x86 instruction set with variable-length instructions, a small register set, and other features that interfered with modern compiler optimizations, required a larger area for instruction decoding, and substantially reduced energy efficiency.

Denver provides a choice. System builders can now choose a high-performance processor based on a RISC instruction set with modern features such as fixed-width instructions, predication, and a large general register file. These features enable advanced compiler techniques and simplify implementation, ultimately leading to higher performance and a more energy-efficient processor.

Meanwhile, I like the AMD Seattle and HieroFalcon chips. I am anxiously waiting for an HSA ARM APU (with custom cores).
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


That exactly misses the point I'm trying to make.

If gaming PCs make up 25% of all those rigs and they increased by 10% (to 27.5%), while the other 75% of DT dropped by 30%, it would still make the overall DT numbers look worse even though gaming DT numbers are growing. It is kind of a BS propaganda number that, like I said before, can be used to convince people that gaming PCs and such are on the decline.
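The mix effect is easy to see with those same (purely illustrative) numbers:

```python
# Illustrative market-mix arithmetic from the figures above (not real market data).
gaming_share, other_share = 0.25, 0.75      # starting split of desktop shipments
gaming_growth, other_decline = 0.10, 0.30   # assumed year-over-year changes

new_gaming = gaming_share * (1 + gaming_growth)   # 0.275 of the old total
new_other = other_share * (1 - other_decline)     # 0.525 of the old total
new_total = new_gaming + new_other                # 0.80 -> "DT down 20%" overall

print(f"Total desktop market: {new_total:.0%} of its former size")
print(f"Gaming desktops' share of the new total: {new_gaming / new_total:.1%}")
```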

This agenda does not surprise me, as Intel wants to kill overclocking and dGPUs, gimps "unlocked" chips by disabling features, prices what they call the enthusiast platform way above what most gaming PC people spend, etc.

As for ARM moving into the DT market, uhhhh, why do ARM and the ARM CPU vendors want into the DT market if it's dying like Intel and everyone keep saying? Nvidia wants to rush into a dying market? So they can play Crysis 3 and BF4 on their ARM CPUs?

An ARM CPU has a massive hurdle to overcome: even if it is faster, it's going to alienate a lot of folks because it won't have software. If it is just as fast as an i5, who cares, if it can't run Photoshop?

And what OS will they put on it? Linux? Windows RT? There are major hurdles. I can see it doing very well in servers powered by Linux, and I'm sure MS will want to get in on it too, but for consumers we will still need x86. And trust me, I think x86 is an abomination. I did x86, SPARC, and MIPS assembly in college and we started with x86. After moving to SPARC I wondered what the hell anyone was thinking when they made x86. It felt like a gigantic mess of just throwing stuff on top of more stuff on top of more stuff.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810
ARMv8 does move ARM up into the server arena, but they still have a ways to go. Anand's review of the A7/iPhone 5s shed a lot of light on the architecture's potential performance. The A7 is just a dual-core chip, though, and at over 1 billion transistors it is close to the Sandy Bridge i7 in transistor count. ARM cores plus graphics aren't tiny anymore. And it does make you wonder why they need so many transistors when x86 quad cores from Intel/AMD are in the same 1-billion-transistor ballpark.

The more competition ARM provides the better. It will force Intel to start charging more reasonable prices.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Of course anyone making ARM chips will say that. When you see IBM and Oracle abandoning their respective supercomputer chips I might believe it. Meanwhile they're both launching massive upgrades in their respective product lines.

The IBM Power8 is a 96 thread beast. 12 cores, 8 threads each.
The Oracle Sparc T5 is a 128 thread giant. 16 cores, 8 threads each

ARM may do well running basic web servers but not much work (pain sweat n tears) has been done to scale them yet. Wishful thinking is one thing. Making it happen can take another decade. They're really just BBOSNS (Big Box Of Shared Nothing Servers). There's a lot more to supercomputers than just the CPUs.

We'll see what happens to the power efficiency of ARM when they have to add much faster interconnects, bigger caches, quad channel memory and higher clock speeds.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Is Mont Blanc the project you're referring to? They just posted an update this week.

http://www.montblanc-project.eu/sites/default/files/sites/default/files/press-releases/The%20Mont-Blanc%20approach%20towards%20Exascale-EuroMPI2013.pdf

They had some interesting results. You can cluster a bunch of cell phones, but they conceded that Intel is still more power efficient at higher clock speeds.

"Intel still more energy efficient at highest performance"
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


With just a handful of benchmarks it's much too early to make those kinds of comparisons.

I ran the Kraken one on my PC and it only loaded my CPU to 25%. Scores ranged from 1305ms with Chrome, to 1405ms with Firefox, to 4263ms with IE10. That's just as much a browser test as it is a CPU test.
 
@blackkstar

It's actually much worse than that. CPUs need motherboards and other circuitry. So what chipset do people use with their i3/i5/i7 CPUs? What network controller tends to be on those boards?

Intel makes money two to three times over: once on the CPU, then again on the chipset and the network controller. Hence replacing an AIO setup would have you paying for all of that over again.
 

griptwister

Distinguished
Oct 7, 2012
1,437
0
19,460


+1 to the both of you.

Look on the bright side though... 6 more days till we find out what AMD is planning exactly with these new GPUs!
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
@juan

AGAIN LOL,

In multithreaded integer workloads, the Z3770 gets dangerously close to Ivy Bridge levels of performance. Again, we're overstating Bay Trail's performance here as the Z3770 has four cores while the Core i5-3317U only has two (but with Hyper Threading presenting another 2 virtual cores). I don't believe most tablet workloads are heavily threaded integer workloads, however the world is hardly single threaded anymore. The reality is that a quad-core Bay Trail should perform somewhere between 40% - 80% of a dual-core Ivy Bridge

Wow, that's impressive, you only took the first sentence and tried to defend yourself. 4 cores to equal 40-80% of a dual-core Ivy Bridge... you fail miserably.
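To put the per-core gap in numbers, here's a minimal sketch that takes the article's 40%-80% chip-level range as an assumed input and simply ignores Hyper-Threading and clock differences:

```python
# Per-core back-of-envelope from the quoted 40%-80% figure (assumed, not measured).
# Quad-core Z3770 vs. dual-core (4-thread) i5-3317U; HT and clocks ignored for simplicity.
baytrail_cores, ivy_cores = 4, 2

for chip_ratio in (0.40, 0.80):
    per_core_ratio = chip_ratio * ivy_cores / baytrail_cores
    print(f"Chip-level {chip_ratio:.0%} -> roughly {per_core_ratio:.0%} "
          f"of an Ivy Bridge core per Silvermont core")
```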

[attached benchmark chart: 58111.png]


That's funny, looking at that same test, AMD >>> Ivy Bridge.

You can continue to anxiously wait for your mediocre-performance, non-compliant products all you want.

I can also see you bought Intel's new SDP gimmick for CPU power... 2 watts SDP != 2W TDP.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I don't have any data showing that gaming PCs (I mean gaming-oriented desktops) are increasing while the rest of the desktop market drops. Do you? At best, I can concede that gaming on a PC has been rising and will continue to rise over the next years. But most of that PC gaming is done on ordinary laptops, not on dedicated gaming desktops, not even on dedicated gaming laptops.

ARM is not focusing on a dying market such as the DT, because that makes no sense. ARM is moving into servers and HPC. But once you have a chip that runs in a high-performance server or a supercomputer, you can use that chip for the desktop as well. It is a welcome bonus.

Regarding Crysis 3 and BF4, as I already said, ARM was close to being inside the current PS4/Xbox One. I am convinced that the next-gen consoles will be ARM.

Once Apple switches to ARM, all its software/ecosystem will switch as well. GIMP already runs on ARM. Photoshop could run as well if there is a market.

Windows is dying as well. AMD broke exclusivity with Microsoft, and this week Intel announced the same. Intel is joining with Google now. The only player who believes in RT is Nvidia, and I believe they are making a huge mistake. The future is Linux. Gaming will be done under Linux. Valve is about to present their gaming PC/console and it will use Linux:

http://techcrunch.com/2013/09/16/valve-ceo-gabe-newell-says-linux-is-the-future-of-gaming-hints-at-steambox-announcement/

Yes, ARM assembly is simpler and more elegant than x86.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


The important thing is not who is saying it, but _what_ is being said. IBM and Oracle are updating their products to compete, but both will fail. Being replaced by another arch/chipmaker doesn't mean that IBM and Oracle will abandon their own chips. Look at Apple abandoning IBM: IBM has continued with its PowerPC line. Look at Sony/Microsoft abandoning IBM for consoles: IBM continues with its PowerPC line.

IBM once held the crown of the HPC market, but no more. They lost both the Top500 and the Green500 crowns. IBM continues with its PowerPC line. However, IBM has just joined with Nvidia to develop heterogeneous products, because it cannot compete with CPUs alone (despite the fact that the Power8 is an excellent chip).



Excellent work of selective quoting and complete misunderstanding. You are quoting from slide 15 on "multicore performance". The phrase immediately above says "ARM multicores as efficient as Intel at the same frequency". They are comparing the old phone-level Tegra 2 and Tegra 3 chips to an i7. The Tegra chips are thermally constrained to phone-level power consumption. The i7 is not thermally constrained.

Look at the figures in the same slide 15. Up to ~1.3GHz the phone Tegra 3 offers the same efficiency as the i7. But the phone-level chip doesn't run at higher frequencies (it is not designed for that, it is a phone chip), whereas the i7 continues scaling to higher frequencies (it is not thermally constrained to single-digit watts like the phone chips). At about 2.5GHz, the i7 has a small efficiency advantage of 0.1 over the Tegra 3.

Mont Blanc is using Tegra 2/3 in the prototype. But Parker is the chip that will be used in the final supercomputer:

[attached Nvidia Tegra roadmap: TegraRoadmap copy_678x452.png]


Now return to the Mont Blanc presentation and pay attention to slide 14 on "Single Core performance". The Tegra chips were more efficient than Intel in single-core performance: "ARM platforms more energy-efficient than Intel platform".

Now pay attention to this, from the same slide 14: "Intel Core i7 is ~3x better than ARM Cortex-A15 at maximum frequency". Pay attention to the figure on the left in the same slide. The Intel i7 is ~7x faster than Tegra 2. Now look at the Tegra roadmap again: Parker is about 100x faster than Tegra 2.

The final Mont Blanc supercomputer will use Parker or something better. If you look at the goals of the project in slide 5, they plan to offer a supercomputer that is ~6x faster than the top supercomputer today (Xeon + Xeon Phi) while consuming only ~50% of the power.
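Putting those quoted goals and ratios together as plain arithmetic (all inputs are the slide/roadmap figures as read above, taken as assumptions):

```python
# Rough implications of the Mont-Blanc goals and the ratios quoted above.
# All inputs are figures quoted from the slides/roadmap, not measurements.
speedup_vs_top = 6.0        # goal: ~6x faster than today's top Xeon + Xeon Phi system
power_fraction = 0.5        # goal: ~50% of that system's power

print(f"Implied perf/W goal: ~{speedup_vs_top / power_fraction:.0f}x today's top system")

i7_vs_tegra2 = 7.0          # slide 14: the i7 is ~7x faster than Tegra 2 (single core)
parker_vs_tegra2 = 100.0    # Tegra roadmap: Parker ~100x Tegra 2
print(f"Implied Parker vs. that i7: ~{parker_vs_tegra2 / i7_vs_tegra2:.0f}x")
```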
 