AMD CPU speculation... and expert conjecture


blackkstar

Honorable
Sep 30, 2012
468
0
10,780


The FX 9370 is going to retail at under $270; the 3930K goes for $550.

The 3930K is literally twice as expensive, yet it merely ties with a stock chip.

Also, you're ignoring my point about SR being 20% faster per clock and competing extremely well with the 3930K.

Let's do some math!

The FX 8350 at 4GHz gets an average of 60fps.
SR with a 20% per-clock increase at 4GHz would get 72fps.
SR with a further 25% clock increase to 5GHz would get 90fps.
A 2600K at 5GHz would get 88fps.
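A quick sketch of that arithmetic in code (assuming performance scales linearly with both per-clock throughput and frequency, which is an idealization):

```python
# Scaling arithmetic from the post, assuming performance scales
# linearly with both per-clock throughput and clock speed (idealized).
base_fps = 60.0                      # FX 8350 @ 4 GHz (figure from the post)

sr_4ghz = base_fps * 1.20            # +20% per clock -> 72 fps
sr_5ghz = sr_4ghz * (5.0 / 4.0)      # +25% clock     -> 90 fps

print(f"SR @ 4 GHz: {sr_4ghz:.0f} fps")   # 72
print(f"SR @ 5 GHz: {sr_5ghz:.0f} fps")   # 90
```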

The 3770K and 4770K can rarely get close to 5GHz, and their IPC more than likely wouldn't make up for the clock speed deficit.

Of course, you're forgetting that the FX 9370 is supposed to come in at under $270, making the:

3930K 111% more expensive
3770K 18.5% more expensive
4770K 26% more expensive

Assuming you instead buy an FX 8320 and OC it to 4.8GHz:

3930K 256% more expensive
3770K 100% more expensive
4770K 112% more expensive.
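For reference, a minimal sketch of the price-premium arithmetic. The Intel price is from the post; the AMD baselines are my assumptions, back-solved from the quoted percentages:

```python
# Price-premium arithmetic. The Intel price ($550 for the 3930K) is from
# the post; the AMD baselines (~$260 for the FX 9370, ~$155 for the
# FX 8320) are assumptions back-solved from the quoted percentages.
def premium_pct(price: float, baseline: float) -> float:
    """How much more expensive `price` is than `baseline`, in percent."""
    return (price / baseline - 1) * 100

print(f"3930K vs FX 9370: +{premium_pct(550, 260):.0f}%")  # ~+112% ("111%")
print(f"3930K vs FX 8320: +{premium_pct(550, 155):.0f}%")  # ~+255% ("256%")
```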

Also, you're implying that everyone will overclock. AMD shipped high-TDP stock parts at clocks most people only reach when overclocking; it's still a stock part.

If Intel wanted to release higher-TDP chips in the 4GHz range they could, but they never will.

EDIT: Also, this graph is misleading. The AMD 9000-series parts are marked with their turbo frequencies while the Intel parts are marked with base frequencies.

The FX 9370 actually has a 4.4GHz base clock, which lines up better with the scores we're seeing (about 10% faster) as opposed to the almost 20% implied by the 4.0GHz and 4.7GHz figures for the 8350 and 9370 respectively.
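The clock deltas behind that point, worked out in a small sketch (frequencies as quoted above):

```python
# Clock-delta arithmetic behind the "misleading graph" point.
fx8350_base  = 4.0   # GHz
fx9370_base  = 4.4   # GHz (base)
fx9370_turbo = 4.7   # GHz (turbo)

print(f"base vs base:  +{(fx9370_base  / fx8350_base - 1) * 100:.0f}%")   # +10%
print(f"turbo vs base: +{(fx9370_turbo / fx8350_base - 1) * 100:.1f}%")   # +17.5%
```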
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


Since you can't do a simple task, I'll do it for you.

http://www.phoronix.com/scan.php?page=article&item=phoronix_effimass_cluster&num=1

Last week I shared my plans to build a low-cost, 12-core, 30-watt ARMv7 cluster running Ubuntu Linux. The ARM cluster that is built around the PandaBoard ES development boards is now online and producing results... Quite surprising results actually for a low-power Cortex-A9 compute cluster. Results include performance-per-Watt comparisons to Intel Atom and Ivy Bridge processors along with AMD's Fusion APU.

page 11:

The average power consumption for the Ivy Bridge system was 107 Watts under this load, which worked out to 2.58 Mop/s per Watt. For this workload the Ivy Bridge system was even more efficient than the PandaBoard ES at 1.78 Mop/s per Watt, and that was with an SSD and other attached components requiring additional power compared to the ARM setup.

page 12

The efficiency was at 85 Mop/s per Watt compared to the Effimaß cluster at 30.79 Mop/s per Watt.

So an Ivy Bridge DT CPU is roughly 45% to 176% MORE efficient than the 12-core ARM Cortex-A9 cluster, and that isn't even a high-efficiency server CPU. The AMD comparison was also dated: a 40nm Bobcat was used, not a Temash/Kabini.
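Working those ratios out from the quoted Mop/s-per-Watt figures:

```python
# Performance-per-Watt ratios from the quoted Phoronix figures.
results = {
    "page 11": (2.58, 1.78),    # (Ivy Bridge, PandaBoard ES) in Mop/s per Watt
    "page 12": (85.0, 30.79),   # (Ivy Bridge, Effimass cluster)
}
for page, (ivy, arm) in results.items():
    print(f"{page}: Ivy Bridge is {(ivy / arm - 1) * 100:.0f}% more efficient")
# -> ~45% (page 11) and ~176% (page 12)
```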

"oh what about the A15"



Well, at times the speed of an A15 is only 27% of the i3 330M, and at best 67% on SciMark... remind me again what generation the 330M is? And the power test... not actually done, as usual; just assumptions.

The Core i3 330M has a 35 Watt TDP while the Exynos 5 Dual operates within a few Watt envelope. Unfortunately due to the varying displays and other hardware differences, an easy power consumption / performance-per-Watt comparison couldn't be done for this article.

This is of course assuming the 330M runs at 35W all the time (if it even reaches 35W). It also isn't the 3380M, which is likewise 35W but runs at 2.9GHz instead of 2.13GHz, and which would have blown the Cortex-A15 away in terms of raw performance.
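The raw clock gap alone, as a trivial sketch (figures from the post):

```python
# Raw clock advantage of the i3-3380M (2.9 GHz) over the i3-330M
# (2.13 GHz) at the same 35 W TDP, per the post.
print(f"i3-3380M clock advantage: +{(2.9 / 2.13 - 1) * 100:.0f}%")  # ~+36%
```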
 

8350rocks

Distinguished


First, I have never insulted you, nor have I ranted.

Second, sure, A7/A9/A15...we have those numbers, and DT CPUs blow them away. It's not even close; when I played baseball in college, we called that a "boat race".

I am talking about your theoretical performance projections about "the A57 will be <insert crazy claim with no proof here>"...and so are noob and several others.

By the way, if the A57 is anything like the aforementioned ARM CPUs in performance and power consumption, then I hope you have a good recipe for crow, because you will surely be eating a heaping helping of it when the dust settles if the A57 doesn't deliver something like 200%+ greater performance at similar tasks...

That was my point all along...though you seem to gloss over it and resort to name-calling. Don't be surprised that insults fly when you talk in such a condescending manner. You talk to everyone who doesn't believe ARM will rule the world in the next 6 months as if they're of low intelligence...it's borderline insulting in many instances.

EDIT: Just a note, but if you actually read the Phoronix review of 12 ARM cores against Intel and AMD architectures, a dual-core embedded AMD E-350 is about on par with 5 ARM cores.

The ARM cores were clocked at 1.2 GHz and the E-350 at 1.5 GHz.

Sure, the ARM cores took less power...however, the E-350 still beat 4 of them in performance.

The 3770K even beats the ARM setup in the parallel benchmarks, no less, and there were 12 ARM cores!

As I said earlier, ARM will not beat x86; it's just a nice diversion into mobile territory. Your ranting about ARM is no different from "he who shall not be named" ranting about PPW.

I dare not mention it, as he would surely show up...
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810
The truth is..... You Can't Handle The Truth!

No new SR news makes for a boring forum.

All this ARM vs x86 stuff is fun speculation, but how long will it be before you can run BF4/Crysis 3 on an ARM platform? First they have to catch up hardware-wise, then software developers have to be convinced to switch over. Maybe 5 years, by the time the next consoles are being designed.

Lots of things will be happening in that time frame: 450mm wafers, 7nm process tech (Intel), large on-package memories (HBM/HMC).

What will ARM desktops look like? Chromebooks/laptops? Actual desktops? Tablets with docked modes to enable higher wattage performance? Will Microsoft continue Win RT after the abysmal sales or will an Android/Ubuntu desktop OS emerge?
 
i have a speculation on SR, but i have a bit of a hard time believing it myself. it's full of 'out there' assumptions.

i think amd made another short-sighted decision sticking with a desktop/server-oriented cpu uarch in bd and its derivatives instead of launching bobcat in 2009-2010. i remember an old gpu roadmap from s/a showing 32nm gpus (possibly gcn 1.0). i consider bulldozer (zambezi) to be essentially a 40-45nm cpu design, as it behaves like one, more or less. in the same way, i think gcn 1.0 may be a 32nm gpu uarch: more desktop-oriented than laptop, and much less ultramobile-friendly.

i am not saying amd failed to make mobile apus and gpus out of bd (via its derivative, pd) and gcn, but it looks more and more like the uarches were optimized for larger nodes and were not properly tuned for smaller nodes when those became available, maybe due to lack of time, funds and resources. i attribute part of gcn's higher power use to that lack of tuning. another part is how tsmc had to resort to half-node shrinks; 22nm would have been better for kabini and the sea islands gpus. this will carry on till 14nm. if amd's x86 survives till then, we might get the apus/socs/gpus the way (i assumed) they were meant to be made.

as for arm, the benchmark engineering that came to light recently was quite amusing. it was a big 'suck it' to people who naively thought that all arm vendors play nice with each other while innovating, advancing technology and ending big bad intel's evil monopoly. truth is, they're businesses. amd may end up facing multiple cheating competitors instead of just one. scratch that: amd will face cheaters.

one interesting thing (among numerous others) was that all the cheating devices had socs from uarch licensees (and cheating veteran++(!!) intel :p). i assume core licensees are less capable of implementing benchmark optimizations. if true, it might make core licensees appear more honest regardless of their intentions, albeit helpless against experienced cheaters. the arm ecosystem is evolving like x86 did, but at a faster rate, since intel can't get in the way. that means the bad things will also happen much earlier and on a much bigger and deeper scale.

now i wonder how amd obtained their 'superior' seattle performance numbers...... :ange:
 

Ags1

Honorable
Apr 26, 2012
255
0
10,790
Just speculating, but I think the PD APUs might be equivalent to a mobile i5 (2C/4T): a bit behind on single-threaded performance, but a bit ahead on multithreaded. The APUs are already good for multithreaded work, and any improvement in single-threaded performance will boost multithreaded performance too.

I think Iris Pro must have made AMD rethink a lot of things - they no longer have the fastest integrated mobile graphics, although with Iris Pro systems costing over 1000 euros Intel has only won a paper victory. I think this might have pushed back the PD announcements as AMD had to protect their graphics advantage by reworking some stuff in that area. It's good though that Intel is finally competing on the graphics front - I don't remember my GMA950 laptop with much love.
 

8350rocks

Distinguished


Hmm...so far as I know, Iris Pro is still below AMD's top of the line APU graphics. Unless something's changed in the last few months...?
 

8350rocks

Distinguished


2 things:

1.) Win 3.1 was basically DOS with a GUI, and a poor one at that. Win95 was in the 586 days...code abstraction has reached entirely new heights many times since Win95.

2.) The ARM version of Linux runs well on ARM...however, it's not nearly the same snappy OS it is on x86. As a matter of fact, Linux on PowerPC is really snappy compared to ARM as well.

Maybe AMD could talk to IBM about using PowerPC architecture to try to unseat x86 too...? Oh, nevermind!!! Apple tried that with Macs for about 10-15 years...
 

8350rocks

Distinguished


Oh! You're talking about the $600 BGA jobs that require a $250-300 MB and aren't available to the general public...

Well, even if they somehow managed to be as good as Richland APUs on the GPUs, I am not at all concerned, though frankly...I would be amazed if there was really a 20% gap between the low and high end of their iGPUs.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


Yeah, and Gentoo with KDE 4 runs extremely well on my Core 2 Duo at 1.66GHz with Mobility X1600 graphics. I don't see the point.

Some of you thinking that ARM is going to show up like this are making a huge mistake. I know firsthand that even just getting people to switch to Linux on x86 is extremely hard, because they lose software they like. If you're lucky you can get WINE working, but that's not going to be an option on ARM.

The odds of ARM being successful on desktop are even worse than the odds of Linux Desktop being the dominant force.

You lose some good Windows software when you go with Linux. And don't get me wrong, there are really great FOSS alternatives, some even better than the closed-source ones. HOWEVER, people don't want to re-learn how to do something with another tool that is at best marginally better than what it replaces.

The only way I could even remotely fathom this happening is if a major, major ARM vendor bankrolled porting of large x86 Windows programs to not only Linux, but ARM as well.

Closed source software does not do well in an open source environment. It becomes "that one program" that you can't do anything with besides install binary blobs.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790



Therefore you decided to ignore the data and benchmarks given to you, and instead searched the entire internet to find something that you believe supports your anti-AMD/anti-ARM view:



LOL

Before starting, some basic keywords: low-cost, cluster, ARMv7, development boards, Cortex-A9...

Then some key parts from the review that you omit.

First, you omit how the cluster compared to the Intel Atom and the AMD Zacate.

The PandaBoard ES did better than the Intel Atom hardware that was tested in terms of performance and energy efficiency

Comparing the Effimaß cluster to the AMD Fusion E-350, the Zacate APU had better raw performance but the ARM cluster was the performance-per-Watt leader.

Therefore the old A9 _phone_ chip was able to compete with two desktop/laptop _x86_ chips.

Second, you believe that ARM was outperformed by a top i7-3770k, but you ignore that this review was _not_ about comparing a chip to another chip. It was about comparing an i7 chip against a cheap _cluster_ made with ARM _development boards_.

While this do-it-yourself ARM cluster configuration is not the most effective setup right now, it will be interesting to see how the cluster performance works out for the next-generation ARMv8 hardware as well as the many ARM core servers coming out, such as the upcoming products from Calxeda.

Indeed, the _cluster_ was not effective. Michael built it merely for development/testing purposes. His goal is to port the benchmark suite to ARM, not to build an efficient or fast cluster. In fact, the scaling in the NPB LU benchmark that you try to use for evaluating the efficiency of ARM was rather bad:

The last NPB test for looking at the scaling is LU.A. The LU pseudo-application is a Lower-Upper Gauss-Seidel solver. This workload did not scale as well across the cluster with going from one to twelve cores just resulting in a 4.8x performance improvement. However, this was not a failure of MPI or the PandaBoards with the scaling when going from one to two cores on a single PandaBoard ES just yielding a 29% improvement.
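A minimal sketch of the parallel efficiency those quoted figures imply (speedup divided by core count, as a percentage of ideal linear scaling):

```python
# Parallel efficiency implied by the quoted LU.A scaling figures:
# speedup / core count, as a percentage of ideal linear scaling.
def parallel_efficiency(speedup: float, cores: int) -> float:
    return speedup / cores * 100

print(f"12 cores (cluster): {parallel_efficiency(4.8, 12):.0f}%")   # 40%
print(f" 2 cores (1 board): {parallel_efficiency(1.29, 2):.1f}%")   # ~64.5%
```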

The _cluster_ offered only a part of the maximum performance of all the ARM chips together. The efficiency losses are related to the cluster setup, such as the 10/100 Ethernet and a shared NFS mount from an SDHC card. Therefore, that review was _not_ comparing ARM to x86 efficiency.

Third, the x86 chips used optimized software, the ARM cluster used buggy software:

Ubuntu 12.10 offers some remarkable ARM performance gains on the OMAP4 hardware due to the newer Linux kernel (version 3.4 at present, compared to Linux 3.2 on Ubuntu 12.04) and the major compiler upgrade (GCC 4.7 vs. GCC 4.6), but due to some early configuration problems with the post-alpha-one snapshot, the installations were reverted to Ubuntu 12.04 LTS. Ubuntu 12.10 will be loaded up on this compute cluster in the coming weeks and should result in double-digit gains.

Fourth, those Fortran-based benchmarks were testing floating point (FP). As I said before, but some people here don't read, FP has been the weakest aspect of the ARM arch. It is not a problem with RISC or ARM; you simply don't need a strong FPU in a _phone_. Therefore, ARM _phone_ chips such as this tested A9 have a weak FPU. This changes dramatically with the new A57, based on the new ARM64 arch. FP has been improved a lot in the new arch. In fact, the FP performance of the new A57 is about _4x_ that of the A9.

You mention how old Zacate is, but Jaguar is not 4x faster than Zacate.

In any case, heavy FP computations will be done on GPGPUs/accelerators. No top x86 supercomputer uses CPUs for FP. The ARM supercomputer will use ARM + CUDA GPGPU. Also, ARM is a founding member of the HSA Foundation.

Finally, good work trying to compare the raw performance of a dual-core 1.7GHz _phone_ A15 chip constrained to a single-digit TDP against a dual-core + HT 2.13GHz i3 allowed to consume ~10x more power. LOL

Moreover, you gave only a part of the quote, cutting off the most interesting parts. The whole quotation:

Overall the performance out of the Samsung Exynos 5 Dual on the new Chromebook is very attractive. While Ubuntu on the Chromebook isn't perfect (the broken touchpad and sound support, etc), for those looking towards the ARM Cortex-A15 for development purposes or as a test bed for experimenting with Linux on ARM, the Samsung Chromebook is a very attractive bargain priced at $250 USD.

It was surprising to see the wide performance margin the dual-core 1.7GHz A15 had over the quad-core 1.4GHz A9 in the Tegra 3. In a majority of the cases, the Samsung Exynos 5 Dual also easily beat out all of the tested Intel Atom processors. And then there was the Intel Core i3 330M, which was faster, but on the performance-per-Watt this would be a very different story. The Core i3 330M has a 35 Watt TDP while the Exynos 5 Dual operates within a few Watt envelope. Unfortunately due to the varying displays and other hardware differences, an easy power consumption / performance-per-Watt comparison couldn't be done for this article.

I.e., a dual-core phone-TDP-rated ARM chip beating a dual-core + HT laptop/desktop-TDP-rated x86 chip with higher clocks.



After being corrected on innumerable occasions, you persist in comparing _phone_ ARM chips to _DT_ x86 chips. LOL

Let us use your 'logic' for a crazy instant. We take a sub-10W TDP Temash chip and compare its raw performance to a sub-100W TDP Haswell chip. Using your 'logic' we conclude that Intel "blows" AMD away. Of course this is a silly comparison and the conclusion is invalid, but it is exactly what you _do_ in your anti-ARM crusade.

You forget again that 'my' performance projections are based on data disclosed by both ARM and AMD. It is funny that you accept marketing slides from AMD when they are about Kaveri/Steamroller, but you call them "crazy" when they are about Seattle's A57. Double standard.

You continue grossly misinterpreting my position and just ignoring what I said.

Who told you that "ARM will rule the world in the next 6 months"? Give us the link to the message saying that.

You cannot, because this is _your_ claim and yours alone. The fact that you continue attributing these kinds of crazy statements to others, after being corrected and after being warned to stop this silly misattribution, clearly indicates what your real goal here is. I predicted it above.

I wrote above some basic thoughts about the Phoronix review of the _cluster_ of 12 old ARM phone cores. Keywords here are _cluster_, _phone_, and _old_.

I find it funny that you mention "he who shall not be named", because both of you use the same tactics. He also ignores what is said and repeats the same mistakes/nonsense despite being corrected again and again.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Nvidia has shown BF3 running on Tegra 5:

http://wccftech.com/nvidia-demonstrates-gen-tegra-5-logan-soc-running-battlefield-3-calls-ipad-graphics-vintage-1999/
 
Yeah, they do it by adding eDRAM on another die in the same package as the CPU. It's 128MB of super-fast memory that acts as a cache and graphics buffer. It also seems to increase the price, which kind of defeats the purpose of such chips. In the upper mainstream / high-performance segment you're going to be buying a dGPU; it's in the low-budget / lower-mainstream segments that you're going to be looking to save money by going with an iGPU/APU setup.
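As a rough, purely illustrative check of what fits in 128MB (the resolutions below are my own examples, not from the post):

```python
# Back-of-the-envelope: how much of a 128 MB eDRAM a few common
# render targets would occupy (RGBA, 4 bytes per pixel assumed).
EDRAM_MB = 128
for w, h in [(1366, 768), (1920, 1080), (2560, 1440)]:
    mb = w * h * 4 / 2**20
    print(f"{w}x{h}: {mb:5.1f} MB ({mb / EDRAM_MB * 100:.0f}% of eDRAM)")
```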
 

Sigmanick

Honorable
Sep 28, 2013
26
0
10,530
So how many people thought the Origin PC story sounded like a setup?
Apparently Charlie has some opinions about the matter.
http://SemiAccurate

I do like Charlie's articles and find them accurate more often than not.

Key points: Why would a small boutique shop put out a press release for a simple inventory change?
Other PC builders may or may not have been called by an Nvidia rep named Bryan (not confirmable, as no names are used, so it falls under hearsay).
Nvidia has no real answer to the R9 series until Q4 2014.

Also, China has lifted its nationwide console ban. More Xbone and PS4 $ale$ = more money for AMD R&D. Maybe we will see Steamroller on AM3+. http://streetinsider
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
@juan.

You're pretty delusional if you consider Atom and Zacate DT parts.

I.e., a dual-core phone-TDP-rated ARM chip beating a dual-core + HT laptop/desktop-TDP-rated x86 chip with higher clocks.

Not to mention this is a 2013 Samsung CPU vs. a 2010 Intel CPU.

As for ignoring what you say, it's because it's coming from you. You don't provide sources other than marketing.

Nvidia has shown BF3 running on Tegra 5

http://wccftech.com/nvidia-demonstrates-gen-tegra-5-log...

A Phenom II X2 550 can play BF3 single-player and keep up with Intel. That doesn't mean you can actually play the game "the way it's meant to be played" on a multiplayer server.

[image: Battlefield3.png, BF3 CPU benchmark]


 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
@Sigmanick I read both the Charlie and PCWorld articles, and one of them (I am not going to search for which one now) gave a link to Bryan's page in Nvidia's people directory.
 

the origin pc story is just a nvidia+origin pc combo pr stunt that will soon be gone and forgotten.

lifting the console ban is both good and bad for amd. good, because amd gets a cut from china sales; their semi-custom business will be happy.
bad for amd gaming, because all the cheap desktops, apu-based pcs, laptops powered by amd cpus and apus, and gfx cards might take a hit. people might not pick up new pcs or upgrade to pricier gfx when they can game on consoles just fine, and likely at a cheaper price (unless china forces console prices up too high). intel will likely take an even bigger hit.
it'll likely weed out the entry-level pcs that have been lingering in people's homes for years, e.g. athlon ii x2/x3 + 760 mobo, llano pcs, phenom i x4 pcs, intel c2d, c2q, ivy/sandy bridge laptops (hd3k/4k igpu) etc.
it'll be interesting to see how competitive pc gaming reacts to console availability.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I said "laptop/desktop-TDP" rated. It is pretty evident you didn't understand.

Also, the same reviewer was impressed with the results he got and shared his enthusiasm about forthcoming ARM products from Calxeda and AMD...



Do you mean a 2013 chip released backward in time (2012) thanks to a time machine?

Also, if you claim that a more modern Intel chip would win, then you again ignore how, a few weeks ago, a dual-core Apple chip humiliated Intel's more modern and strongest quad-core attempt.

Even Intel fans are accepting the hard facts:

Dude, even I will admit that it seems Intel needs to admit defeat to ARM. 1.3GHz dual core A7 with Smartphone class TDP is beating 4 core 1.4-2.4GHz Bay Trail with Tablet class TDP. And that's "28nm" vs 22nm.

The IPC of the A7 is on par with Ivy Bridge parts! Clock that sucker to 3GHz and it'll be a no-worry replacement for the MacBook Air line, and a threat even with x86 emulation.

AMD's heads are not fanboys: AMD is replacing its own Temash tablets with ARM tablets, and its own Jaguar servers with ARM servers. Everyone is applauding the move except the ARM-haters.



In your imagination? Sure.

In the real world I have provided third-party benchmarks/comparisons as well. Your insistence on ignoring them doesn't make them disappear ;-)



You missed the point once again.
 

Ranth

Honorable
May 3, 2012
144
0
10,680


I don't see this happening... Why would AMD develop Temash if it is going to be replaced shortly afterwards? To me that seems like a waste.
 
Also, if you claim that a more modern Intel chip would win, then you again ignore how, a few weeks ago, a dual-core Apple chip humiliated Intel's more modern and strongest quad-core attempt.

Even Intel fans are accepting the hard facts:

Intel crushed in CPU-only tasks and was middle of the pack in graphical tests.

PS

The Intel iGPU stinks.

So understand: you are comparing platforms, not CPUs. In purely CPU-based benchmarks, Intel crushes ARM, even at the low end. Its lack of a competitive iGPU is what holds it back in graphical benchmarks.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810



It looked like very turned-down graphics and hardly any movement. Was a shot even fired? For half the demo there weren't even characters on the screen.
 
Something interesting (or astounding) here:

http://www.geforce.com/games-applications/pc-games/call-of-duty-ghosts/system-requirements

Minimum System Requirements

OS: Windows 7 64-Bit / Windows 8 64-Bit
CPU: Intel Core 2 Duo E8200 2.66 GHz / AMD Phenom X3 8750 2.4 GHz or better
RAM: 6 GB RAM
HDD: 50 GB HD space
Video: NVIDIA GeForce GTX 550 Ti / ATI Radeon HD 5870 or better
Sound: DirectX Compatible Sound Card
DirectX: 11
Internet: Broadband connection and service required for Multiplayer Connectivity. Internet connection required for activation.


Recommended System Requirements

Video: NVIDIA GeForce GTX 780

WTH? I'm hoping these are just placeholders. A recommended GTX 780 paired with a minimum-spec C2D? That GPU would be seriously bottlenecked if true...

Also indicating CoD ain't scaling worth a damn. But this would go beyond even my pessimism.

The worst part is the game looks "meh" anyway.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


I guess one's definition of "humiliated" varies. Nice cherry-picking of comments. Here's another:


"lol. The A7 wins exactly 1 benchmark and none of them are multi-threaded. Baytrail-T is a quad-core. I admit that single-threaded performance is more important but it's more or less a tie there while BT would destroy A7 in a multi-threaded benchmark since it has double the cores."
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


As it was said before:

AMD has historically had a hard time getting its low-power chips into tablets. The old Brazos-based Hondo APU barely made it into any slates, and the newer Temash and Kabini chips don't have many design wins. The lowest-power Temash variant still has a relatively high 3.9W TDP, though; AMD can surely do better with ARM-based chips. Radeon graphics technology should make those offerings unique, and it could help AMD gain a foothold in the tablet market.
 

Ranth

Honorable
May 3, 2012
144
0
10,680


That is not what I am asking... This is: why even go through the hassle of designing a freaking CPU architecture if you will replace it with ARM shortly afterwards? That makes no sense; why not go with ARM in the first place?
 