AMD CPU speculation... and expert conjecture

Page 601

8350rocks

Distinguished


I read that bit...

Essentially, Nvidia is still using in-order execution, and they expect to optimize software so that it makes up for the performance deficit of not going OoO. Additionally, power consumption will be low, but that is because the advanced scheduling and branch prediction logic required to run OoO execution is markedly absent.

Notice how the benchmarks shown are all either memory benchmarks or generic integer/FP benchmarks.

I will be interested to see what it really does in terms of performance, simply because I am curious how much Nvidia overstated things with the propaganda this time around.
 


It will depend a lot on the task. Tight loops you can easily optimize in SW and get good gains, but other processing? Not so much. It'll be VERY application dependent.
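
To make that concrete, here is a minimal sketch of the two extremes (hypothetical code of mine, nothing from Nvidia's white paper):

    #include <stddef.h>

    /* Tight loop: a compiler (or a dynamic optimizer like Denver's) can
       unroll this into independent accumulators and schedule the adds
       statically, so even an in-order core keeps its units busy. */
    double sum_array(const double *a, size_t n)
    {
        double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
        size_t i;
        for (i = 0; i + 4 <= n; i += 4) {
            s0 += a[i];        /* four independent dependency chains */
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        for (; i < n; i++)     /* leftover elements */
            s0 += a[i];
        return (s0 + s1) + (s2 + s3);
    }

    /* Pointer chasing: every load depends on the previous one, so there
       is nothing for a static scheduler to reorder, and every cache miss
       stalls an in-order pipeline dead. */
    struct node { struct node *next; int val; };

    long sum_list(const struct node *p)
    {
        long s = 0;
        while (p) {
            s += p->val;       /* serialized on the p->next load */
            p = p->next;
        }
        return s;
    }

The first function is the kind of code where software scheduling recovers almost everything; the second is the kind where it recovers almost nothing.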
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
Nvidia Denver @ 2.5 GHz compared to a 1.4 GHz Celeron to barely pull ahead... man, that's soo fast that a full Haswell OC'd @ 5 GHz will never be able to catch a 3 GHz Denver CPU.

Marketing wins again.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


Ah, so the numbers come out. Compare to Intel's lowest-end model with no features and then talk about performance. The Haswell Celeron has NO Hyperthreading, AVX, AVX2, FMA3, Turbo, or TSX (although Intel just admitted TSX is completely broken on Haswell and will be disabled anyway).

If you're looking at custom server solutions where you're compiling your own software, the lack of AVX, FMA, etc. on the Haswell Celeron really hurts Intel in this comparison. You forget you are talking to a CISC "add more instructions" fanboy here, one who runs Gentoo and has had very, very good experiences with "MOAR INSTRUCTIONS MUH -march, -mtune, and -O2/-O3/-Ofast" on both Intel and AMD hardware.

So really, Nvidia is matching Haswell only as long as Haswell isn't using AVX, FMA, or the whole core via Hyperthreading. Basically, Nvidia is cherry-picking to look its best. To see what AVX2 does, look up some Dolphin emulator benchmarks :)
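
For anyone curious, here is a rough sketch of what those instruction sets buy (a generic example of mine, not one of the benchmarks being discussed):

    /* saxpy.c -- a loop that GCC/Clang will auto-vectorize at -O3.
     *
     * Full Haswell (i5/i7):   gcc -O3 -march=haswell -c saxpy.c
     *   -> 256-bit AVX2 with fused multiply-add (vfmadd*ps),
     *      8 floats per instruction.
     * Haswell Celeron target: gcc -O3 -msse4.2 -c saxpy.c
     *   -> 128-bit SSE, separate mul and add, 4 floats at a time.
     */
    void saxpy(float *restrict y, const float *restrict x, float a, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];   /* one FMA per element when FMA is enabled */
    }

Roughly a 2-4x difference in peak FP throughput from compiler flags alone, before Hyperthreading or Turbo even enter the picture.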
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790

Correction: Denver has an 8-wide decoder and peaks above 7 instructions per cycle.

Denver is a 256-bit-wide VLIW engine. It seems evident that Nvidia could replace the ARMv8 decoder with an x86 decoder and run x86 software, if only Nvidia had an x86 license.

The Denver core has 256-bit SIMD. This gives ~80 GFLOP/s for the SoC. The quad-core Jaguar Athlon 5350 SoC peaks at only ~64 GFLOP/s. Seeing how an ~8 W phone-class SoC outperforms AMD's low-end 25 W desktop part, it is now evident why AMD avoids this competitive market like the plague.
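
For reference, the back-of-envelope arithmetic behind those peak figures (my own reconstruction, assuming single precision, FMA on Denver's 256-bit units, and one 128-bit multiply plus one 128-bit add per cycle on Jaguar):

    peak SP GFLOP/s = cores x clock x SIMD lanes x 2 (FMA, or mul+add)
    Denver:      2 x 2.5 GHz  x 8 lanes (256-bit) x 2 = 80 GFLOP/s
    Athlon 5350: 4 x 2.05 GHz x 4 lanes (128-bit) x 2 = 65.6 GFLOP/s (~64 as quoted, at a round 2 GHz)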

Finally, note that Nvidia explicitly said at Hot Chips that Denver cores are aimed at "content creation".

Tom's has an article on it with further thoughts and the benchmarks from the white paper:

http://www.tomshardware.com/news/nvidia-denver-tegr-kepler-mobile,27429.html

 

8350rocks

Distinguished


Only because those types of software can be more easily optimized to cover up the inefficiency of in-order execution.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
Intel demos DX12. DX12 gives ~50% more FPS than DX11 at the same power consumption; alternatively, at the same FPS, power consumption is reduced by ~50%. The latter case evidently means longer battery life.

http://blogs.msdn.com/b/directx/archive/2014/08/13/directx-12-high-performance-and-high-power-savings.aspx

AMD says 'no' to SteamOS and VR

"I have to say I find [Steam OS] quite hard to figure out, in the same way the future of VR is difficult," said Huddy.
http://www.gamespot.com/articles/future-of-vr-is-difficult-says-pc-chip-maker-amd/1100-6421595/
 

szatkus

Honorable
Jul 9, 2013
382
0
10,780


Not everyone buys a CPU with an iGPU.

Also, we use GT 430/630 cards at work despite having i7-2600/3770 CPUs.
 

jdwii

Splendid


Then explain Intel's superior market share in graphics? If someone bought a CPU without a GPU today, they would have to be buying an FX CPU, and wtf is doing that for anything that low-end? Old computers, maybe, but then again, if their iGPU isn't good enough, I doubt their board will even support PCIe.
 

szatkus

Honorable
Jul 9, 2013
382
0
10,780


CPUs without iGPU:
Athlons
Phenoms
FX
Everything on LGA2011

PCIe has been a standard feature for... I don't know... the last 10 years?
 

anxiousinfusion

Distinguished
Jul 1, 2011
1,035
0
19,360


The only markets for those at this point are businesses or individuals on older machines that need a GPU upgrade on the cheap.
 

jdwii

Splendid


With it being only 19 watts I guess it's not bad, but it won't do much besides simple HD video playback, something a $250 build can do. Pretty sure AMD is better on price/performance with the R7 240, and power consumption on those is 30 watts.
http://us.hardware.info/reviews/5061/6/xfx-radeon-r7-240--250-passive-review-no-fan-low-price-dirt-showdown--far-cry-3--hitman-absolution
 

jdwii

Splendid


DirectX 12 will be nice. It sucks that AMD has something great but doesn't want to use it for Linux.
 

wh3resmycar

Distinguished


From my experience, computers used by traders usually come equipped with a discrete GPU, as they power 3 or more monitors at a time, even when armed with a first-gen i7.

Intel's market share is misleading: yes, they are ahead in volume, but not all of those GPUs are actually being used.
 

etayorius

Honorable
Jan 17, 2013
331
1
10,780



I thought you said MANTLE was in fact coming to Linux. Does this mean AMD is giving up on MANTLE for Linux/SteamOS?
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


And it was coming, somewhat like Carrizo for desktop was coming (check the AMD desktop roadmap in my Twitter account), like the 3-module GDDR5 version of Kaveri was coming, like the 16-core Seattle was coming...

I have to admit that these days not even AMD knows what AMD will do. One day K12 is the biggest thing in the universe and the next day it is just an A57 scaled up; one day the FX CPU will make a comeback with 10 cores, another day it is 16 cores, and a week after it is 20 cores; one day Zen is an SMT3 arch and another day it is an SMT1.5 arch; one day Kaveri will be replaced in 2015 and another day Kaveri will be extended into 2016, and when you ask again, 2017 is mentioned on the desktop roadmap; one day AMD is enthusiastic about SteamOS and another day they say "meh".

I am sorry, but things are as they are. I cannot do anything more. And I will stop asking my contacts for more info about future AMD products, because I don't like looking stupid when I post something in a forum just to see it disproved/canceled a couple of weeks later. Neither do I like the way the company is being run lately; it seems that the unexpected (?) return to red numbers in last quarter's finances has changed things internally. I can tell you AMD is actually broken into two groups: one group supports Rory's management and the other group dislikes it. And I cannot say more...
 
juan, a 16 core seattle was never a possibility. arm has set limits on going beyond 8 cores with their cpu i.p. from the beginning. it is possible to build a 16 core arm soc, though. amd was very careful to keep a 16 core seattle out of their roadmaps. actually, their cpu roadmap never shows core count. almost all of the 8+ core socs i've seen so far contain some sort of customization in the cpu-cpu bus or in cache. seattle doesn't have that afaik.
 

yes. and there's this:
http://www.anandtech.com/show/7721/arm-and-partners-deliver-first-arm-server-platform-standard
A few examples of the standard:

The base server system shall implement a GICv2 interrupt controller
As a result, the maximum number of CPUs in the system is 8

edit: there may be several reasons for this. the first is certainly moniez. if you build a 16 core arm soc and it doesn't live up to the (over)hype, it's just a waste of silicon. 16 cores add die area and power use, hurt yields, and raise the end cost. arm didn't want to introduce a 16 core part initially, as they'd want customers to later buy "upgraded" 16 core parts with fully enabled features (in reality feature-neutered segmentation, like lga1150 vs lga2011).
second is also moniez. arm is hyping these for undercutting intel cpus. however, the costliest part of seattle's target machines is the memory afaik, so it's not feasible to increase cpu/soc costs.
3rd, also moniez, unfortunately. since arm's i.p. doesn't allow higher scaling, your only(!) way is to go custom, and for that you need an i.s.a. license, which incidentally sits at the top of the royalty ladder. cavium and lsi did it afaik. i don't think amd could afford that right away in its current financial situation. notice that amd's arm roadmap has a57-based devices throughout 2015.
there may be other reasons, but these are what i see within my limited scope.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Yes, the A57 core has now been designed by ARM to support up to a 16-core configuration. But in reality I was referring to AMD's original presentation of Seattle:



Not only has the 16-core version vanished into thin air, but within a few months Seattle has morphed into a processor "with four or eight ARM Cortex A57 cores".

http://community.amd.com/community/amd-blogs/amd-business/blog/2014/01/28/amd-announces-plans-to-sample-64-bit-arm-opteron-a-seattle-processors

http://www.amd.com/en-us/press-releases/Pages/64-bit-developer-kit-2014jul30.aspx

http://ir.amd.com/phoenix.zhtml?c=74093&p=irol-newsArticle&ID=1894373&highlight=

As I mentioned above, the forthcoming FX CPUs will be 8-core or 10-core or 16-core or 20-core or 3.1416-core... depending on the color of the socks you happen to be wearing when you ask and the phase of the Moon the day before.
 