AMD CPU speculation... and expert conjecture

Page 298
Status
Not open for further replies.

i hope so.. cautiously.. it's wccdefghidon'tknowhowtospelltechbutirepeatanyway.com after all. i remember them going crazy over very early leaked trinity synthetic benchmark numbers which later turned out to have been benched with a 7950...
this is why i always have NaCl ready.. y'know, the stuff you get when you mix NaOH with HCl (geddit?).. okay, this chemistry bit is going sideways....
i still want them to feed the igpu properly. i mean... intel showed that with EDRAM (i.e. high bw), you can get higher performance with weaker shaders.

actually... hawaii and high end discrete gfx cards are a different matter. for example, if amd had skipped pcie gen 3 support, citing that their own cpus have only gen 2.0/2.1 and that gen 2.0 isn't yet saturated, amd coulda lost marketshare to nvidia. it's not about technology or performance, it's about showing checklist items on a promo slide to keep up with your competition, hold on to marketshare and... make moniez. the majority of pcs that do gaming (not strictly gaming pcs (in terms of config)) have intel in them, and since ivb, intel has put in pcie gen 3.0 support. so amd went along with it. bottom line, amd doesn't need FX for the gfx division to survive; their gfx dept has been doing pretty well without amd's high perf. cpu lineup's help.
hpc and high revenue sectors are different though. i remember reading somewhere that pcie 3.0 does show some kind of advantage (in xeon pcs, mostly) for the cards that benefit from it.
i wonder how hsa's introduction will influence amd's future discrete gfx though. amd is already separating their mobile dGfx for their own apus - a sign of things to come.. i assume.


 

8350rocks

Distinguished


Phenom III FTW!!!!
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


Yup. People show numbers that desktops in general are losing marketshare, ignore the fact that gaming PCs are increasing in numbers, and then say AMD is leaving the dying DT market. Do you see the failure in logic there?

AMD wants a gaming platform, they have for a while. Remember the hype around FX 9590 being a part of the "ultimate gaming machine" with Radeon GPUs?

I do feel a lot like AMD is going to start releasing GPUs and CPUs together, so you can just flat out buy a full AMD rig as a single platform with the latest stuff. Imagine you get free games with Gaming Evolved, and then if you couple it with an AMD FX you get more games - all of them games that run well on FX CPUs.

Not to mention Intel has stopped caring about increasing CPU performance; they are more concerned with GPUs and power consumption. That makes it a lot easier for AMD to catch up: if they make a faster chip, they're not really competing with Intel, since that's not what Intel cares about anymore.

Also, people who think that AMD is going APU-only assume that AMD saying they will "focus on APUs" means AMD will give up everything that's not an APU.

Intel focuses on "mainstream" socket and not "enthusiast" LGA 2011 socket and it doesn't mean Intel has given up on "enthusiasts."
 

@both:
going through amd's site, i came across this old link, which shows an amusing bit of info:
http://sites.amd.com/us/game/technology/Pages/crossfirex.aspx
check out the test rig amd's 'performance lab' used. :)

amd does have designations for all-amd systems. does the name 'scorpius' ring a bell?
http://sites.amd.com/us/promo/processors/Pages/operation-scorpius.aspx
this link has other system names with some benchmarks(!)
http://www.nordichardware.com/CPU-Chipset/amd-scorpius-and-vision-black-fx-benchmarks.html
for example, this one below is a 'scorpius' system. :LOL:
http://www.tomshardware.com/reviews/fx-overclock-crossfire-ssd,3098.html
good times..
 

crimson87

Honorable
Oct 6, 2012
53
0
10,630


I believe Intel's situation is different. I am going to build a rig in the next 18 months or so, waiting to see what the consoles can deliver while I wait for DDR4. By that time I am sure Intel will offer a processor 10% faster, but with a graphics core at least 40% faster than Intel HD 4600, putting it more or less at 6670 (DDR3) level (even more if we take into account what DDR4 can do for integrated graphics). That's a pretty decent card for games released until now, or indie games from the next couple of years. Most important of all, I'll end up with a beast processor that, I am almost sure, will end up powering an SLI setup of high end Volta cards to dominate all games released in this generation.

If at that point in time I go AMD instead and get their only offering (an APU), I'll end up with something as good as an Xbox One / PS4 in graphics performance (which will be entry level for PC by that time) and a decent processor for everyday use or basic gaming that won't be able to power a high end card.

My point is that with Intel you can do this 2-step strategy, since they offer entry level graphics and great processing power. With AMD, there is no room for a significant graphics upgrade. You may think you'll get more bang for your buck at first, but then you'll have to build a new rig and end up spending more money.
 

8350rocks

Distinguished


Intel updates will require a new MB each time from here on out, per their announcement that Broadwell will be mobile only, and that the Haswell refresh due next year will have "technologies that will not coincide entirely with the Z87 chipset".

It's another way to rook you out of money basically.

AMD will have an option moving forward...you forget, the iGPU is on die on the APUs as well, meaning you can get the next generation, more powerful APU on the same MB going forward.

If you buy FM2+ now, Kaveri and Carrizo should both be on FM2+ based on current trends looking forward. The same, unfortunately, cannot be said of Intel.
 

montosaurous

Honorable
Aug 21, 2012
1,055
0
11,360
I don't foresee anything exciting from Intel until maybe Haswell-E or Skylake. The Haswell refresh should be boring, as well as Broadwell (if it is even released on the desktop). If you have a need to upgrade, and have the money, do it now. There really isn't too much worth waiting for (except Volcanic Islands maybe).
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


This is why the desktop market is dying. It has less to do with people not using DT and more to do with no one having a reason to upgrade.
 

Up until the i5-4430, Haswell is a decent buy. For an enthusiast grade system, Ivy is superior due to temps when OCing.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


You cannot compare results obtained by two sites using different software, different configs... but you can compare the leaked benchmarks for Richland and Kaveri, which imply (if all this is true) that Kaveri has a memory controller improved by roughly 33%.
 

i don't quite understand what you're saying. i thought i was comparing sandra's memory bw benchmarks from toms to those of wccefghwutheHistech.com. if anything, tom's richland test pc seemed better configured and set up. i am not disregarding underlying hardware and software tweaks, nor the possibility of a fake. that's why i add 'if' to my speculations, despite them seeming very disappointing. i call it disappointing because as launches get closer, leaked info tends to get more credible, even if it's from one of those repeat-verbatim websites.
if there is a 33% improvement, it coulda come from any number of sources. for example, validation for higher spec ram at stock settings - like richland's hype about ddr3 2133 ram and 'improved ram perf', which turned out to be a very specific case where 1.5v ddr3 2133 ram was used with only the amd a10 6800k.
it could also be my lack of in-depth knowledge of hardware that's preventing me from making a better guesstimate.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


You cannot compare Kaveri bandwidth on site A to Richland bandwidth on site B.

You have to compare Kaveri bandwidth on site A to Richland bandwidth on site A.

If the benchmarks on site A are right, Kaveri comes with a ~33% better memory subsystem than Richland.

Now go to toms and multiply _their_ Richland bandwidth numbers by ~1.33, and that should be close to what Kaveri will hit when toms benchmarks it.
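A quick sketch of that "scale site B's baseline by site A's ratio" estimate in Python. The 16 GB/s Richland baseline below is a placeholder for illustration, not toms' actual published number:

```python
# Sketch of the cross-site bandwidth projection described above.
# The baseline figure is hypothetical, not an actual measurement.

def scaled_estimate(baseline_gbps: float, improvement: float) -> float:
    """Apply a fractional improvement (e.g. 0.33 for ~33%) to a baseline."""
    return baseline_gbps * (1.0 + improvement)

richland_on_toms = 16.0  # GB/s, assumed Sandra result on toms' setup
kaveri_projection = scaled_estimate(richland_on_toms, 0.33)
print(f"projected Kaveri on toms' setup: ~{kaveri_projection:.1f} GB/s")
# -> projected Kaveri on toms' setup: ~21.3 GB/s
```

The point is only that the ratio transfers across test setups, not the absolute numbers.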
 


True, but why would you need to upgrade CPUs? It's not like you're buying an AMD chip and needing to upgrade every six months as you try to catch up on performance...
 
I think just the contrary. Intel is at the peak of x86, whereas ARM is starting to realize its true potential. Look at AMD: they have benchmarked the new A57 core and found it is better than their own Jaguar core, which was already an optimal peak in x86 _64 bit_ space.

Better at what? On what software? Maybe the X86 code is REALLY unoptimized compared to the ARM code. Etc, etc, etc.

Again people: X86 isn't going anywhere because of the 20 years worth of software. So please, this whole "ARM is going to take over" is just nonsense.
 
SandraSoft, Staysoft, BabySoft, DryMark, FurMark and 3DMark mean absolutely nothing to me, simply because synthetics are deceptive compared to the real world. My case in point was my youtube microstutter video of a 2600K and GTX 580 hitting micro stutter patches. Simply put, Intel are king of synthetics, but at the end of the day it's hard to discern an AMD part from an Intel one when neither is gimped relative to the other.
 

8350rocks

Distinguished


You and I don't often agree... however, I agree totally with this statement. If ARM eventually replaces x86, it would likely be a good 15-20 years out. The software ecosystem for ARM doesn't have much in the way of a full blown DT OS, and likely won't for some time to come. So saying "ARM will rule DT soon" is like saying M$ is going to release a 64 bit only OS this year... it won't happen. You can say it many times... (we all know M$ has talked about full 64 bit support for 10 years now...) it still doesn't make it any more true than it was the first time someone said it.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Better at raw performance and performance per watt. I have no reason to believe that x86 code is not optimized.

Apple will be switching to an ARM world. Nvidia is preparing an entire range of ARM-based products, from phones to supercomputers, including servers, tablets, desktops... and several of us are convinced that the next gen game consoles will be ARM based. This gen was close to being it.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


DT OSes have existed for years. When more and more people switch to ARM for desktop, server... more options will exist, of course. The only one who believes that MS will play some role is Nvidia.
 

why not? raw memory bandwidth isn't software dependent. besides, it's the same benching software, and even the same test.

i'd rather not multiply tom's richland bw measurement. multiplying doesn't even make sense here. ~85 GB/s is gddr5-level memory bandwidth (using max. 16 GB/s as the baseline); that's how much bw the igpu actually deserves to flex its muscle.
i can't use linear improvement either, because that makes kaveri's memory bw look worse for the igpu it's fitted with.
using toms' hsw/richland memory article as a reference, if i add 33% (assuming linear improvement) to the maximum measured 16 GB/s figure (with ddr3 2133 ram), i get roughly 21 GB/s - not very encouraging considering what the competition gets out of the same spec. still, amd's shaders are so starved for bw that at ~21 GB/s they'll deliver higher performance. just not the performance they should be able to deliver.
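For what it's worth, the raw numbers in that estimate check out; here is a back-of-the-envelope sketch (the 16 GB/s measured figure is the poster's cited Sandra maximum, assumed here rather than re-verified):

```python
# Back-of-the-envelope DDR3 bandwidth math for the figures above.
# Theoretical peak = transfers/s x bus width (bytes) x channel count.

def ddr_theoretical_gbps(mt_per_s: float, bus_bytes: int, channels: int) -> float:
    return mt_per_s * bus_bytes * channels / 1e9

peak = ddr_theoretical_gbps(2133e6, 8, 2)  # DDR3-2133, 64-bit bus, dual channel
measured = 16.0                            # GB/s, the cited Sandra maximum (assumed)
projected = measured * 1.33                # linear +33% estimate

print(f"theoretical peak: {peak:.1f} GB/s")       # -> 34.1 GB/s
print(f"+33% projection:  {projected:.1f} GB/s")  # -> 21.3 GB/s, vs ~85 GB/s gddr5-class
```

So even a 33% better memory controller still leaves the measured figure well under both the theoretical dual-channel peak and GDDR5-class bandwidth, which is the poster's point.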
 
Better at raw performance and performance per watt. I have no reason to believe that x86 code is not optimized.

But my point is this: you aren't running the same code. You are running the ARM-compiled version of the SW, on an ARM OS, against the X86 version of the SW, running on an X86 OS (likely Windows).

So saying "ARM is getting more performance" is a "little" bit disingenuous. You'd have to figure a good 10% or so margin of error in results simply due to the different SW and platforms being run on.

The DT Os have existed for years. When more and more people switch to ARM for desktop, server... more options will exist of course. The only who believes that MS will play some role is Nvidia.

Desktop OSes without x86 support comprise <2% of the market.

I rest my case. This is no different than the argument that PPC would replace X86 over two decades ago. Apple moving to PPC was "proof" that X86 was going to die. Funny how that worked out, isn't it?

There is interest in ARM simply because X86 does not scale downward particularly well. This is nothing new. That being said, Windows itself doesn't scale downward very well either. X86/Windows is going to remain the desktop platform of choice, and ARM/Android will remain the mobile platform of choice. The end.
 

anxiousinfusion

Distinguished
Jul 1, 2011
1,035
0
19,360


"x86/x64 will not die. Contrary to popular opinion, this is not bad. I've worked on the instruction decoder and there is a lot of FUD out there. Every time we look at adopting another ISA, it makes no technical sense."
- Intel CPU architect
 

8350rocks

Distinguished


Let's get a few things straight here:

1.) ARM != better raw performance. You can't scale ARM upward as well as x86. It also does not run the same density of code... x86 can run some *very* dense code. ARM does not have the means to do all of that yet. Which leads to my next point...

2.) It is better P/W right now, up to a certain threshold. Once they start adding the things that x86 has as advantages, the P/W is going to go down the toilet, because the hardware will start to become bloated, much like the hardware for the very mature x86 is now. It's all necessary in this day and age, but it's not conducive to good P/W.

Recall back when GPUs didn't need an extra power connector in a PC? They do now...

Recall back when a DT chip worked in a Laptop? They don't now...

There are reasons for this. Power consumption grows rapidly as you start adding things like quad channel memory controllers and other features that ARM doesn't have but x86 can/does. When ARM has those features, it will no longer be an attractive low power option, because of the complexity and increased power draw.
 
i don't understand why the future complexity of the hardware always gets disregarded. the hardware companies don't care whether we have 'good enough' performance/experience right now (we do, and have had it for a few years now). they care about selling more stuff every upcoming year.
 
IF in 5, 10 or 20 years everything goes cloud, it will make little difference whether you have x86 or ARM or something else. But in the short term, why would ARM bother trying to compete in desktops when it's a shrinking market that everyone else is moving away from? It does compete in the netbook side of the market.
 