AMD CPU speculation... and expert conjecture


Socket Invalid

Reputable
Mar 7, 2015

I think AMD is heading in the right direction with their A-series. A powerful processor with a magnificent graphics card all in one? Lovin' it. (I know it's off topic lel)
;)
 

yes, the downplaying is coming from nvidia, not just from trevett per se. however, his statement and affiliation (and the prior actions of said affiliated vendor) present a connection too strong for me to ignore.

you know what that color coding in your posts means. :)

i wanted to ignore the rest of the post but addressed it for completeness's sake.
from your own source:
even if it is filled with far too much Nvidia evangelism.
see, that's what i mean when i describe nvidia's actions incl. them downplaying mantle's contribution.

in the end, no matter how bitter you are with amd or how many quotes you throw at people, you cannot force your own conclusion by trying to win this kind of argument. that's how strongly amd's efforts have affected everyone concerned. you may deny it; that's your prerogative (i am trying to end this argument here because the facts won't change).
 

Cazalan

Distinguished
Sep 4, 2011


I think (hope) most people can see through the corporate politics. Mantle went as far as it could as an AMD creation. Once they passed the core of it on to Khronos, gave it a new name, and swapped some features in and out, other companies could get behind it. Squabbling over the size or percentage of the donation is just petty. We all benefit from it in the long run if you use any *nix systems.
 

juanrga

Distinguished
BANNED
Mar 19, 2013


You want to reduce everything to an Nvidia vs. AMD fight, but the words you quoted come from the president of the Khronos Group, are found in the news archived on the Khronos Group website, and are supported by other members of the group also quoted in the news release.

Call me when you have a real argument.
 
AMD to introduce its Radeon R9 300-series lineup at Computex
http://www.kitguru.net/components/graphic-cards/anton-shilov/amd-to-introduce-its-radeon-r9-300-series-lineup-at-computex/
meanwhile, nvidia let maximum pc take pics of its new titan x gfx card. the pc gpu war is slowly heating up.
kitguru doesn't mention the recent claim that amd has excess stock of current gen cards. if amd doesn't clear the excess cards, they risk having them compete with the new cards. how the heck did amd come up with an entirely new lineup? i've been expecting only one or two new flagships and the rest being rebrands ever since the news broke.


+1. this is corporate politics.
amusingly, the people singing mantle's death knell seem to be mostly tech journalists or i.s.v. people. trevett was the only one close to an authority figure who downplayed mantle's contribution, and he incidentally happens to be an nvidia v.p. i could go full conspiracy theorist and twist the events into a subtle smear campaign against amd's achievements :pt1cable:. i wonder if that's too far-fetched with the new gfx card launches so near. :whistle:
 

jdwii

Splendid



Not off topic at all. APUs are still great if you want a computer that can game without a discrete GPU, with a small ITX board and such, BUT only if you're on a budget; if not, who cares about them?
If not on a budget, a small-profile 285-970 and an i5 (plus ITX) will use 250 watts on load and be fast and quiet as well.
I'm actually going to be building a $300 PC for my dad; it's going to have a Haswell Pentium with 8GB of RAM and an SSD, and will probably use 20-25 watts most of the time.
I could probably drop to 4GB of RAM, include an unlocked Pentium and a 750 Ti, and have a $400 budget gaming machine. I dropped my i7 down to a dual core and a lot of games still played fine, but some did require a bit of work, like Far Cry 4.

A CPU for gaming really needs 4 threads, even if that's done using HT.
 

jdwii

Splendid


Great links, but man, that first one was depressing. And yes, I think Mantle is dead, but it was reborn for the better; it's not like AMD lost. They got what they wanted, and now people can use the API on everything, not just on hardware from a company with limited market share. One thing is for sure: developers care more about these new APIs than DX10/11 ever saw.

The challenge for Advanced Micro Devices is that it needs to achieve two rather different (and sometimes opposing) tasks in the first half of this year. It must significantly reduce inventory in the channel and it has to try and boost its dropping market share. If AMD does not ship anything new, its own sales and market share will decline. However, if AMD ships brand-new offerings, older hardware stalls in the channel, impacts prices and makes it difficult to sell new products.
 

con635

Honorable
Oct 3, 2013

Would you not consider the used market for a budget build? The last new budget build I did was built around FM2+, an Athlon and a 260X; for same-ish money I've got my brother-in-law's used build nearly finished: B85, Antec PSU, Haswell i5, and maybe a 280 or 280X (haven't bought the GPU yet). My own sig rig is used, bought from a local 'buy and sell' and eBay, cheaper than an i3 and 750 Ti; even the PSU was cheaper than a CX430. Of course it's not for everybody, but I think it's insanity to buy new at the moment if you're on a tight budget, e.g. 280Xs are £100-140 here right now.


 

jdwii

Splendid


Oh yeah, I would easily consider the used market. Linus made some good videos about it (my favorites from him) where he bought parts from the used market for gaming. $400 is easily enough if you are willing to wait and get the best deals.
 
Great links, but man, that first one was depressing. And yes, I think Mantle is dead, but it was reborn for the better; it's not like AMD lost. They got what they wanted, and now people can use the API on everything, not just on hardware from a company with limited market share. One thing is for sure: developers care more about these new APIs than DX10/11 ever saw.

DX10 really didn't add anything *new* to the API. It was more of a re-design to make the API work with the new driver model, which BTW was a VERY good thing when you consider how piss-poor GPU drivers were back on XP. DX11 added a little (tessellation) and put in the first attempt at multithreaded rendering, limited though it was. DX12 is adding the ability to code at lower levels to improve performance.

Notice the lack of new graphical features? That's because there's very little left to do that isn't too computationally expensive; any additional features basically require ray tracing. There's really nothing in DX12 that DX9.0c couldn't do, just at a lot lower performance. That's where DX has been going for a few years now.
 

8350rocks

Distinguished


Actually...if you are talking about extrapolating numbers toward a bigger resolution, then 1080p has little to do with anything. 8K would be far more accurately extrapolated from performance at 4K. If your frame buffer and processing are not good at 4K but excellent at 1080p, then you have a poor design with issues in frame buffer utilization efficiency, and likely bandwidth problems as well.

Perfect example:

R9-295X2 can play many games reliably at 4K between 30-60 FPS

R9-270X can play many games reliably at 1080p between 30-60 FPS (or more)

By your logic, the 270X should be just as valid at 8K as the 295X2 because we are looking at 1080p and 4K.

Do better "research" next time, your non-engineer is showing.



Truthfully, you are not contributing anything useful to the discussion besides venom...and your alter ego jdwii is not helping much either.

There is an Intel thread if you want to bash AMD.

For all of you who own AMD hardware and complain...aside from benchmark numbers being slightly lower, are you honestly unhappy with what your hardware can do? If so, why?



Actually, it is about 10-12% on average, depending on the task; 15-20% is not out of the question. The issue is, you lose some of that progress with lower clocks...



If they can pull 40% more out of Zen, they will literally be on par with or better than Haswell, and not far behind Skylake.

Yet you are doom and gloom over it?

Think about what you said: 40%. Considering that since SB, Intel has not gained more than 30% improvement across 3 generations...you are talking about a bigger increase than that in one leap.

That would be an engineering marvel above and beyond the likes of which even Intel could claim to have pulled off...ever.

Yes, they are shooting for 100% more IPC. If you are going to aim big, shoot for the stars; at least you will hit the moon. In reality, I think you are honestly pretty close with this prediction; however, I suspect it is more pulling numbers from a bodily orifice than actually having anything concrete. Time will tell, and we shall see...
 

juanrga

Distinguished
BANNED
Mar 19, 2013


I didn't mention 8K, you did. Most gamers will play at sub-4K resolutions, not at 8K.

Also, an extrapolation requires at least two points, in this case 1080p and 4K. You cannot extrapolate using only 4K, as you claim. Check some basic math:

http://en.wikipedia.org/wiki/Extrapolation#Linear_extrapolation
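For anyone who wants to see what that means in practice, here is a minimal sketch of linear extrapolation applied to the resolution argument. The frame rates are hypothetical placeholders, not benchmark data, and frame time is extrapolated rather than FPS because it scales more nearly linearly with pixel count:

```python
# Minimal sketch of linear extrapolation (see the Wikipedia link above).
# The measurements below are hypothetical placeholders, not real benchmarks;
# the point is only that two data points are needed before extrapolating.

def linear_extrapolate(x1, y1, x2, y2, x):
    """Extend the straight line through (x1, y1) and (x2, y2) to x."""
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1)

# Resolutions expressed as pixel counts.
px_1080p = 1920 * 1080
px_4k = 3840 * 2160
px_8k = 7680 * 4320

# Hypothetical frame times in milliseconds (frame time tends to scale more
# linearly with pixel count than frame rate does).
ms_1080p = 1000 / 90   # ~11.1 ms per frame at 1080p
ms_4k = 1000 / 35      # ~28.6 ms per frame at 4K

ms_8k = linear_extrapolate(px_1080p, ms_1080p, px_4k, ms_4k, px_8k)
print(f"Extrapolated 8K frame time: {ms_8k:.1f} ms (~{1000 / ms_8k:.1f} fps)")
```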



Nope. 40% implies it will be behind Haswell and far from Skylake. Note that this 40% is only for integer; the FP gap will be huge.



This is all wrong. First, increasing IPC by 40% over Piledriver, starting from scratch, is feasible (but hard) because the Piledriver arch is a complete disaster and because a blank piece of paper means the engineers have no backward-compatibility compromises. You confound that with increasing IPC by evolving/tweaking an existing architecture such as SB.

In fact, whereas improving 40% over Piledriver starting from scratch is a feasible target, AMD is only improving 5--15% per gen by evolving/tweaking Piledriver. Check Steamroller and Excavator: Excavator brings a mere 5% over Steamroller.

The same comments apply to Intel: improving 5--10% per gen by evolving/tweaking SB is just as expected, but with one remark. SB is actually one of the best archs in the world (only IBM can compete), and adding 5% on top of Haswell is much more complex and difficult than adding 5% on top of Steamroller.

100% over Piledriver is pure nonsense, even starting a new design from scratch. Some day I will name the engineer behind the hype. I guess you already know who he is.
 

etayorius

Honorable
Jan 17, 2013
Well, I don't know; Haswell is already about 60% ahead of Piledriver in IPC... if we take into account 5% more from Broadwell and 5% (minimum) more from Skylake, that is a good 70% ahead of Piledriver. I honestly don't think AMD can manage 100% more in one generation; I seriously don't buy it.

40% is good and reasonable; I would definitely buy a Zen system if it were out right now, considering AMD's prices seem to be cheaper most of the time compared to damn Intel. Sadly, 2016 is way too late (at least for me, and I have been waiting since 2011).

Every little gain in IPC helps AMD, and I really hope they manage at least 40%, because anything less would just be another fail, and I doubt AMD can survive another Faildozer.

I really want to see AMD succeed; I definitely do not want to see them go or turn into VIA, but the state of their current CPUs is honestly shameful. I can understand people who need more than 2 cores using them, but as a gamer I see little need for their Piledriver CPUs. Also, games are barely starting to use 4 cores; not even Mantle can take advantage of more than 4, as we've seen in benchmarks with the 8-core i7. I was expecting way more performance from that CPU, but all the gains in FPS were due to the super fast IPC rather than the 8 cores with HT.
 

8350rocks

Distinguished


What is everyone smoking to claim a 60% IPC lead for Intel? Seriously...are you all from Amsterdam, Colorado, or Washington?

http://www.tomshardware.com/charts/cpu-charts-2013/compare,3143.html?prod%5B6234%5D=on&prod%5B5877%5D=on&prod%5B5755%5D=on

There are a few edge cases where the 4770k is about 20-25% faster than the 8350 at stock clocks for both CPUs.

However, that is not 60%, nor is it anywhere near that.

Clock for clock is irrelevant as well...because Intel puts a processor out where it is comfortable being compared, and so does AMD. If you want to talk about overclocking something, compare both overclocked, or both stock. Otherwise, you are comparing apples and oranges. As it sits...we are looking at a 25% difference between Haswell and FX at best.

Now, if we assume 15% for Broadwell and Skylake combined...

25 + 15 = 40%

Zen improvements = 40%

Hmm...interesting...who said they would be close when Zen launches?
 

blackkstar

Honorable
Sep 30, 2012


I think you need to check your math, friend. A 10% increase at each step from BD -> PD -> SR -> EX compounds to a 33.1% increase in performance; a 15% increase per step is a 52% IPC increase from BD to EX.

If you assume EX has 15% IPC over SR and SR has 15% over PD, then EX has 32% better IPC than PD. So really, I don't think 40% over PD with Zen is anything exciting. In fact, it wouldn't be much better than EX.
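(For reference, a quick sketch of how those per-generation figures compound; the percentages are simply the ones being discussed here, treated as illustrative inputs.)

```python
# Quick sketch of compounding per-generation IPC gains. The percentages are
# the illustrative figures from this discussion, not measured data.

def compound(gains):
    """Multiply out successive fractional gains, e.g. [0.10, 0.10] -> ~0.21."""
    total = 1.0
    for g in gains:
        total *= 1.0 + g
    return total - 1.0

print(f"{compound([0.10, 0.10, 0.10]):.1%}")  # three 10% steps (BD->PD->SR->EX): ~33.1%
print(f"{compound([0.15, 0.15, 0.15]):.1%}")  # three 15% steps: ~52.1%
print(f"{compound([0.15, 0.15]):.1%}")        # two 15% steps (PD->SR->EX): ~32.3%
```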

You did it again, Juan, you disproved another AMD product with your marvellous mathematical abilities. Someone give this man a Nobel for his brilliant math already!
 

COLGeek

Cybernaut
Moderator
Civility, please. No need to get snippy with one another. Some of you need to take a deep breath and relax a bit before continuing to engage other members in the thread.

Attack positions, not each other. Thank you.
 


I think it depends on what type of application. In well-threaded apps the difference isn't big (actually I think the FX chips are great value in that context). The issue is single thread, and in those situations the gap can be pretty big (60% isn't out of the realm of possibility; I'd have to look at some benches, although I'll admit a gap that large *is worst case*).

You mentioned anyone actually having issues with AMD kit; well, I'm an AMD fan (I have lots of AMD-based machines both at home and at work; AMD's cheap multi-core procs are excellent for CPU-based ray-trace rendering, which is something I do a lot of). I do have an issue with my home rig however, which is an FX 8320 + R9 280 setup. The CPU really holds that graphics card back in one game in particular (Planetary Annihilation) compared to an Intel-based rig, to the point where I've hardly noticed an improvement from the Phenom II X3 710 I upgraded from (which is only clocked at 2.6 GHz). The game in question actually appears to perform comparatively better on Phenom II, which leads me to believe there is something in the coding that doesn't suit FX at all. Plenty of videos of casts and such have the game running totally smooth (these are games *I've been playing in*) where I'm getting 12 fps and the GPU usage is really low (~40%). I've checked everything and there are no specific problems with the machine (it can complete stability tests and 3DMark, and runs other games fine). I've even tried the game under Linux, as I was concerned Windows thread management was causing an issue, however the performance is pretty much the same.

The sad thing is that during development, the developers reached out to AMD, Intel and nVidia for help, as this was a new engine developed from scratch. Out of the 3 companies, only nVidia actually helped, to the level of sending out a software engineer to assist with the rendering engine. That said, they've got the game running pretty well on Radeon cards, although I'm sure AMD could have advised them how to eke a bit more oomph out of their kit had they responded. I think to some extent AMD are kinda their own worst enemies. Now I'm left in the position of contemplating getting a Haswell i3, as it will provide a more consistent basis for gaming. I don't care about benchmark results, however I would have expected the FX CPU to be able to keep up well enough to not totally stall out like this.

This is the problem: a difference of 10 fps when a game is already running at 60+ fps means nothing. A difference of 10 fps when you're running at 12 fps in a large-scale RTS like this means the difference between winning and losing. Essentially I can't compete in large team games on this hardware, whereas if I had an Intel rig I would be able to :/

I got the FX due to the fact it was a drop-in upgrade (maintaining the platform like this is something I think AMD has got very right, btw), so it's not such a bad thing. It's just frustrating, as I really don't have a better option from AMD, so my only choices now are a: live with it or b: go Intel.

AMD can't get that new uarch out soon enough imo.
 

8350rocks

Distinguished
Have you tried overclocking? I have a friend running an old FX-6100, and over the weekend he complained about how it was not competitive. He was still running stock clocks at 3.3 GHz. I set him up with an aftermarket cooler and 2 aftermarket fans, and gave him a simple multiplier bump to an OC @ 4.0 GHz.

He literally gained 20 FPS in a game based on CryEngine 3 after spending $40 for hardware and $10 for shipping.

Good luck.
 

etayorius

Honorable
Jan 17, 2013


Then I have no idea which benchmarks you are looking at:

http://www.anandtech.com/bench/product/1260?vs=1289

http://cpuboss.com/cpus/Intel-Core-i7-4790K-vs-AMD-FX-9590

I don't know, man, did you check the link you posted? Because your own link's benchmarks don't agree with your claims.

Cinebench 11.5 reports the FX-9590's single-thread score at roughly 1.1 while the 4790K scores 1.74, which is about 60% more IPC, pretty much in line with what CPUBoss and Anandtech are reporting.
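(As a rough sanity check on that arithmetic, here is the same comparison spelled out; the FX score is an approximation, not a fresh measurement.)

```python
# Single-thread gap implied by the Cinebench R11.5 scores quoted above.
# The FX-9590 figure is approximate; the 4790K figure is as quoted.

fx_9590_st = 1.1
i7_4790k_st = 1.74

gap = i7_4790k_st / fx_9590_st - 1.0
print(f"4790K single-thread advantage: ~{gap:.0%}")  # ~58%
```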

May want to check this one out too:

https://www.dropbox.com/s/py1mvuofeihf4sq/xd.jpg?dl=0


We've got to be honest: AMD is, as of now, about 60% behind in IPC... It kinda sucks, because Intel takes advantage of everyone by charging super premium prices, but that is just how things are now... AMD CPUs get the job done, but Intel CPUs just wipe the floor with anything AMD has to offer.

I do gaming, and even in gaming I've seen my older brother's 4670K smoke my younger brother's FX-8320 OCed to 4.4 GHz by almost double the FPS in FIFA 15, NFS Most Wanted, Battlefield 4 and basically any game; the advantage is real and it is there on Intel's side.

My older brother has an i5 4670K coupled with a GTX 750 Ti while my younger brother has an FX-8320 coupled with a GTX 650 Ti Boost; even though the 650 Ti Boost is faster than the 750 Ti, I am seeing much higher minimum FPS on my older brother's PC with the titles they both play. They are both mostly casual PC gamers, and I consider myself a hardcore PC gamer, yet my PC is worse than theirs... shit happens.
 

juanrga

Distinguished
BANNED
Mar 19, 2013


Your numbers are plain wrong. Haswell is about ~60% ahead of Piledriver per core, plus there is another ~20% from the CMT module penalty. This is the reason why AMD throws 8 cores @ 4GHz against Intel quad cores @ 3.5GHz...

Even if Zen adds a good 40% on top of Piledriver, it will still be behind Haswell.



Your post is full of mistakes.

First, I did start from Piledriver; you started from Bulldozer.

Second, you are assuming that you can add 15% to each gen. E.g. you assume Excavator is 15% faster than Steamroller, but your numbers are divorced from reality, because Excavator is only 5% better than Steamroller.

Third, the mentioned 40% over Piledriver for Zen is per core. Steamroller's 15% IPC gain was only for multithreaded workloads, thanks to doubling the decoders and eliminating most of the module penalty in Piledriver. You are mixing multithreaded numbers for modules with single-thread numbers per core. Another mistake.

Steamroller's increase was only 0--5% per core. Excavator brings another 5% IPC gain. Thus Excavator is about 5--10% faster than Piledriver per core.

That 5--10% per core is very far from the improvement that AMD needs to be competitive, which is why Keller is designing Zen from scratch. The Bulldozer family ends with Excavator.
 

juanrga

Distinguished
BANNED
Mar 19, 2013


Yes, Haswell is about 60% faster than Piledriver clock for clock. But this 60% is per core and only for integer workloads.

Since Zen is being designed for servers and will first appear on servers, it is good to compare server workloads between Intel and AMD:

http://spec.org/cpu2006/results/res2014q3/cpu2006-20140715-30431.html
http://spec.org/cpu2006/results/res2014q3/cpu2006-20140728-30675.html

Haswell quad-core @3.4GHz scores 59.6
Steamroller quad-core @3.7GHz scores 30.1

Haswell was ~2x faster than Steamroller in this server benchmark.
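(A small sketch normalizing those quoted SPEC scores per clock, since the two parts run at different frequencies; the scores and clocks are simply the figures cited above.)

```python
# Normalize the quoted SPEC scores per clock so the different frequencies
# (3.4 GHz vs 3.7 GHz) don't muddy the comparison.

haswell = {"score": 59.6, "ghz": 3.4}
steamroller = {"score": 30.1, "ghz": 3.7}

raw_ratio = haswell["score"] / steamroller["score"]
per_clock_ratio = (haswell["score"] / haswell["ghz"]) / (
    steamroller["score"] / steamroller["ghz"]
)

print(f"Raw ratio: {raw_ratio:.2f}x")                    # ~1.98x
print(f"Clock-for-clock ratio: {per_clock_ratio:.2f}x")  # ~2.15x
```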

About Skylake, the first leaked benchmarks suggest that it is 20% faster than Haswell on integer.
 