AMD CPU speculation... and expert conjecture

Page 261

crimson87

Honorable
Oct 6, 2012
53
0
10,630
Hi, this thread has been quite a good read so far. I've heard the news that AMD is dumping their FX line of processors and will focus only on APUs. That would leave a huge gap in the market, and I do not want Intel to be the only vendor of high-end consumer processors.
I just don't see the point in high-end APUs: the A4 and A6 are just fine for media streaming, laptops, HTPCs, etc., but the A8 and especially the A10 are overkill for streaming yet too weak for gaming, and they run somewhat hot.
We just can't rely on APUs for gaming; their processing power isn't enough to drive SLI setups either. What's going to happen to the power user / high-end gamer? If AMD does not release any proper CPUs, PC gaming will be dead and Intel will have a monopoly on the market.
 


We can't tell as of now; AMD will reveal their lineup in October, IIRC. There will likely be a Piledriver FX refresh (à la the Warsaw server CPU) to compete with the Haswell refresh, and Steamroller FX will likely come out late next year.
 

8350rocks

Distinguished


Something is coming... they're just being tight-lipped about it.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


It was stated, clear as mud on a brick wall, that only one tech site even posted the "rumor", and it came from a person who hasn't been employed by AMD for nearly an entire year. But hey, it was stated 1.5 years ago, so it must still be true today.

I don't share your enthusiasm about Kaveri gaining 50+% to catch the FX-6350, especially if it's on bulk; even the article you referenced said bulk was good for 2+ GHz while SOI is good for 4+ GHz.

The PS4 and Xbox One are using an 8-core design, but AMD is only shipping quad-core desktop CPUs from now on? Something there alone doesn't make any sense.
 

griptwister

Distinguished
Oct 7, 2012
1,437
0
19,460

Krnt

Distinguished
Dec 31, 2009
173
0
18,760
http://www.techpowerup.com/189362/only-intel-machines-affected-by-windows-8-rtc-bug.html

Seems like AMD-equipped Windows 8 PCs are not affected by the RTC bug.



That's nice news, but better to link the complete article:
http://www.technologyreview.com/view/518426/how-to-save-the-troubled-graphene-transistor/
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


For one thing, no one can master bulk better than Intel... and they are at 22nm FinFET... yet still below 4 GHz.

AFAIK, the only 28nm SOI that GloFo officially has to offer is low-power 28nm FD-SOI. 28nm PD-SOI, the same kind of SOI as at 32nm, is a rumor, a possibility, but not likely: it's EOL, and even IBM's next process will be 22nm FD-SOI, even for their speedy HPC monster chips.

Almost everything planned before the new CEO, Rory Read, could have changed direction, which makes past rumors moot to say the least. The new discipline is tight lips alright, and it shows.

The differences in speed are not only due to the fab process; a good bulk process could be 30% slower than a good SOI process. But the BD/PD uarch is not P4, as many have slandered; it in fact has a shorter pipeline than SNB/IB and HSW too, I think... only 15 stages... and it didn't start out with a good SOI process. I see good possibilities of lengthening the pipeline a bit to gain some more clock headroom, so what it loses from bulk it could gain back from design, with the clock differences not being that notable (small, in fact).

Then FD-SOI... if a fab 2 model is essentially what AMD paid for in 2012 (~$700 million) to get out of its GloFo positions, then an "AMD FD-SOI" to be fabbed at GloFo might include good booster tech for FD-SOI... I'm not saying the sky is the limit, but it's very encouraging considering IBM is also on it for its next high-speed nodes.

 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


Besides a band gap for graphene (so it can act like a switch), which was already solved some time ago, and this is perhaps only a better method, there are many other huge "technological walls" to overcome before we see a chip totally based on graphene. And there is also a huge "cost ($) wall" to overcome.

There is also a huge "cost ($) wall" (though a lower one) to overcome for III-V materials. There are already commercial chips made of gallium arsenide (GaAs) and gallium nitride (GaN)... power chips, radar microcontrollers, etc... but they are still too expensive for general-purpose CPUs.

When these barriers lower we might see chips well over 10 GHz... the possibility is foreseen for the 7nm nodes... meanwhile, for the next 5 to 7 years it will be "normal CMOS": FD-SOI and FinFET on bulk or on SOI wafers.

 
I think I may have missed it, but say Kaveri's core architecture is at least 30% faster and more efficient than the Vishera-class cores shared by the APUs and FX alike. 30% faster, despite having only 4 cores against the 8 on the 8350, should give closer-to-equal performance in 4-core workloads. All the 8350 would have left as an advantage is its 4 extra cores, but ultimately Kaveri's 4 cores will be stronger.
 
I read (probably on agner.org while looking into how the "cripple AMD" function works) that Intel's AVX was better than AMD's SSE5; AVX gained more industry-wide popularity and forced AMD to turn the remnants of SSE5 into XOP. The FMA4-to-FMA3 flip-flop was Intel's fault, IMO.

I think one of the more obvious issues with Zambezi was Turbo (and its associated components). Benches and reviews frequently mention the top Turbo clock rate, but the reality was that Zambezi, and likely Vishera, didn't hit those rates all the time, unlike Intel's Core i5 and higher CPUs, which could consistently run above their base clock rate. Amusingly, even a lot of AMD-friendly sites list Intel's base clock rate in their benches.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790



I agree with most of what you say. As you know well, I have always maintained that the Piledriver FX chips are much better than they look under Windows, due to software inefficiencies. Modern games are starting to show the real performance of those chips.

I only want to add two things. First, one should not lump the typically inefficient, outdated, and bloated Windows software together with Linux and others. There are exceptions on each side, but they are rare and, as you know, AMD performs much better under Linux, where it can catch Intel easily.

One poster here described how compiling an application for his FX-8350 under Linux gave him 2x the performance, surpassing his colleague's 3960X (or was it a 3930K?) under Windows. I repeat: 2x.

I hope HSA will show its real potential under Linux, whereas my Windows colleagues will still be using a lot of outdated software (often compiled for i386) for years. But even regarding Windows, there is hope that this time things will look better for AMD. Just read this:

http://www.theinquirer.net/inquirer/opinion/2253532/amd-looks-to-hsa-foundation-to-avoid-amd64-mistakes

Relevant part is this:
AMD is pushing the HSA Foundation a year before its first HSA chip will arrive, which is now slated for the tail end of 2013, meaning there should be software that takes advantage of AMD's latest chips around launch time.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I think you have not answered my main points: bulk vs. SOI? Carrizo at 20nm? Q4 2013 vs. H1 2014?

I don't know what you mean by "only one tech site even posted the 'rumor' by a person that's not even employed with AMD for nearly an entire year."

I didn't say that Kaveri will catch the FX-6350. I said that its CPU would be behind the FX-6300.

The Softpedia article (Apr 2012) makes that claim about frequencies. However, the PC Watch article (July 2013) says that GloFo spent nearly an entire year mastering high-performance bulk for Kaveri. Moreover, GloFo now claims that its high-performance bulk 28nm process can scale above 3 GHz.

As far as I know, AMD offered Sony both 4- and 8-core designs and Sony chose the 8-core. Moreover:

- AMD is replacing 8-core Opterons with the 4-core Berlin.
- Warsaw will use Piledriver => there are no 6/8-core Steamroller modules.
- Kaveri comes as 2/4-core.

Does it make sense to you that AMD would release a 6/8-core Steamroller FX or similar chip exclusively for the desktop? Not to me.
 


Kaveri uses the same uArch that everything else does, the modular BD one. These are just successive revisions of that uArch as AMD tweaks it and tries to perfect it. APUs have "four cores" because AMD is using the other (roughly) half of the die to hold an iGPU; whack off that iGPU and you can insert more modules / L3 cache. BD is a modular uArch, designed for exactly that kind of easy modification. They could even go further and slim the iGPU down into just an overgrown vector co-processor while adding a third module (à la the 6xxx series). Lots of possibilities, depending on what they think the market wants.

AMD needs to be very careful here, because one of the few desktop spaces they're really good at / dominate in is large multi-threaded applications, where the alternative is a significantly more expensive Intel CPU. Abandoning that segment by not producing a six-to-eight-core CPU would be a very bad move. They also need to come up with a dual-socket-friendly implementation; it's extremely niche, but there is a market for massive core counts on the desktop. Their bread and butter is definitely the budget four-core APU market priced at $150 or less, seconded by the gaming segment that is about to appear.
 




http://spectrum.ieee.org/nanoclast/semiconductors/nanotechnology/a-simple-twist-changes-graphenes-fate

The band-gap problem still exists, and graphene is losing interest as a result. Other alternatives are also being researched, some of which are more promising implementation-wise:

http://spectrum.ieee.org/semiconductors/materials/graphene-gets-some-competition
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
Except that AMD has the data, and it shows that APU sales surpass 6/8-core FX sales, which is a niche market. Moreover, everyone (Intel, AMD, Nvidia) is moving to mobile as their main market, and 6/8-core CPUs make little sense there.

Add in all the previous discussion about the HSA paradigm, the server roadmaps (no 6/8-core CPUs)... and the possibility of 6/8-core CPUs for the desktop drops to highly unlikely.
 
I agree with most of what you say. As you know well, I have always maintained that the Piledriver FX chips are much better than they look under Windows, due to software inefficiencies. Modern games are starting to show the real performance of those chips.

Which "modern games"? Crysis, sure, but that is largely due to offloading from the GPU (which I again say was an idiotic decision that did nothing but reduce framerates overall). And what "software inefficiencies" are you referring too?

I only want to add two things. First, one should not lump the typically inefficient, outdated, and bloated Windows software together with Linux and others. There are exceptions on each side, but they are rare and, as you know, AMD performs much better under Linux, where it can catch Intel easily.

1: What about software that can be compiled for both Windows and Linux?

2: According to the last round of benchmarks I looked at, Intel maintains its lead even on Linux, except in "naturally threaded" workloads (rendering, encoding, etc.) that it also loses in Windows.

One poster here described how compiling an application for his FX-8350 under Linux gave him 2x the performance, surpassing his colleague's 3960X (or was it a 3930K?) under Windows. I repeat: 2x.

And guess what? Some applications are going to favor FX. Encoding, rendering, and anything that works on datasets you can reduce into many small chunks will favor FX over IB. So yes, there are probably a few benchmarks out there where FX will crush IB. There are also many more where the reverse is true.

I also note: I do find it amusing how the FX-4xxx versus i5 benchmarks never seem to get noticed. The only competitive chips AMD has are the FX-6xxx and FX-8xxx chips, which are never going to be significant moneymakers (low sales compared to cheaper chips; OEM > enthusiast when it comes to sales).

I hope HSA will show its real potential under Linux, whereas my Windows colleagues will still be using a lot of outdated software (often compiled for i386) for years. But even regarding Windows, there is hope that this time things will look better for AMD.

Funny, even ICC compiles to SSE2 for AMD by default. And gains beyond SSE2 tend to be minimal (and sometimes, negative).

Just read this:

http://www.theinquirer.net/inquirer/opinion/2253532/amd-looks-to-hsa-foundation-to-avoid-amd64-mistakes

Relevant part is this:
AMD is pushing the HSA Foundation a year before its first HSA chip will arrive, which is now slated for the tail end of 2013, meaning there should be software that takes advantage of AMD's latest chips around launch time.

And again: aside from a few benchmark programs sponsored by AMD, who is going to use it when OpenCL/CUDA achieves the exact same thing, but also works for discrete GPUs on an Intel platform? It's the same reason I held that AVX was being overblown prior to the SB launch: until compilers drop the code in automatically, it's rare for developers to go that low-level with their coding outside of a handful of programs.
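For context on what "going that low level" means in practice, here is a minimal sketch in plain C contrasting an ordinary scalar loop with a hand-written AVX version using intrinsics. The function names (add_scalar, add_avx) are purely illustrative, not from any particular program.

#include <immintrin.h>  /* AVX intrinsics */

/* What most application code looks like: a plain scalar loop
   that the compiler may or may not auto-vectorize. */
void add_scalar(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* The hand-tuned alternative: process 8 floats per iteration
   with 256-bit AVX registers, plus a scalar tail for leftovers. */
void add_avx(const float *a, const float *b, float *out, int n)
{
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; i++)
        out[i] = a[i] + b[i];
}

The second version only pays off when the build is compiled with AVX enabled (e.g. -mavx on GCC) and the hardware supports it, which is exactly the per-program effort being argued about above.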
 

Yes. AMD's advantage, however, is related to price point. Since the 6-core FX is around Core i3 price now, the Core i3 is a bad buy. Compared to a Core i3, FX has a solid advantage now.

Mainly due to the vast difference between prices.
I said earlier that FX is useless without some kind of GPU; that may have been one of the reasons. The configurations of a lot of FX-based PCs reflect that. For example, Asus uses older 760- and 880-type chipsets (on AM3+ motherboards) to build some of their PCs, some use a Radeon HD 5450, and so on. FX was not OEM-friendly. OTOH, APUs (all of them) were very OEM-friendly due to their integrated nature. Amusingly, after I ranted on how FX is useless without discrete graphics, a 980G chipset for the AM3+ socket (I think a recycled 880G) showed up on AMD's website. :lol:
http://www.amd.com/us/products/desktop/chipsets/9-series-integrated/Pages/amd-980g-chipset.aspx
I assume... the other reasons could be the high 125W TDP (while <65W APUs exist), the necessity of robust cooling, etc. These are more like excuses where OEMs couldn't cut corners than actual reasons. The biggest reason could be Intel's kickbacks. :p


 


Your first statement agreed with what I said.

AMD needs to be very careful here, because one of the few desktop spaces they're really good at / dominate in is large multi-threaded applications, where the alternative is a significantly more expensive Intel CPU. Abandoning that segment by not producing a six-to-eight-core CPU would be a very bad move. They also need to come up with a dual-socket-friendly implementation; it's extremely niche, but there is a market for massive core counts on the desktop. Their bread and butter is definitely the budget four-core APU market priced at $150 or less, seconded by the gaming segment that is about to appear.

Thing is, APUs instantly lose their value the moment you transition to medium-sized (not Mini-ITX) desktop systems, as there are cheap dGPUs that are better for the task. The iGPU on those chips takes up a very large portion of the die; in a desktop system that portion is just wasted space, and their value advantage goes down the sh!tter. This is why they absolutely need at least a six-core (three-module) solution, or hasn't anyone noticed how much value is in the FX-6xxx series? Also, AMD is now trying to enter the gaming segment, and being the native uArch of the two most popular next-gen consoles is a gigantic advantage. Gaming on any serious level requires a dGPU; APUs are fine for mobile or SFF boxes, but they fail miserably once you ratchet up the specs (they're just not designed for that market segment). So AMD is going to need a CPU to pair up with a 760/770/780 or even some future 8xxx-class dGPU, and an iGPU would be a waste of die space and thus lost potential performance.

So while we can debate the need for a successor to the FX-8350 (I personally need one), it's very clear that they will need a successor to the FX-6350, and that successor won't have much use for an iGPU.
 

8350rocks

Distinguished


This is entirely my point.

For the people who need the raw performance, a 2-module Kaveri will still not replace the 8350. While a 2-module Kaveri would likely push a dGPU better than any current APU by a large margin... that does not change the fact that a 2-module APU is still a suboptimal choice to pair with a high-end graphics card.
 

They may have planned to make one at some point, back when they were deluding themselves about BD's success. I think there was a dual penta-module Opteron... those 'big ones' got cancelled, AFAIK.

Edit: er... AMD may be able to make a 20-core CPU. With their current uArch it could be hard to keep power and heat under control, and yields could also be low. IIRC AMD couldn't supply the FX-8150 for a long time. Right now, though, their focus seems to have shifted.
 

8350rocks

Distinguished


Yes, the most cores you can get in an Opteron is 16...
 

8350rocks

Distinguished


They already beat Intel in power consumption...

They're called Kabini and Temash; Intel has no answer for those.

As for desktops...well...

No one cares about power consumption on desktops...the difference is so negligible that it really doesn't matter. Desktops are about raw performance, and so power consumption doesn't come into play at all.

In fact...

If you are buying a PC and are concerned about the extra $0.57 per month it costs you to run an AMD, then you don't need to be buying a desktop PC at all...buy a tablet instead and see if you can get it to run Crysis 3 while it draws virtually no power. Otherwise power consumption is not a valid argument over desktop CPUs!

EDIT: Your air conditioner uses more power in 1 day than your CPU would in an entire year...do you turn off your A/C at home to save power? I bet not...if you do...kudos to you, move to a tropical climate in August and see if you can get away with that when it's 95+ degrees outside.
 

kebbz

Honorable
Jul 27, 2012
212
0
10,680


Wow, good point. I've got a question:
How do you calculate how much you'd spend per month using the PC for 12 hours a day? I'm getting a headache doing this.
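A rough way to estimate it: watts × hours ÷ 1000 gives kWh, and kWh × your electricity rate gives the cost. A minimal sketch in C; the wattage difference, days per month, and price per kWh below are assumptions you would swap for your own numbers.

#include <stdio.h>

int main(void)
{
    /* Assumed figures -- replace with your own CPU's extra draw and your utility rate. */
    double extra_watts    = 40.0;   /* assumed difference in average power draw, in watts */
    double hours_per_day  = 12.0;   /* usage pattern from the question above */
    double days_per_month = 30.0;
    double price_per_kwh  = 0.12;   /* assumed electricity price, USD per kWh */

    double kwh_per_month  = extra_watts * hours_per_day * days_per_month / 1000.0;
    double cost_per_month = kwh_per_month * price_per_kwh;

    printf("%.1f kWh per month -> $%.2f per month\n", kwh_per_month, cost_per_month);
    return 0;
}

With these assumed numbers it works out to about 14.4 kWh and roughly $1.73 a month for the extra draw.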
 