AMD CPU speculation... and expert conjecture



It is marketing 101 that you do not promote one product at the expense of another of your own products, unless you are planning to abandon those products (i.e. the FX-6350 and FX-8350). Why do you think the usual benchmarketing uses products from the competition?



Evidently the chart is not from AMD. It is a chart made from supposedly leaked benchmarks, and those can mean lots of things. They could also mean that the improvements in the front-end, L1 cache, integer execution units, and memory controller make a Steamroller without L3 cache significantly faster than an FX with L3 cache.

What I find interesting is that the leaked increase in performance falls within the range I mentioned in my article [20%–40%].



This is another leaked benchmark. It is not an article and nobody is going to give test configs :sarcastic:

I don't know what you mean by "memory bandwidth being incorrectly mentioned".
 

they're not doing that. considering the a10-6790's price and performance, both the 6- and 8-core fx will remain in their places.
please don't make assumptions (and wrong ones at that) about my beliefs.

i don't know how the bw was measured, but iris pro (gt3e) has a peak aggregate bw of ~100 GB/s from its edram, plus ~25 GB/s peak ddr3 bw (ddr3-1600). the 8670d's bw numbers are higher in reality as well. the cryptography bench never mentions what algorithm was being run. for example, both the core i5 and the a10 apus have aes hardware acceleration enabled.
 


In their talk they were telling OEMs that the new APU gives >90% of the FX-8350's performance at a fraction of the cost. That is what slide #13 says. That doesn't look like promoting the FX line. The FX-4350 wasn't even mentioned by AMD.

I see two options: either (i) AMD abandons the FX-4000 and refreshes the FX-6000/8000/9000 a la Warsaw, or (ii) AMD drops the entire FX line completely and focuses on an APU-only lineup.

I am not assuming anything, I asked you a specific question.



You don't have to worry about the absolute numbers. Different tests and different setups give different absolute values; Richland with 1600 MHz memory gives rather more than 7.6 GB/s in Sandra, for instance. What you have to look at are the relative values, i.e. how each processor compares to another. Kaveri's bandwidth was ~100% more than Richland's and ~50% more than Haswell Iris Pro's, if the leak was legit.

Finally, the AES extensions are irrelevant. The cryptographic code was run on the GPUs. That is why you see the names of the iGPUs mentioned for each value, and not the names of the CPUs.
 

talk? there's an audio version? i didn't listen to that. if amd said that the new apu (6790k) gives >90% of the fx-8350's performance, they were intentionally being vague (>90% of what? in which tasks?) to pitch the 6790k. benchmarketing at play, and largely irrelevant, since independent reviews will reflect the real-world perf/price.

i mentioned just a couple of posts ago that they were promoting the 6790k (i.e. not fx). that's what the apu part of the slideshow says, filled more with advertising than with facts.

the slide never mentioned exact fps, so how did you deduce the numbers needed to calculate the percentage? those bars just show 'greater than' values.
how did you measure this 'fraction of the cost' from slide #13? the apu has a $130 msrp iirc, while the fx-8350 sells for $200 - that's far more than a "fraction". the bf4 bench was done with a discrete card, so the apu's igpu is not a factor.
above all, since when is a single, very gpu-bound bf4 single-player run representative of "new APU gives >90% of the FX-8350 performance at one fraction of the cost" (even though that claim is clearly miscalculated and misinterpreted)?

you have misinterpreted 'slide #13'. you're reading too much into it, exacerbated by your own misinterpretation.


you see those options based on a single promo slide from an apu promo slideshow? even if the cpus get phased out, your basis is just wrong.
as for the answer to your question: you don't know what i believe.

problem is, the relative values are useless if the baseline is wrong and unclear. kaveri cannot simply match iris pro's edram bw without its own embedded memory; regular dual-channel ddr3 won't give 100 GB/s of bw. that's why i question its legitimacy. hopefully, more credible benches will come out as the real launch gradually gets closer.

the reason i mentioned aes acceleration was because of hsa/huma. maybe it didn't work that way. anyway, the crypto bench doesn't line up with the gpu-accelerated figures i've seen elsewhere with current gpus.
 


iceclock, that's a name I haven't seen in a while!

And yeah... I too am hoping for that...
 


This is my commentary on this:

1.) Slide #13 is marketing hype under an isolated circumstance in a very controlled environment. There are already some applications where the A10-5800K Trinity delivers >90% of the performance of any FX CPU, specifically those that are not well threaded and don't end up with many mispredicted code branches. That slide is as valuable to us as pyrite is to a gold miner... (i.e. not at all important, and likely misleading)

2.) The second portion has no system specs, and uses parts downclocked massively from FX. If I take anything useful away from it, it's that the fact that they're not comparing stock-clocked parts is a bit disconcerting. I feel that bodes poorly for future clockspeeds. I'm not saying Kaveri will ship at 1.8 GHz, though I am curious why they didn't compare them @ 4.0 GHz. Likely because there will almost certainly not be any 4.0 GHz Kaveri parts. Which means a 20% performance gain per clock would be nearly entirely negated by FX's ~13% clockspeed advantage if Kaveri parts ship @ 3.5 GHz, as you claim the current ES's run at. (As a side note, why no comparisons using the higher-clocked ES's if they're available?)
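A rough back-of-the-envelope sketch of that trade-off (the 20% per-clock gain and the 3.5 vs 4.0 GHz clocks are the assumptions from the paragraph above, not confirmed specs):

```python
# Rough sketch: how much of a per-clock (IPC) gain survives a clock-speed deficit.
# All numbers are the assumptions discussed above, not official specs.
ipc_gain = 1.20          # assumed ~20% more work per clock for Steamroller
fx_clock = 4.0           # GHz, stock FX-8350
kaveri_clock = 3.5       # GHz, rumoured Kaveri clock

net_speedup = ipc_gain * (kaveri_clock / fx_clock)
print(f"Net speedup vs FX at stock clocks: {net_speedup:.2f}x")   # ~1.05x
print(f"FX clock advantage: {fx_clock / kaveri_clock - 1:.1%}")   # ~14%
```

In other words, under those assumptions most of the architectural gain would be eaten by the lower clock.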

3.) Notice the massive gains are in financial analysis, where the impact of HSA could feasibly be felt. However, the iGPU in Kaveri is pretty impressive; we already knew that. My concern is that the only areas shown are areas where GPU acceleration is possible. I think no significant CPU compute tests have been revealed because it won't be the world beater you're claiming. Sure, if performance goes up by ~20-40% per clock, that's a good architecture tweak. However, if the clockspeeds are confined to the low/mid 3 GHz range, you're losing all that architecture advantage because you're giving up clockspeed by nearly the same margin.

I still see AMD offering FX-type parts... maybe not right this moment... though I don't see any way that Steamroller in APU form is going to deliver the raw horsepower that FX can.
 


^^ bolded the relevant part.



And my reply was that usually one doesn't promote one product at the expense of another from the same company, unless the promoted product is going to replace the other.



Measuring the bars and using the axis scale.
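(For what it's worth, that kind of estimate is just a proportional read-off of the bar lengths against the axis scale; a minimal sketch with made-up pixel lengths and axis values, not the slide's actual numbers:)

```python
# Estimate values from a bar chart by proportion: bar length relative to a
# known axis reference. All pixel lengths and the axis value are hypothetical.
axis_ref_px = 400.0      # pixel length corresponding to the axis reference value
axis_ref_value = 60.0    # e.g. an fps value printed on the x-axis (placeholder)
apu_bar_px = 370.0       # measured length of the APU's bar (placeholder)
fx_bar_px = 400.0        # measured length of the FX-8350's bar (placeholder)

apu_fps = apu_bar_px / axis_ref_px * axis_ref_value
fx_fps = fx_bar_px / axis_ref_px * axis_ref_value
print(f"APU ≈ {apu_fps:.0f} fps, FX ≈ {fx_fps:.0f} fps, "
      f"ratio ≈ {apu_fps / fx_fps:.0%}")   # ≈ 93% with these placeholders
```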



Evidently the cost is not obtained from slide #13. Spending $70 less and losing only ~8% of the performance @ 1080p agrees with my concept of 'a fraction of the cost'.

I was assuming the iGPU was used together with the dGPU. If it was not used, then that only shifts the performance per dollar further in the APU's favor.
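As a rough illustration of that performance-per-dollar argument (using the $130/$200 prices and the ~8% gap quoted in this exchange, which are forum figures, not verified numbers):

```python
# Performance-per-dollar comparison under the numbers quoted in the thread.
apu_price, fx_price = 130.0, 200.0   # USD, as cited above
apu_relative_perf = 0.92             # claimed ~8% behind the FX-8350 at 1080p
fx_relative_perf = 1.00

apu_perf_per_dollar = apu_relative_perf / apu_price
fx_perf_per_dollar = fx_relative_perf / fx_price
print(f"APU perf/$ advantage: "
      f"{apu_perf_per_dollar / fx_perf_per_dollar - 1:.0%}")   # ~42% with these inputs
```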



In the first place, when I presented the slide for the first time, I said that the point was not to discuss whether the slide was representative, because the fact that AMD made such a slide is much more important. We can discuss whether the situation represented in that slide is the general case or only a special case, but that is unimportant. What matters is that the slide reflects AMD's intentions.

Nobody makes a slide like that and presents it to OEMs attending a talk if the goal is to sell lots of FX-6350s and 8350s. The slide was clearly made with the goal of emphasizing the APU and deemphasizing the FX line. Draw your own conclusions from that.



No, I didn't get that from the slide. I had already discussed options (i) and (ii) before the slide appeared. The slide simply agrees with those options.



The baseline is unimportant, because these are ES tests and the absolute scores don't agree with reviews of final silicon. Pay attention to the CPU benchmarks that I gave before: all of them were run @ 1.8 GHz. Therefore you cannot take the absolute score obtained by Kaveri and compare it to review articles of an FX-8350 @ 4 GHz; that makes no sense. What makes sense is to compare the score obtained by the Kaveri ES to the score obtained by the Piledriver FX ES @ 1.8 GHz. Then you obtain the percentages: Kaveri was ~30% faster than Piledriver clock for clock (if the leaked benchmark was legit).
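A minimal sketch of that same-clock comparison (the scores below are placeholders, not the actual leaked results):

```python
# Clock-for-clock comparison of two ES scores taken at the same frequency.
# Comparing an ES @ 1.8 GHz against a retail chip @ 4 GHz mixes clock speed
# into the result; same-clock ES vs ES isolates the per-clock (IPC) change.
kaveri_es_score = 130.0        # hypothetical benchmark score @ 1.8 GHz
piledriver_es_score = 100.0    # hypothetical benchmark score @ 1.8 GHz

ipc_gain = kaveri_es_score / piledriver_es_score - 1
print(f"Kaveri vs Piledriver, clock for clock: {ipc_gain:.0%} faster")  # 30% with these numbers
```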

Moreover, the bandwidth that the eDRAM can provide is unrelated to the effective bandwidth obtained in a real test whose working set doesn't fit in the eDRAM. There is a reason why ordinary reviews of Iris Pro only tested low resolutions.



The GP cryptographic test is coded in OpenCL and CUDA and run on the GPU. No AES, no HSA. Same answer as above: the absolute values don't matter.
 


1) We can agree that slide #13 is marketing hype under an isolated circumstance in a very controlled environment. But this again misses the main point: why did AMD make such a slide? It seems evident to me that the slide reveals AMD's plans for the future of the FX line. Nobody would show that slide to a bunch of OEMs if the goal was to sell millions of 6350s and 8350s.

2) This is a leaked benchmark of engineering samples, so of course there are no complete system specs. It is not using "parts downclocked massively from FX"; the benchmark compares an FX Bulldozer ES, an FX Piledriver ES, and a Kaveri ES. 1.8 GHz is a common ES frequency, from what I can see in other leaked benchmarks.

It makes little sense to leak comparisons of near-production ES at near-final frequencies. Not only would that ruin the marketing, it would also hurt relations with news/review sites.

3) The impact could be related to single- vs double-precision performance, or just to other specific aspects of the code. I don't know.
 


Well, I see it like this:

1.) OEMs don't typically even build with FX; boutique builders normally do, and those aren't companies with names like HP/Dell/Lenovo/etc. OEMs typically use AMD APUs in budget rigs where they can skimp on everything. Selling more of those is a good thing for AMD, but that's not the segment FX typically fills.

2.) I can agree about the commonality of that frequency for ES's. However, something higher clocked would have been better...I know for a fact that there are already benches for ES's @ 2.4 GHz for both uarches. Why not compare those instead?

3.) Indeed...I would say that the last tidbit (3rd image) about iGPU relative performance is likely anecdotal at best without software specifics.
 

orly. the slide only shows some red bars, some x-axis fps values (but never how many fps each bar represents), and a few cpu and apu names. nothing else.

there was no expense mentioned, and absolutely zero mention of any replacement and/or substitution. i read the whole slideshow, all the way down to the disclaimer slide where they admit it may contain misinformation.

i call b.s. on this. show the exact calculations.

8%? of what? is that even a real measurement? or did you grab that out of thin air? show the actual calculations.
your 'concept' (in reality, the lack of one) is baseless from the beginning. the percentage number is wrong, and you are basing overall performance/price on a single data point (i.e. the narrowest sample it can be): a gpu-bound offline single-player benchmark where cpu performance is not even a factor, and where the igpu, the biggest draw, is rendered irrelevant due to the use of a powerful gaming discrete card.

no. by default, the igpu disables itself whenever a discrete, non-'amd dual graphics'-capable card is used. since you pay extra for the igpu, making it useless (i.e. disabled) only decreases the apu's perf/price in the bf4 bench.

and amd's intention is to get oems interested in the 6790k, which is new, late in the refresh cycle (i.e. a silicon dump), and poor in price/perf. that's why the benchmarketing fluff was needed.
Nothing Else.

if you understand that amd's goal was to promote the apu then why are you even arguing? you got that part right. what you got utterly and absolutely wrong was that amd's goal was to sell 'lots of' fx6350 and fx8350 at the 6790k introduction event. this bit is self-explanatory.

did you read what you just wrote? "compare e.s. kaveri to e.s. pd" what the heck? i never compared the posted kaveri benchmark scores to retail fx8350 scores. comparing kaveri vs pd e.s. benchmarks is even less logical let alone at 1.8 ghz. i was discussing both the kaveri numbers and fx numbers from that bench.

the edram in iris pro acts as an L4 cache, it's far from being unrelated.

you mentioned earlier that no info other than the chart was given out. how did you know that the gp crypto test is coded in opencl and cuda? can't opencl and cuda run an aes algorithm? if hsa, the biggest feature of kaveri, was not even part of that bench, that makes the bench irrelevant.
 


are you serious?! please tell me you're not messing around.
 
"Fraction of the cost" is a useless marketing term. You know, 1000000/1 is a fraction too. Product X is a fraction of the cost of product Y but product X cost 1000000 times more. It means absolutely nothing.

 
Off topic somewhat:

http://www.tomshardware.com/reviews/geforce-gtx-780-ti-review-benchmarks,3663-18.html

Read this review yet?

Could this guy be any more of an NVidia fanboy?

NVidia's GTX 780Ti allows the fans to spin at higher RPMs generating more noise, but that's the price you pay for more performance.

That's for a card that runs at 48 dB at full load! Yet, in the 290X review, the 55 dB at full load (a mere 7 dB difference) was "intolerably loud, we could only run it in quiet mode"???

WTF! 55 dB is about the level of a normal conversation, 48 dB isn't much quieter than that at all...it's not like we're talking the difference between 48 and 70 dB or something absurd. There wasn't any coil whine or vibration from the 290X and it was "intolerably loud"?

That's completely BS!

We'll see what happens when the 290X GHz Toxic Edition comes...

/rant
 



My friend, I just had a brilliant idea 😛 Card distributors should strap reference PCBs with coolers intended for the 280X, or surplus ones from the 7970 GHz Edition, and buy time to develop a "Revision 2" with updated VRMs, PCB, etc. From what I know, isn't a 3 dB increase theoretically "2x louder"? http://www.youtube.com/watch?v=cpbbuaIA3Ds
 
For reference, world record speaker sound levels are in the 150+ dB range.

Turbofan aircraft engine ~110 dB
typical rock concert ~100 dB
lawn mower engine ~90 dB (loud lawnmower)
R9-290X full load ~55 dB
normal conversation ~50-52 dB
GTX 780Ti full load ~48 dB

EDIT: http://www.sengpielaudio.com/calculator-levelchange.htm

Actually, a 10 dB increase is ~2x the loudness.

Meaning that the 290X is roughly 60% louder... however, we are still talking about a sound level equivalent to the mild hum of a moderately busy restaurant.
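A quick sketch of that loudness math, using the rule of thumb that perceived loudness roughly doubles per +10 dB (the 48/55 dB figures are the ones quoted above):

```python
# Perceived loudness ratio from a dB difference, using the common
# "+10 dB ≈ twice as loud" rule of thumb.
gtx_780ti_db = 48.0   # full load, as quoted above
r9_290x_db = 55.0     # full load, as quoted above

loudness_ratio = 2 ** ((r9_290x_db - gtx_780ti_db) / 10)
print(f"290X perceived loudness vs 780 Ti: {loudness_ratio:.2f}x "
      f"(~{loudness_ratio - 1:.0%} louder)")   # ~1.62x, i.e. about 60% louder
```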

Not that bad...GTX 480 anyone? ...Nobody?
 
Yes, the dB scale is not linear, it's logarithmic.

So 2 or even 3 dB do make a difference.

Anyway, I don't think the GTX 780 Ti is a value winner, so I don't really think it will do well. Especially with only 3 GB on a "premium class" video card. It will suck for big screens and multi-monitor setups.

Cheers!
 
@The Q6660 Inside, you need to check your PMs haha!

@8350rocks, the worst review of the 290X I saw was from Tiny Tom Logan of OC3D. He's a total jerk. Something just didn't add up in his benchmarks; all his scores were worse than the other reviewers' using the same setups.
 
Still not a good value; a 290 plus a monster OC with a waterblock or an Accelero Xtreme for me. Doesn't the 480 give you memories of good times, mates? IMO, the 465/470 and 480 were BO$$ cards; both they and the 5800 series hauled @$$. The whole "dB" debate is overused though. I am sneaking into an nVidiot's house and installing 4000 RPM 120mm Delta fans on all his WC rads as we speak.. SHHHHHHHH.

@griptwiser, he just does not seem to like AMD products that much; it was as if he really wanted users to keep going 780, TBH. inb4290noisesuxguisebuya4GB770instead
 


The smartest thing to do right now is to buy a cheap XSPC kit, a waterblock for the GPU, and a 290.

Unfortunately most people will go Intel CPU + Nvidia GPU + Corsair babby's first water cooling.

Meh, it's whatever. AMD still has a massive marketing hurdle to climb and Nvidia is still bullying them.

First Nvidia sits on FCAT and frame-time analysis for years, then decides to make a big deal out of it right before the 7990 releases, making the 7990 a complete failure.

Then, Nvidia pays off OriginPC. Now OriginPC is getting new GPUs sooner than other companies.

Now, Nvidia makes a big deal about thermal throttling with GPU boost, a technology THEY invented in the first place.

This is just basically what has happened THIS YEAR to AMD's GPU division. I think that AMD is smart enough to realize that even if they released a chip that was faster than 3930k in multi-thread, a little slower than i5 in single, and sold for $200 that it would STILL get smeared and there'd be some sort of massive FUD campaign. And it wouldn't really affect Intel sales at all.

290 and 290x are still phenomenal values for the money and they're getting blasted because the default cooler is too loud?

And just an FYI, people are slapping $40 coolers on their 290Xs, not breaking 65 °C, and ending up with something a lot quieter. AMD's stock blower design is horrible.

These are all the types of problems that would plague AMD even if they released a really good HEDT CPU tomorrow. Intel would do something to make AMD's win useless.
 

The Arctic Twin Turbo and a 290 = win. I would do a loop, but I don't really see it fitting for my already tapped-out 920 @ 3.7. Once the distributors get non-reference 290s in, nVidia will get its PP$ @$$ handed to them. But of course, there will be the nvidiots running Swiftech H220s/H100is with dual reference 780 Tis and a 4930K on a Rampage IV Extreme in a $500 Caselabs...
 


1) Here is an HP computer with an FX chip:

http://www.amazon.co.uk/Pavilion-HPE-h8m-FX-6350-DDR3-1600Mhz/dp/B00DQ1KXKM

Here are gaming PCs from Lenovo:

http://shop.lenovo.com/us/en/desktops/erazer/x-series/

They don't use FX; they use an i7 paired with a GeForce or Radeon card.

And then there are the small OEMs, those without big names. I know a lot of them in my own country. Most of them sell desktops with FX chips pre-installed; others only use Intel.

2) I don't know why 1.8 GHz and 600 MHz are so 'popular' for ES, but repeating the CPU tests at 2.4 GHz is not going to change things significantly.

 