AMD CPU speculation... and expert conjecture

Status: Not open for further replies.



I crunched the numbers already, so read the threads. $200 USD in credit costs $16.60 over the course of twelve months. You're not purchasing a CPU for $16.60.

http://www.bankrate.com/calculators/credit-cards/credit...

Balance: $200
Interest Rate: 15%
Desired Months to pay off: 12
Per Month Payment: $18.05
Total Payment: $216.60

This is how you determine the costs of the system.

A) Do a projected "end build". (This is what you have determined you want within one year's time.) This is known as COA 1.

B) Then do a "starting build". (This is what you have with cash on hand.) This is known as COA 2.

C) Note the difference in cost between them ($600 vs $400, etc.). (This is the amount of credit you'll need.)

D) Write down the costs of the additional hardware you need to upgrade the "starting build" to the "end build" (the new CPU, MB, or GPU).

E) Use the above calculator to work out the cost of credit for starting with the "end build"; add this cost to the cost of the "end build" to get the Total Cost for COA 1.

F) Add the costs of the "upgrade hardware" to the "starting build". This is the Total Cost for COA 2.

G) Now compare costs. COA 1 gets you the desired system immediately; COA 2 gets you the desired system in twelve months' time.

If the only difference between two systems is the CPU, say you went with a $70 Pentium instead of a $200 i5, then you would need $130 in credit to buy the i5 from the start. The cost of $130 in credit over the course of twelve months is $10.76 at 15% APR. The cost of buying that Pentium and then later "upgrading" to that i5 is $270. So the total value comparison is $210.76 vs $270. Not hard to see which one wins there. This also applies to dGPUs: start with a $100 dGPU only to "upgrade" to a $200 one in twelve months, and the same math applies with the same result.
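The whole comparison boils down to the standard amortization formula the Bankrate calculator uses. A quick sketch (function names are my own; the exact cents differ slightly from Bankrate because it rounds each monthly payment):

```python
def loan_payment(principal, apr, months):
    """Fixed monthly payment for `principal` at annual rate `apr`,
    paid off in `months` equal installments (standard amortization)."""
    r = apr / 12  # monthly interest rate
    return principal * r / (1 - (1 + r) ** -months)

def credit_cost(principal, apr=0.15, months=12):
    """Total interest paid, i.e. what financing the difference costs you."""
    return loan_payment(principal, apr, months) * months - principal

# $200 at 15% APR over 12 months: ~$18.05/month, ~$16.6 total interest
# $130 (the Pentium-to-i5 gap): ~$10.8 interest, so COA 1 totals ~$210.8
```

Plug in the credit amount from step C and you get the number to add in step E.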

 

jdwii

Splendid


It's probably over the interest fees, and how banks screw people so badly to make money off your money.
 


If you look at a 'module' from AMD and compare it to a 'core' (with HT) from Intel, they use roughly the same number of transistors, have a similar number of integer ALUs (in total) and similar FPU capabilities (although in the latest generation Intel have seriously beefed up their FPU units; if we're talking PD then Sandy is a fairer comparison).

I agree it isn't as simple as that; however, the modular approach was meant to increase threads per unit area of die space, similar to Intel using HT.

What I have never understood, however, is who in their right mind decided it was a good idea to call the FX 8XXX parts '8 core' processors. When you look at the actual resources on offer, they have the same integer capability and close to comparable FPU capability to an Intel quad core. Hell, had Bulldozer been released as a quad it would have got a lot less stick (think about it: calling it an 8 core made it look terrible compared to Phenom II, when actually it was a smaller chip! If it was called 'quad core' then it would have almost kept up with the hex core -> progress). Imo AMD had the chip, they knew the benchmarks, yet *someone* in marketing couldn't resist slapping the 8 core moniker on the chip despite it not reflecting reality.
 

szatkus

Honorable
Jul 9, 2013


He said that the 4790K is like 8 modules, which is pure madness. That's 2 modules vs 1 core with HT.

Also, Vishera has slightly more transistors than, for example, Sandy Bridge. And SB includes an iGPU.
 


Yeah, *slightly more*, mainly due to the large (and sadly rather slow) caches on Bulldozer. When you look at the cores alone (without the external level 2 / level 3 caches), 1 module looks very similar in size to 1 HT Intel core (although, as we know, it sadly doesn't perform like one).

I think regarding the comparison to the 4790K the guy was assuming it was one of the larger chips with more cores rather than an uprated version of the 4770K. An '8 core' FX should be comparable to a quad core i7, I'm not disputing that :)

Do you not agree, though, that it was a big mistake on AMD's part to call what is essentially a quad core design '8 core'? All it does is set the chip up for some very unfair comparisons (e.g. APU vs i5; 2 modules aren't designed to compete with a monolithic quad core).
 

szatkus

Honorable
Jul 9, 2013


You're right, L3 is similar in both chips (but arranged differently), but L2 is much larger in FX-8350.

http://images.anandtech.com/reviews/cpu/intel/sandybridge/review/die.jpg
http://techreport.com/r.x/amd-fx-8350/die-shot.jpg
 

Builds-and-parts arguments should happen at least once a year: the current market situation is different from last year's, and next year's will be different from this year's.
Never hurts to learn how to spend your money smartly. :)
 

8350rocks

Distinguished
@colinp never mind, I was thinking 4970k, not 4790k; meh, that's what I get for working and posting. (They have launched the 8 core, right? Or was that next cycle?)

@jdwii No, but the resource equivalency for AMD would be roughly 1 Intel core with HTT to 1 module if you look at floating point and cache. Integer performance, and code with fewer branches, will come out closer to 1.6 cores to 1 module. The 8350 is priced to compete with i5 and i7 products for a reason. You will not see them comparing it to the 4970k. Which is what I thought colin was talking about....
 

juanrga

Distinguished
BANNED
Mar 19, 2013



AVX2 doubled the floating-point throughput over AVX, and AVX-512 will double it again.

Sandy/Ivy Bridge: 16 FLOPs/core
Haswell/Broadwell: 32 FLOPs/core
Skylake/Cannonlake: 64 FLOPs/core

Of course this is the maximum possible throughput; the rest of the arch is not doubled, and the real performance gain will be less than 2x. Haswell's AVX2 brought 15% performance gains over AVX in some consumer workloads, such as x264 encoding, but brought 70% gains over AVX in some professional workloads:

http://www.pugetsystems.com/blog/2013/08/26/Haswell-Floating-Point-Performance-493/

The average of 15% and 70% is 42.5%. This is why I am expecting Skylake to be about 40-50% faster than Haswell/Broadwell, as mentioned above.
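Those per-core peak figures follow from vector width and the number of floating-point issue ports. A rough model (my own sketch, single precision; it counts Sandy Bridge's separate add and mul ports as one fused pair, which gives the same peak):

```python
def peak_sp_flops_per_cycle(vector_bits, fp_ports=2):
    """Peak single-precision FLOPs per core per cycle:
    lanes (vector_bits / 32) x 2 ops per FMA x number of FP ports."""
    lanes = vector_bits // 32
    return lanes * 2 * fp_ports

# Sandy/Ivy Bridge: 256-bit AVX, one add + one mul port (~one fused pair) -> 16
# Haswell/Broadwell: 256-bit AVX2 with two FMA ports -> 32
# Skylake with 512-bit vectors, two FMA ports -> 64
```

Each doubling in the table comes from either doubling the vector width or adding FMA, which fuses a multiply and an add into one instruction.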

About graphics: Broadwell's Iris Pro is expected to bring 40% better graphics than Haswell's Iris Pro. Skylake bringing 60% over Broadwell looks possible when one checks the trend for Intel graphics.



Haswell introduced one more ALU/branch execution unit. I have no idea what to expect for Skylake.




Yes! Typo corrected. Thanks
 

juanrga

Distinguished
BANNED
Mar 19, 2013


That ~50% over Steamroller is for ordinary x86 software. When software uses the new Haswell instructions (the AVX2 extensions to x86), Haswell can bring up to 70% more IPC than Ivy Bridge, and the gap with Steamroller is much, much bigger. And as mentioned just above, Skylake's AVX3.2 extensions will bring a similar performance gain over Haswell/Broadwell.
 

juanrga

Distinguished
BANNED
Mar 19, 2013


To be accurate, we are discussing what you claim someone told you about what Keller told them... Also, your claim is morphing at a fast rate: first you told us that AMD "will be better by skylake". Next you added that "if they fall short of predictions they will be 'on par'". Now you are already opening the door to 5% behind Skylake.

To match the Skylake core (±10%), AMD would have to multiply per-core performance by up to a factor of eight (8):

Steamroller: 8 FLOPs/core
Skylake: 64 FLOPs/core

and would have to overcome a process disadvantage as well: 14/16nm (GloFo/TSMC) vs. Intel's 10nm.

Moreover, AMD is bringing MANTLE to reduce CPU overhead. If AMD releases a CPU that is faster than anything from Intel and then brings MANTLE in final form to the arena, AMD will be competing against themselves, because MANTLE would help Intel remain competitive with an inferior CPU.

In short, this idea of AMD's 2016 x86 core outperforming Skylake on performance doesn't make sense: neither technologically, nor economically, nor strategically.
 

etayorius

Honorable
Jan 17, 2013



I agree: there is no chance AMD can even match Skylake, much less outperform it, with their next arch... BUT AMD does not have to beat Intel at the IPC game; it only needs to be competitive and offer better prices, and AMD would be back in the game.

Skylake will probably be a good 15-25% faster than whatever AMD throws out with Keller, but hey, being 25% behind is much better than being 60% behind, as they are now.
 

juanrga

Distinguished
BANNED
Mar 19, 2013


Yes, APUs did make up 75% of past AMD sales, but my comment was not about sales but about revenue. It is the consoles that saved AMD's finances:

http://techreport.com/news/25527/thanks-to-consoles-amd-posts-first-profit-in-over-a-year

In fact, the console revenue was counted separately, as part of the semi-custom division. The CPU/APU division continued losing money, if you look closely at AMD's financial reports.

Precisely during the very recent reorganization of AMD, the unprofitable CPU/APU division was merged with the profitable dGPU division to balance the sheets:

http://www.pcper.com/news/Editorial/AMD-Restructures-Lisa-Su-Now-COO
 

8350rocks

Distinguished


I said: Jim Keller expects them to be ahead by Skylake. My source says he (conservatively) expects they will be about even by Skylake, though he expected there would be within a 5% difference in some workloads just because of differences in designs, and that would be fine...

You take that and dissect it into little parts as though it was all separate pieces out of context.

Stop. doing. that.

Address the entire thing, or do not address it at all. Ok?
 

8350rocks

Distinguished


So much hyperbole...

Show me where AMD is 60% behind in FPS in any game, 8350 vs. 4670k or 4770k.

Please, I am waiting for examples.
 

szatkus

Honorable
Jul 9, 2013


Whatever. They have them, they know how they work, and when they spot something better than their own parts they can use that knowledge to improve their own blocks. Ideas aren't patented.
 

GOM3RPLY3R

Honorable
Mar 16, 2013


I'm not going to say anything for either side; I'm just going to throw some links up in the air (maybe that's what I can actually do for this) :p.

http://cpuboss.com/cpus/Intel-Core-i7-4770K-vs-AMD-FX-9590

http://cpuboss.com/cpus/Intel-Core-i5-4670K-vs-AMD-FX-9590

http://cpuboss.com/cpus/Intel-Core-i5-4670K-vs-AMD-FX-8350

http://cpuboss.com/cpus/Intel-Core-i7-4770K-vs-AMD-FX-8350

http://www.anandtech.com/bench/product/697?vs=836

http://www.anandtech.com/bench/product/697?vs=837
 

szatkus

Honorable
Jul 9, 2013


Couldn't ask for anything easier :D

(it's not the 4670k, so you can add a few FPS)
http://techreport.com/r.x/amd-fx-8350/skyrim-fps.gif
http://techreport.com/r.x/amd-fx-8350/civv-lgv.gif

There are a lot of charts like this at Google Images.
 

GOM3RPLY3R

Honorable
Mar 16, 2013


Not only would the 4670k and 4770k (probably) do much better (because they're "new" and "better"), this shows that the 3770k and 3570k did as well :lol:
 

blackkstar

Honorable
Sep 30, 2012
Oh god some of you...

AVX is useless in Windows. Hardly any programs use it, and most target SSE2 or lower.

Secondly, AVX doesn't increase instructions per clock. It decreases the number of instructions needed to do a certain task, which is why CISC has big advantages over RISC.

It's like comparing two ways of doing a long math problem. One takes a lot of shortcuts and simplifies everything; the other does every menial operation. Then you finish and look at the person who took all the shortcuts and go, "wow, you're very fast at doing menial math!"
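The instruction-count point can be made concrete with a toy model (entirely my own illustration; real AVX adds happen in hardware, this just counts "instructions" issued):

```python
def scalar_add(a, b):
    """One add instruction per element, like narrow/scalar x86 code."""
    out, ops = [], 0
    for x, y in zip(a, b):
        out.append(x + y)
        ops += 1
    return out, ops

def vector_add(a, b, lanes=8):
    """One 256-bit AVX add covers 8 single-precision lanes at once."""
    out, ops = [], 0
    for i in range(0, len(a), lanes):
        out += [x + y for x, y in zip(a[i:i + lanes], b[i:i + lanes])]
        ops += 1
    return out, ops
```

Same result, same work per clock, but the vector version retires an eighth of the instructions, which is the "fewer instructions, not more IPC" distinction being argued here.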

I.e., you don't sound like you have a clue what you're talking about.

But if you want to compare CISC like x86 to RISC like ARM, the math example is a good one: CISC is about taking shortcuts to get things done faster, and RISC is about doing a lot of simple operations quickly.
 

etayorius

Honorable
Jan 17, 2013
331
1
10,780
http://www.bit-tech.net/hardware/2012/11/06/amd-fx-8350-review/6

This is just one example... even the ancient 2600k at stock blows the dust off the 8350.

I plan on supporting AMD in the future; right now Piledriver and Bulldozer just don't deliver the performance I need for gaming.

AMD should have dumped Bulldozer/Piledriver right after the FX-8350, but I can understand their position... they have such a limited budget, and if you don't believe it, just watch the video where Keller, Rory, and Lisa are all together talking about AMD: Keller jokingly asks if he can have more budget, and they reply, sorry, no.

It's closing in on 3 years since the Bulldozer arch and AMD is still pushing Piledriver... and as far as I am aware, Piledriver will extend into 2015. I have no idea what their plans are for new desktop CPUs, but I am starting to doubt we will see anything other than "Piledrivers" all through 2015.

2016 is the big day, but looking at past AMD CPU launches, the new arch will probably hit somewhere around Q4. Sadly/gladly (depends on your view), the FX-9590 will be top dog from AMD until the new stuff.

Oh, and don't get your hopes too high for 2016: if AMD manages to match the IPC of the CPUs available NOW, it will be a huge success! But Intel will surely improve their CPUs with Skylake... the best I expect AMD to do is get close to Skylake's performance; that's the best-case scenario... let's be realistic, but I'm in if AMD manages this.
 