AMD CPU speculation... and expert conjecture

Status
Not open for further replies.

wh3resmycar

Distinguished
really? you guys are arguing about using quad-channel memory to improve the GPU in an APU?

why not get a 7770 instead? idle/load power consumption is strikingly good.

an FX-4300 + 7770 will perform better than any rudimentary AMD APU setup in the next 3 years, even if the APU adds all that fancy memory technology while sporting a similar cost. bet on it.
 

BeastLeeX

Distinguished


Because APUs rely on your system RAM speed for data transfer. While a GPU with GDDR5 gets 144 GB/s or more, even a highly clocked DDR3 system feeding an APU might only get 20-25 GB/s.
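The gap comes straight from peak-bandwidth arithmetic. A rough sketch (the transfer rates and bus widths below are illustrative values chosen to land near the figures quoted above, not specs of any particular part):

```python
def peak_bandwidth_gbs(mt_per_s, bus_width_bits, channels):
    """Theoretical peak memory bandwidth in GB/s:
    transfer rate (MT/s) x bus width (bytes) x channel count."""
    return mt_per_s * 1e6 * (bus_width_bits / 8) * channels / 1e9

# Dual-channel DDR3-1600: 1600 MT/s x 64-bit x 2 channels = 25.6 GB/s
ddr3 = peak_bandwidth_gbs(1600, 64, 2)

# A GDDR5 card at 4500 MT/s on a 256-bit bus = 144 GB/s
gddr5 = peak_bandwidth_gbs(4500, 256, 1)

print(ddr3, gddr5)  # 25.6 144.0
```

So the discrete card has more than 5x the memory bandwidth to work with before the APU's iGPU even starts sharing that bandwidth with the CPU cores.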
 

jdwii

Splendid


This year they are, but it would be funny if it were a bigger improvement than Intel's jump from Ivy Bridge to Haswell.
 


Nowhere has AMD ever suggested that. While APUs, and notably Jaguar-core-based mobile and server parts, are a market AMD wishes to break into, the desktop market remains a big source of revenue and AMD's bread-and-butter market. What AMD has said is that its iGPUs are still in their infancy and targeting exponential growth.



Because the APU represents technology with a better future. Imagine if three years from now you are running a chip with an on-die iGPU capable of mainstream gaming on its own and hybrid CrossFire with mainstream GPUs. That will add a whole new level to the gaming market.



AMD needs heavy work on the IMC, which is the big limitation.

This year they are, but it would be funny if it were a bigger improvement than Intel's jump from Ivy Bridge to Haswell.

Well, you saw the results: IB was merely a sidegrade to SB, and Haswell is anywhere from 7-10% faster.


 
talking about AMD is interesting and often educational,
while Intel is boring.
"oooh, so you have fastest cpu in the world and sell it for $1000."
....
....
....
the discussion ends with a tumbleweed rolling across .... and tiny insects chirping....
 


Very true; it's even worse owning one.
 

mayankleoboy1

Distinguished


Which makes me think that even Haswell is a sidegrade. Because CPU performance per generation is crawling now, I would say upgrade after 3 generations.
As a guess, an i5-750 @ 3.5 GHz will be about 90% the speed of a 3570K.

 


SB moved the marker about 12% over first-generation Core i; we saw the 2600/2700 able to beat the i7 980X and 990X hex-core Extremes. I would say an i5-3470 is about 20% faster than the 750, but the 750 is still very capable. I have found my old QX6850 and Athlon II X4 655 were still capable enough to play modern titles, helped along by better graphics cards.

The way I see it, IB/SB is about 15-20% faster than AMD's FX and first-gen Core i. I still expect AMD to trail in single-threaded performance but to at least close the gap. I believe AMD's top-line parts will achieve Cinebench 11.5 single-threaded scores around 1.3, which is just behind Intel's i3 and lower-end i5s by about 5% or so, and that would be a good showing. Haswell will probably move the marker for Intel, but not by much.
 

Cazalan

Distinguished



Sounds like an early PS4 design. They've known for a while that their DDR3 interface isn't fast enough for high-powered iGPUs. With their HSA initiative, I think they would stay away from multiple integrated memory controllers. Better to just add more channels and prepare for DDR4.
 

wh3resmycar

Distinguished


3 years from now games will be more complex; today's high end is tomorrow's mainstream. APUs, due to their constrained nature, will never reach the "mainstream" performance level of that generation's video cards. Even if they did manage it, I'd suspect an obnoxious asking price, which would ultimately defeat the purpose of having an APU, like the most expensive ones on the market right now.

thing is, an HD 3200/GF 8200 platform was dirt cheap; you could even forget about it once you put a proper discrete card in there. It was also flexible with the amount of CPU upgrades at hand (Athlon to Phenom). The same can't be said about an FM1/FM2 setup.

ultimately, AMD created a market that steals sales away from their own desktop market. Picture a boardroom meeting with PowerPoint slides showing an increase in APU sales while entry-level CPU sales are diminishing.

sockets screwing you over.


 

Cazalan

Distinguished


AMD may have coined the term APU, but they weren't the first to integrate video with a CPU. Intel had been doing that on the mobile platform for some time. What AMD did was add more graphics power than Intel had, to try to boost sales, which it did for a little while.

AMD really didn't have a choice in the matter if they wanted to keep any part of the laptop/netbook space. And the ultrabook space practically forces you to use an APU because there isn't room for add-in video cards; the platform is too thin.

Consider the alternative. If AMD hadn't come out with the APU then Intel would have taken the whole mobile market without competition.
 
For some reason I have gotten no e-mail notification of anything recently... :fou:



Four G34 sockets :D



The 4300 and 6300 are useful ways for AMD to salvage chips that for some reason don't meet spec for the 8300 series and would otherwise have been disposed of. There's no reason they'd just pitch them.



The HT bus on a single-socket system doesn't carry any data from CPU to memory. It only carries information between the chipset and the CPU.

The HT bus can carry data from a *remote* CPU's memory to another CPU in a multiprocessor system but it is never a direct interface between memory and a CPU.

Current HT bus speeds aren't remotely an issue.

The reason AMD has been on this interface so long is that when they first implemented it, they designed it for future expansion. It's been through two full platform upgrades and still does everything it needs to and then some, which is a testament to how well they planned ahead. Even something like PCIe 3.0 wouldn't require another socket, as it's electrically compatible with PCIe 2.0. The only reason for a major change would be a new memory interface that requires a different socket layout, which is why we expect the next socket when DDR4 is released for general use.

HT bandwidth isn't an issue at all on desktop and isn't a significant issue even in multiprocessor servers. Latency is more the issue with multiprocessor setups, which is why half-width (8-bit) links are used extensively to reduce the number of "hops" and thus greatly decrease the latency of accessing remote CPUs' memory in NUMA.
 
MU, that's kind of what I said lol.

People were confusing the memory performance increase from overclocking the "NB" on the CPU with the HT bus link to the motherboard chipset. This is due to two things: first, AMD uses "NB" to refer to the IMC while motherboard manufacturers use it to refer to the chipset; second, the HT reference clock is used as the base clock for everything else. Overclocking the HT reference will result in a memory performance gain, not from the increased HT speed but from the resulting increased NB/IMC clock.
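That relationship is easy to see if you write out how the clocks derive from the reference. A rough sketch (the multipliers below are illustrative, not exact BIOS values for any specific chip):

```python
def derived_clocks(ref_mhz, cpu_mult, nb_mult, ht_mult, mem_mult):
    """Derive platform clocks (MHz) from the HT reference clock,
    the shared base clock on AMD K10/FX platforms."""
    return {
        "cpu": ref_mhz * cpu_mult,  # core clock
        "nb": ref_mhz * nb_mult,    # CPU-NB / IMC clock (what AMD calls "NB")
        "ht": ref_mhz * ht_mult,    # HT link to the chipset
        "mem": ref_mhz * mem_mult,  # memory clock (x2 for DDR effective rate)
    }

stock = derived_clocks(200, 16, 10, 10, 4)   # 3.2 GHz core, 2 GHz NB, DDR3-1600
bumped = derived_clocks(220, 16, 10, 10, 4)  # raise only the reference clock

print(stock["nb"], bumped["nb"])  # 2000 2200 - the IMC clock rises too
```

So a reference-clock overclock drags the NB/IMC (and memory) clocks up with it, which is where the memory gains actually come from; the faster HT link itself contributes essentially nothing on desktop.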
 


http://www.guru3d.com/articles_pages/amd_a10_5800k_review_apu,15.html

These APUs are getting faster and stronger. What I found with my benchmarks is that the absence of L3 cache hasn't had as profound a hit on CPU performance as most believe. The IMC in the APUs, along with their L2, is faster than that on Thuban, Zambezi and Vishera. Adding L3 would help performance by a small margin but at the cost of power and latency, so the faster IMC and cache actually mitigate the performance loss.

Granted, nobody buying this platform will use it with a 7970; that said, we got every game in the suite playable at over 60 FPS, which for its price is some serious potency. The Athlon II X4 740 represents the best value here: a 65 W TDP at $55, turbo boost of up to 500 MHz, and enough performance to Fraps-record an entire BF3 64-man multiplayer round while maintaining an average over 75 FPS (min 35, avg 75, max 89) with an HD 7970 GE is nothing short of bang for buck.
 


AMD's best core is *probably* Piledriver. It has per-core, per-clock performance roughly on par with the Deneb core you have, but there are more cores and they are more highly clocked. Whether you see a jump in buying an FX-8350 vs. keeping your 965BE mainly depends on whether the 965BE is really a limiting factor in what you do. You won't see a huge difference in gaming, but if you do video processing you will get a good bump from the extra threads in the 8350.

The most powerful x86 core out there is Intel's Ivy Bridge, and it is approximately 15-20% faster clock for clock, core for core, than Piledriver. However, YMMV a lot, since performance depends heavily on whether you get a fully functional core with HyperThreading on vs. off, how much the CPU is able to use Turbo Boost, and what exact programs you are running (e.g., how highly optimized they are for Intel CPUs).

Short answer: Bang for the buck is going to be AMD's latest FX series. In general, nothing Intel has below the i7-3930K is going to be significantly faster than AMD's FX series. Intel is quite a bit faster with their 6-core chips but they charge dearly for them.
 