Haswell or Piledriver

Page 4 of a Tom's Hardware community thread.
Solution
If you are planning to upgrade later on, Piledriver is on an architecture path AMD is committed to until 2015 (the AM3+ socket), and that socket will get Steamroller and eventually Excavator. If you go with something by Intel, Haswell will be their new architecture, so they will commit to that for at least two years.

If money is at all a concern, go with AMD: for the money you would spend on an Intel system, a comparably built AMD system will have more processor and more GPU...

Also, the PS4 and XBOX 720 are both running AMD hardware, so future games will be optimized for AMD architecture anyway.


I never said the mobile solution would be identical to the desktop solution; I merely said this technology would trickle down into the mobile market and further strengthen their presence there. Quote where I said they would be identical. You sure put a lot of words in my mouth.

As for the die shrink, that information must have changed recently, because as of the last time I read anything about it, the 28nm process was coming with Richland. If it's actually 32nm now, then they delayed the 28nm process to Kaveri. I can find information as recent as January 2013 stating that the coming Richland architecture would be a die shrink.
 


There was an article linked in the thread...maybe you didn't see it.




They give an overview of the coming features, though they do not speculate about performance.



Well, based on the shader counts provided by AMD (384 for Richland vs. 512 for an HD 7750), that's exactly 75% of the shaders of an HD 7750. Combined with the Piledriver architecture, increases in internal data efficiency, and pairing with GCN, they expect 85-90% of its performance.



I took the shader figures from AMD's specs for the two independently. You can find them easily by googling the numbers.



Actually, aside from the fact that the die shrink has been delayed, nothing I have said has been refuted.


I provided the shader information that you claim I cannot produce. Go to AMD's website and look at the specs for the HD 7750, then google the specs on the A10-6800K; you should find them easily, as the shader count is listed.

I read the entire hypothesis behind the expected performance while reading on the subject. I will have to find the link to the article, but I will post it once I encounter it again; it was a good read with some insight into Kaveri, which will likely be the first Steamroller product to come to market, even ahead of the CPU itself.

I am not deluded. The fact of the matter is that the graphics on the Richland APU are not HD 7660; they are integrated HD 8670, and that is supposed to come within 10-15% of the performance of an HD 7750, which for an onboard iGPU is a LOT of performance for not a lot of money. The projected price point for the desktop APU is ~$135-140 for the A10-6800K, which is FAR cheaper than an FX-4300 and HD 7750 combined and would be a good solution for a non-hardcore budget gamer.

I am not misleading anyone.
 
You're not taking into account that the GPU cores will run at a slower clock speed and will be using much, much slower system DDR3 memory instead of onboard GDDR5 like a 7750. When it's all said and done, the GPU on their flagship APU will be about half the power of a 7750.

It simply is not possible to increase processing power 2-3x without a large die shrink. A 7750 has more transistors than even an FX-8350. You cannot pack that GPU power into an APU without including the transistors.

You can dream all you want, but it's not possible. The point is, stop passing your speculation off as fact to users asking for help. That is not okay.
 


Actually, the GPU cores run at a comparable clock speed, ~800-900 MHz; the HD 7750 doesn't run at 1 GHz unless you go with a GHz Edition of some kind...but that's not what I am talking about. You have to go to the HD 7770 to get consistent 1 GHz GPU core clocks...and again, we are not discussing that card either.

Yes, GDDR5 RAM bandwidth is much better, and that bottleneck is really the one reason the performance gains from the processor, GCN, and increased internal data efficiency will not overcome the HD 7750. The supported bus speeds have been increased to support higher-bandwidth DDR3 RAM, though, so an APU will definitely benefit greatly from running something like 2133 MHz or 2400 MHz DDR3 RAM. The performance estimates assume you utilize the maximum memory bandwidth supported, by the way; if you use only 1333 or 1600 MHz DDR3 RAM, that performance will understandably be drastically lower, as you cannot cut the RAM bandwidth in half and expect the same performance from an SoC such as this one.
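For anyone who wants to sanity-check the bandwidth argument, here is a rough sketch (my own back-of-the-envelope arithmetic, not a figure from AMD or the thread): theoretical peak bandwidth scales linearly with the DDR3 transfer rate, which is why halving the RAM speed hits an APU so hard.

```python
# Back-of-the-envelope check: theoretical peak bandwidth of DDR3 is the
# transfer rate (MT/s) x 8 bytes per 64-bit transfer x number of channels.
# These are theoretical peaks; real throughput is well below them.

def ddr3_peak_gbs(mt_per_s, channels=2):
    """Theoretical peak DDR3 bandwidth in GB/s (dual channel by default)."""
    return mt_per_s * 8 * channels / 1000.0

for speed in (1333, 1600, 1866, 2133, 2400):
    print(f"DDR3-{speed} dual channel: {ddr3_peak_gbs(speed):.1f} GB/s")
```

Even the best of these is well under the HD 7750's 128-bit GDDR5, whose theoretical peak is around 72 GB/s, which is the bottleneck being described.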
 








Wow, this is bad. Drop the Skyrim comparison, because that's using the IGP, which is affected by bandwidth; when you move over to a discrete GPU, the bandwidth becomes the GPU's problem, and the CPU doesn't need anywhere near as much bandwidth (which is good, because AMD's memory controller is weak compared to Intel's).

AMD is dying in the mobile space. They are even weaker there than in the desktop space and only feasible at budgets of <$700. The CPUs are weak (the A10-4600M is comparable in multithreaded tasks to a ULV Ivy Bridge i5). The only thing AMD has going for them is their IGP, and few people play games on a laptop (relative to the number of people using a laptop). An i5 + 640M destroys any AMD offering at that price (Hybrid CrossFire is buggier than regular CrossFire, which is itself buggy).

And by the way, you can buy a 1080p Lenovo Y500 with an i5 and dual 650Ms for $900, which will eat any APU-based system for breakfast. By the time you go over $700, AMD starts becoming irrelevant.

AMD lost almost all their design wins with the GCN architecture in laptops (their GPU market share is about half of what it was two years ago). Very few laptops come with a GCN GPU (the 7730M and 7970M are really the only common ones); the 7750M, 7770M, 7850M, 7870M, and 7950M are virtually nonexistent (compared to the popularity of the 67xxM series). They still have not fixed their Enduro issues. If Haswell gets a 30% boost with its IGP (GT2, let alone GT3), then AMD is going to be done. Llano vs. Sandy Bridge was very much in Llano's favor, but decidedly less so for HD 4000 vs. Trinity. AMD improved Trinity 19% on average over Llano (look at the AnandTech review) and promised "gains of up to 56%", which were never seen, even in synthetics (3DMark increased by 35%, higher than any game).
Mobile Trinity is virtually a 3DMark chip: it beats a 630M in 3DMark (GPU section) and then gets solidly beaten by about 30% in every game (excluding Civ 5). So if AMD claims an increase of 10-20% for Richland (which is the same Trinity chip with a reworked BIOS and power optimizations; it will now be able to turbo better, and the mobile versions won't throttle as badly), expect the real-world gaming gains to come in below that.

AMD labs

http://www.amd.com/us/press-releases/Pages/amd_unveils_new_apus.aspx

Testing and projections developed by AMD Performance Labs. The score for the 2012 AMD A10-4600M on 3DMark 11 was 1150 and the 2012 AMD A8-4555M was 780 while the “Richland” 2013 AMD A10-5750M was 1400 and the AMD A8-5545M was 1100. PC configuration based on the “Pumori” reference design with the 2012 AMD A10-4600M with Radeon™ HD 7660G graphics, the 2012 AMD A8-4555M with AMD Radeon™ HD 7600G graphics, the 2013 AMD A10-5750M with AMD Radeon™ HD 8650G graphics and the 2013 AMD A8-5545M with AMD Radeon™ 8510G Graphics. All configurations use 4G DDR3-1600 (Dual Channel) Memory and Windows 7 Home Premium 64-bit. RIN-1

I quite hope the performance increase in actual games will be similar this time, not the 35% in 3DMark vs. 19% in games that Trinity delivered.
 


I do not think that 8350rocks' claim was overly speculative (and his claim was surely not a lie).

Using data provided by Dr. Lisa Su stating that Richland graphics are 40% faster than Trinity, I found that Richland performance will be between an HD 6670 and an HD 7750. On the other hand, I provided a link to a news site reporting that Richland graphics will be more powerful than an HD 7750. I will link again and quote:

On the GPU front, Richland would consist of the GCN based 8000 series IGP featuring the same core count as Trinity of 384 but a new architecture which would offer superior performance than a HD 7750.

http://wccftech.com/amds-kaveri-based-28nm-richland-apu-features-steamroller-cores-compatibility-fm2-socket/

Therefore, 8350rocks' claim that Richland graphics will be near an HD 7750 cannot be a lie...

In any case, this debate is irrelevant until the final chip arrives and we can measure its real performance.
 


I have reviewed that link to overclock.net, and there is no link on that page to the article you are talking about. Perhaps you are simply making it up. Don't be lazy; provide the link.




Exactly, there is nothing that points to the Radeon HD 8670D being nearly as powerful as the Radeon HD 7750.



You should provide a link to the article which claims the 85-90% performance with regard to GCN.

There is more to a GPU than the number of shaders. If you are going to make any estimates about hardware performance, you should follow my example and provide direct links to the data you use to back up your claims, rather than links to a site that supposedly has links to the information you are attempting to reference. Below are direct links to the specs for the Radeon HD 7750 and Radeon HD 8670D:

Radeon HD 7750:
http://www.techpowerup.com/gpudb/309/.html

Radeon HD 8670:
http://www.techpowerup.com/gpudb/2003/AMD_Radeon_HD_8670D_IGP.html

The specs are as follows:

Shading Units
HD 7750 = 512
HD 8670D = 384 (75% of HD 7750)

TMUs
HD 7750 = 32
HD 8670D = 24 (75% of HD 7750)

ROPs
HD 7750 = 16
HD 8670D = 8 (50% of HD 7750)

Compute Units
HD 7750 = 8
HD 8670D = 6 (75% of HD 7750)

Pixel Rate
HD 7750 = 12.8 GPixels
HD 8670D = 6.75 GPixels (52.73% of HD 7750)

Texture Rate
HD 7750 = 25.6 GTexels
HD 8670D = 20.6 GTexels (80.47% of HD 7750)

Floating Point
HD 7750 = 819.2 GFLOPs
HD 8670D = 648.19 GFLOPs (79.12% of HD 7750)

Core Clockspeed
HD 7750 = 800MHz
HD 8670D = 844MHz (105.5% of HD 7750)

Note that only the Radeon HD 8670D's clock speed is marginally better than the Radeon HD 7750's. Everything else is significantly lower than the Radeon HD 7750's. That is likely why the author of the following article (which you provided the link to) stated the Radeon HD 8670D is likely to provide a 20%-40% increase in performance over the Radeon HD 6670. Even a 40% increase means the HD 8670D is still significantly slower than a Radeon HD 7750. And I'll state again that I have no issues with his assessment of the Radeon HD 8670D's probable performance.

http://wccftech.com/amd-launching-28nm-kaveri-apu-steamroller-cores-2013/
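For completeness, here is a short sketch that just redoes the ratio arithmetic from the spec list above (all numbers are taken from the lines above; nothing new is assumed):

```python
# Recomputing the HD 8670D / HD 7750 ratios quoted in the spec list.
# Each entry is (HD 7750 value, HD 8670D value).
specs = {
    "Shading units":           (512, 384),
    "TMUs":                    (32, 24),
    "ROPs":                    (16, 8),
    "Compute units":           (8, 6),
    "Pixel rate (GPixel/s)":   (12.8, 6.75),
    "Texture rate (GTexel/s)": (25.6, 20.6),
    "Floating point (GFLOPS)": (819.2, 648.19),
    "Core clock (MHz)":        (800, 844),
}

for name, (hd7750, hd8670d) in specs.items():
    print(f"{name}: HD 8670D is {hd8670d / hd7750:.2%} of HD 7750")
```

The only ratio above 100% is the core clock, which matches the observation that everything else is significantly lower.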




See my reply above. You cannot simply rely on the number of shaders; there's more to a graphics core than just shaders.



I say otherwise. My posts (which I have backed up with actual links to data to support my opinion) basically demonstrate that your claim that the Radeon HD 8670D is nearly equal to the Radeon HD 7750 is baseless, and likely intentionally misleading as well.

I have no problem if you believe, as your own opinion, that the Radeon HD 8670D is nearly equal to the Radeon HD 7750. But you simply keep preaching it like it is a fact, without any real support to justify your claim. That is something I do not tolerate at all. You are being very deceptive and misleading.




See my replies above. There is no need for me to simply restate the bulk of what I have already stated.

 


The author of that article posted a more recent article (3 months old vs. 9 months old), where he states:

While Richland would deliver 20-40% improvement over Trinity A-Series APU, Kaveri APU would pump upto even more and almost twice the gain than Richland itself.

I have posted performance charts (and the link) above for the Radeon HD 7660D from AnandTech. As can clearly be seen, the Radeon HD 7660D is only slightly more powerful than the Radeon HD 5570. Now, if a 45% increase (+5% since the HD 5570 is weaker than the HD 7660D) above the Radeon HD 5570 is nearly equal to a Radeon HD 7750, then either the Radeon HD 5570 is more powerful than all the benchmarks have shown, or the Radeon HD 7750 is much weaker than all the benchmarks have shown.

Either way you look at it:

Radeon HD 7660D + 40% <> Radeon HD 7750 (<> means "does not equal" in mathematical terms in case you are not aware).
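A quick numerical sanity check of that inequality. The index values below are illustrative placeholders, not benchmark data; the only assumption carried over from the thread is the earlier "about half the power of a 7750" estimate for the HD 7660D:

```python
# Illustrative performance indices (made up for the example, not benchmarks).
hd7660d = 100.0            # baseline index for the HD 7660D
hd7750 = 2.0 * hd7660d     # "about half the power of a 7750" => ~2x baseline
richland = hd7660d * 1.40  # HD 7660D + 40%

# Even the optimistic +40% uplift falls well short of the 7750.
print(richland < hd7750)   # True (140 < 200)
```

Under those assumptions, the HD 8670D would need roughly a 100% uplift, not 40%, to match an HD 7750.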


-- EDIT --

Damn, forgot to post the article link.....

http://wccftech.com/amd-launching-28nm-kaveri-apu-steamroller-cores-2013/
 


You don't need to convince me. I already did the estimation above; look at where I wrote "I found that Richland performance will be between a HD 6670 and a HD 7750". Or, if you prefer it in more mathematical terms:

HD 6670 < Richland < HD 7750.

But again, this is based on the supposition that Richland will be 40% faster than Trinity. I don't know if that 40% includes the effects from the new turbo architecture (which is thermal-based instead of TDP-based) and/or driver optimizations.
 


This assessment is probably pretty close, that's about what my thinking was...

My personal assessment is that it will likely end up being about 80% of the performance of an HD 7750, but I have found others expecting the iGPU to be even closer to the HD 7750:

https://www.eteknix.com/richland-apu-to-get-28nm-graphics-upgrade/

http://www.overclock.net/t/1347709/amd-richland-a10-6800k-apu-thread/20

The only benchmark I can find is listed here*, and it's for a mobile version of Richland, so the desktop results will obviously be different:

http://www.notebookcheck.net/AMD-Radeon-HD-8650G.87916.0.html

*Note that's not equivalent to a top-of-the-line desktop product (though it is in notebooks), and the onboard graphics noted are HD 8650G, not HD 8670D...though they rate the performance better than a GTX 630M in the same comparative segment. There's only one benchmark listed there so far.

They also rate it slightly ahead of the HD 7670M, which means it would fall about where we were talking about: ahead of the HD 6670 but just shy of the HD 7750 in a desktop, if the performance remains comparable.

 
I expect that desktop Richland will be a minor gain for graphics. The same basic chip (on the same node) with BIOS and power optimizations does not matter too much on the desktop, because the chips are already reaching their highest turbo. I think Richland increases the max turbo frequency of the GPU core by about 44 MHz on 800 MHz, so expect a ~6% improvement vs. a non-throttling A10-5800K. Obviously the biggest increase will be in mobile, because mobile Trinity throttles pretty badly (the A10-4600M throttles to 2.2 GHz running Prime95 alone, with no stress on the GPU). This can be seen in how AMD's claims of higher performance gains are for the lower-powered chips, while the lower claims are for their top-end chips.
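The ~6% figure above is just the clock ratio; a one-line check of my own arithmetic (the 44 MHz and 800 MHz values are from the paragraph above):

```python
# A ~44 MHz turbo bump on an ~800 MHz GPU clock.
uplift = 44 / 800
print(f"{uplift:.1%}")  # 5.5%, i.e. roughly the quoted ~6%
```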

Again, note that Trinity saw a huge increase of 35% in 3DMark 11 vs. Llano (for mobile), yet only averaged 19% in games.

The notebookcheck result is from the AMD labs release in my previous post. It's a manufacturer claim, so again, take it with a grain of salt.

Either way, Richland is not going to get close to the 7750, because it doesn't have the memory bandwidth. I don't think 2133 MHz RAM is the answer; it costs about $75 for 8 GB, which is more than an $89.99 7750 with a $20 rebate (= $70). AMD's better bet would be to improve their memory controller rather than requiring the consumer to buy better RAM to get more out of their APU.

[Images: SiSoft Sandra memory bandwidth benchmark results]


This is why AMD needs higher RAM speeds.
 


Well, I did not give a specific performance value because there are so many uncertainties, but using the data given at CES, my own estimate is that Richland will be about 74% of the performance of an HD 7750, which is compatible with your 80%.
 


I do not know what you mean by "close", but using the latest data available at CES gives up to 70-80% of the performance of the 7750.



Upgrading existing RAM to faster RAM may not be worth it, but buying faster RAM in a new build is the way to go.



No.

First. Tom's reported they had problems with the motherboard/memory used in that test and could not boot with some RAM modules.

Second. They report a maximum of 14 GB/s there, but others report more than 15 GB/s:

[Image: A10-5800K memory bandwidth with DDR3-2133]


Third. I would not trust the Sandra test very much. Going from 800 to 1866 gives a 65% increase in theoretical memory bandwidth in Sandra. However, the situation with real games is very different; even Tom's found up to a 95% increase in performance with faster RAM:

[Image: World of Warcraft average FPS vs. memory speed]

[Image: World of Warcraft minimum FPS vs. memory speed]


Please explain to us how the memory controller manages to double game performance if it is so 'bad'. What kind of improvement would you expect from an improved controller? Triple gaming performance? 🙂
 


No, they are not GCN. Look at articles from the last month.

http://techreport.com/news/24277/leaked-richland-specs-reveal-higher-clock-speeds

Don't get too excited about the 8000-series model numbers for the integrated Radeons. Richland's GPU is based on the same Radeon HD 6000-series foundation as Trinity's integrated graphics. GPU clock speeds are a little higher in Richland, but the deltas are less than 100MHz.

If the published specifications are accurate, desktop-bound versions of Richland have received a nice boost in CPU clock speeds. The base clocks are 300MHz higher than in equivalent Trinity processors, and Turbo speeds are up 200-400MHz. Those increases don't appear to have affected Richland's TDP, which is still either 65W or 100W, depending on the model. The unlocked K-series chips are the only ones with 100W TDPs.

Trinity isn't the most compelling desktop product, but Richland should at least be an improvement. The new APU only needs to tide us over until later this year, when Kaveri is expected to arrive with updated integrated graphics based on the current GCN architecture. Kaveri will be fabbed on a 28-nm process, a shrink from the 32-nm node used to manufacture Trinity and Richland APUs.

http://www.extremetech.com/computing/150451-amds-new-richland-apu-boosts-clocks-and-adds-features-but-its-a-just-modest-refresh

Pay no attention to the “HD 8000″ moniker attached to these parts. Richland’s GPU is based on the same Cayman-derived chip that AMD launched in 2011 as the HD 6970 and in 2012 in Trinity-based APUs. AMD, however, isn’t just claiming that Richland is a bit faster than Trinity — it promises that the new platform draws less overall power as well.

http://www.theinquirer.net/inquirer/news/2253866/amd-announces-richland-laptop-apus

AMD brands the GPU core as a Radeon HD 8000 series design but the firm confirmed to The INQUIRER that the Richland GPU core is not built on its GCN architecture, so following its own marketing it should really be branded within the Radeon HD 6000 series.

http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/60147-detailing-richland-s-dual-graphics-gcn-compatibility.html

While Trinity’s use of the Northern Islands VLIW4 architecture represented a dramatic step forward from Llano’s Redwood-based “Sumo” design, some were expecting this year’s APUs to feature AMD’s latest GCN cores. That hasn’t happened yet since Richland uses the same layout as its predecessor, though with faster clock speeds and higher performance numbers.
 

One of the sites you listed:

http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/60147-detailing-richland-s-dual-graphics-gcn-compatibility.html

This article says that Richland will not carry GCN cores onboard, but it will be able to Xfire with GCN cards to effectively double the performance of the onboard GPU. Trinity was incapable of that, hence the rebranding of the graphics as HD 8670D.

The article also states that "the A10 comes with 384 cores and when combined with the higher clock speeds granted by the Richland refresh, allows the IGP’s performance to roughly equal that of GCN-based discrete chips." The first GCN card out? HD 7790...hmm...isn't that interesting.
 


With a discrete GPU, there is next to no benefit (outside of a 1-3% margin of error) to RAM over 1600 CAS 9.

I don't know why you are quibbling over 14 GB/s vs. 15 GB/s. There is basically no difference, and both are substantially less than Ivy Bridge's 20+ GB/s. I see in your pictures that some (perhaps all; I'm not sure) are using 2133 MHz RAM (and the CPU is also overclocked). The benchmarks I posted show both CPUs using 1600 MHz RAM and indicate how much theoretical throughput Trinity loses vs. Ivy Bridge. Obviously, as you put in faster RAM, throughput goes up.

You do realize that the CPU is also using the RAM? That's why the game crawls to a halt at low bandwidths: there is just not enough bandwidth for both the CPU and the GPU. Adding more bandwidth massively helps the GPU, while the CPU doesn't really need more (this can be seen in games where going from a discrete 7750 to SLI Titans doesn't require much more bandwidth to the CPU).

I see a difference of 65.84/36.60 = 80% increase in average frame rate. Minimum frame rate will tank hard if bandwidth is limited.
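That frame-rate figure is straightforward to recompute (the two averages are read off the WoW chart referenced above):

```python
# Average FPS with fast vs. slow RAM, as read from the WoW chart.
avg_fast, avg_slow = 65.84, 36.60
gain = avg_fast / avg_slow - 1.0
print(f"{gain:.0%} increase in average frame rate")  # ~80%
```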

You are looking at it wrong. The memory controller is not managing to double performance; rather, it is holding performance back. If AMD had Intel's memory controller, they would likely not need 2133 MHz RAM to get the full potential out of their IGP.
 


??? The 7790 is GCN + power optimizations (better to call it GCN 1.1), and the 7750 is also GCN and was launched more than a year ago.

We are looking at drivers

http://forums.anandtech.com/showthread.php?t=2310600

You realize that Hybrid CrossFire with Trinity is VLIW4 + VLIW5? CrossFire between different architectures is difficult but not impossible. It's hard to CrossFire between GCN and VLIW4, especially when the weakest GCN chip (the 7730M) was still significantly stronger than the 7660G (mobile). Now they are releasing GCN chips with fewer cores (384 vs. 512), at lower speeds, and with a smaller memory bus (64-bit). So it's not just a matter of Richland getting better, but of GCN going into smaller, weaker chips. (And I bet Hybrid CrossFire is significantly easier when the chips have the same number of shader units and ROPs.)

That has nothing to do with the fact that Richland is not GCN, so stop nitpicking.

And everyone rebrands cards. AMD has to, from a marketing/sales perspective.
 


No, it was not possible through AMD-supported channels to Xfire those; the program to Xfire the cards would not allow you to do it. Maybe other tools out there would, but it was not supported by AMD. It is now supported by AMD. You could Xfire a 6670, but that wasn't really all that useful...

Further, AMD is not releasing lower-shader-count cards; the 7790 still has 896 shaders and is positioned well between the 7770 and the 7850. It has fewer shaders than the 7850, a similar clock speed, and a narrower memory bus (128-bit vs. 256-bit).

Further, the article above was also talking about 8xxx-series cards coming soon, and the possibility of Xfire with those was the other thing AMD was pointedly talking about.
 
Integrated graphics are not a substitute for a discrete graphics card. Gamers are much, much better off installing a GPU than using onboard graphics, regardless of how close to a 7750 Richland APUs will be (posters in this thread are being VERY optimistic). APUs are only going to come into play for gamers in the sub-$800-ish laptop market. Big whoop. People in that market are likely buying laptops as mobile backups to their existing gaming PCs, or they are college students who don't realize they can buy both a better-performing entry-level gaming PC and a random laptop for less total money anyway. Desktop gamers will STILL be much better off buying a dual-core Pentium and a $100 video card than a $150 APU. Once we start talking about i3s and i5s and $200+ video cards, there's not even a comparison to APUs.

And to the person who mentioned that 8350s are more efficient than i5s at idle: I don't even care enough to dispute that claim, but seriously, who cares? Put a load on both and see what happens to your power usage. There's a reason one part is 125W TDP and the other is 77W.

I can truly say I'm as unbiased as they come -- I buy what makes sense and have no brand loyalty. I recommend to my personal friends parts I think will be best for them, putting my reputation on the line. I own parts from both major CPU makers and both major GPU makers. Right now, desktop gamers get the best bang for their buck at most budgets buying Intel CPUs and AMD GPUs. Next year one or both of those choices may change, but RIGHT NOW, that's the way it is.

People who come into the forums proclaiming FX-8350s or GTX 670s for gaming are either uneducated or biased; it's as simple as that. Will your games work? Sure, but you could have gotten more performance for the same money buying something else.
 


I agree that APUs are not a substitute for a discrete GPU + CPU combo, but for what it is, Richland's iGPU is impressive.



You make a valid point, though 80-90% of an average CPU's life is spent at or just above idle. I do not disagree: under load, the FX series is more power hungry.



I wouldn't say they get the most bang for their buck at MOST budgets; it's more like: if you're spending about $1200+ on a PC, Intel makes sense for performance, because at that point you don't have to sacrifice performance in other arenas. And if you have a huge budget for a PC, by all means buy whatever you want. I would buy AMD anyway, personally, to support further competition between the two, but I openly admit not everyone thinks or feels that way.



I disagree with the sentiment that the FX-8350 will not run any game well enough for MOST gamers; half or more of them don't get more than 60 FPS in most games, so the difference between 111 and 109-110 FPS is moot. They'll never notice the performance gain anyway, and the FX-8350 will not bottleneck any GPU on the market, so you'd be as well off with it as with most anything else.

 
Of course an FX-8350 won't bottleneck a GPU. That is not what I'm trying to convey.

The point is, an i5-3470 does the same thing for the same money using a lot less power (about 60% of the electricity), and it also excels in workloads that use four or fewer threads. It doesn't rely on high-speed memory, either, and low-end boards are cheaper.

As a consumer, I can't see any conceivable reason to buy an FX chip right now. They have a long way to go before being competitive again like they were with their Athlon and Phenom IIs.