Haswell or Piledriver

Solution
If you are planning to upgrade later on, Piledriver is an architecture AMD is committed to until 2015 (AM3+ socket), and that socket will get Steamroller and eventually Excavator. If you go with something from Intel, Haswell will be their new architecture, so they will commit to that for at least two years.

If money is at all a concern, go with AMD: for what you would spend on an Intel system, a comparably built AMD system will give you more processor and more GPU for the same money...

Also, the PS4 and Xbox 720 are both running AMD hardware, so future games will be optimized for AMD architecture anyway.


The CrossFire thing was my point. If some third-party tech whiz can get it to work, AMD can get it to work too; they just haven't done so for some other reason. I don't know about desktop Trinity, but the mobile versions (essentially higher-binned chips running at lower clocks) can CrossFire with the 384-core Mars (?) chip.

AMD has said nothing about the 7790, and really, their IGP performs worse than a 7750, so why would it CrossFire with a 7790? The fact remains that Richland is essentially the Trinity core with power optimizations; a 6% boost in IGP speed is going to translate into a 6% increase in GPU performance.

I don't know if you are trolling or just sadly misinformed, but you are really wrong on your facts. The 7790 is 896 shaders and the 7850 is a 256-bit bus. The Mars chip (geared primarily toward mobile) is 384 shaders.
 


No. The benefit is why the FX-8350 comes with a stock memory speed of 1866, with users reporting gaming benefits from using >1866 memory.



No. I said more than 15. The points I made are the ones you ignored. Tom's reported hardware problems with their AMD setup, including an inability to boot with certain memory configurations. That looks like a defective motherboard, defective modules, or a BIOS problem. Moreover, I posted additional doubts about the values obtained by their test. Specifically, the test I linked above reports an 11% larger bandwidth for the A10. I also doubt the "20+" for the i3 Sandy Bridge, especially since the claimed theoretical limit is 21. Here you can see Llano, Trinity, and the Sandy Bridge i3 compared using a more modern version of Sandra, and the AMD chips are on par with the i3:

sisoft-mem.png




No. The test was run at 1866 (zoom the image), and the >15 value in Sandra agrees with many other independent tests using the same memory speed. The benchmarks you posted above are for Sandy Bridge, not Ivy Bridge, and as I showed, Trinity is very close to an i3 when using a more modern version of Sandra.



And a 95% gain in minimum FPS, which is why I wrote "up to 95%". Increasing minimum FPS matters more than increasing average values, because it offers a more fluid experience.



You avoided my two questions.
 


I meant that the 7750 is 512 shaders; that was a typo and an honest mistake...

The 7790 is still positioned between the 7770 and the 7850, even if I missed the bus width by 64 bits... (Sorry, I was looking up an answer to an Nvidia question at the same time and I guess my wires got crossed; after looking I realized the Nvidia card is 192-bit...)
 


Integrated graphics are a substitute for discrete graphics in the same performance range and are absolutely needed in the mobile space.

Evidently, current APUs are competing at the low end of discrete graphics, but Richland and its successors will be competing at the higher end.

You also miss that the next consoles, such as the PS4, will be using an APU, unless you are part of that non-biased crew who believes that gamers don't play on consoles.



They are buying FX-8350 gaming PCs after reading reviews, Tom's CPU gaming hierarchy chart, and the opinions of FX-8350 owners, who are 99.99% satisfied.

Moreover, people use the FX-8350 beyond gaming: video editing, CAD, DNA sequence analysis...
 


Except that the FX is faster when it consumes more power:

http://openbenchmarking.org/embed.php?i=1210227-RA-AMDFX835085&sha=0c966a4&p=2

http://openbenchmarking.org/embed.php?i=1210227-RA-AMDFX835085&sha=6dc05fb&p=2

http://openbenchmarking.org/embed.php?i=1210227-RA-AMDFX835085&sha=b799806&p=2

http://openbenchmarking.org/embed.php?i=1210227-RA-AMDFX835085&sha=faec63f&p=2

http://openbenchmarking.org/embed.php?i=1210227-RA-AMDFX835085&sha=293f200&p=2

http://openbenchmarking.org/embed.php?i=1210227-RA-AMDFX835085&sha=f236ffc&p=2
 
None of those are gaming benchmarks, and therefore I don't care enough to dissect them when we're talking about which CPU is better for gaming purposes. My point stands.

You guys can dispute it all you want, but I expect RELEVANT benchmarks to back up claims that run contrary to nearly every PC hardware review site. The number of people using their CPU to crack passwords is far smaller than the number playing video games, which is already a small segment of the user population. If we talk about what's better for everyday users, Intel wins hands down because their chips use a lot less power and produce less heat. And if we are talking about cracking passwords, why would you use your CPU when GPUs do it much better anyway? There's a reason supercomputers are crammed with far more GPUs than CPUs: GPUs are better at parallel computing by the nature of what they're designed to do.
 


Stop with the RAM business. I'm using 1600 MHz RAM because it's cheaper. I know Bulldozer/Trinity supports 1866 MHz RAM, but that's irrelevant, because if you use faster RAM the Ivy Bridge controller also gets more bandwidth and the delta remains roughly the same. The fact remains that using faster RAM costs more and tilts the balance away from an APU and toward a CPU + discrete GPU. If you look at a scientific article you will see that in every experiment there is a dependent and an independent variable (I'M ONLY ALLOWED TO VARY ONE THING IN AN EXPERIMENT, AND IN THIS CASE IT'S THE CPU; IF I CHANGE THE RAM BETWEEN THE TWO THEN MY EXPERIMENT BECOMES INVALID). When we are looking at the MEMORY CONTROLLER we must use the same RAM to isolate memory controller performance. You can't perform a fair engine analysis if the two engines are using different gasolines.

The fact remains that AMD has a poor memory controller and needs faster RAM to make up for it. 15 is still less than ~20. You say that AMD's memory controller is good because it shows good scaling? That's different from what I'm trying to say, which is in terms of theoretical GB/s (this is not Sandra but rather what 1600 MHz or other speed RAM is capable of doing; theoretical bandwidth is ~25.6 GB/s for dual-channel 1600 MHz RAM). Operating at a lower efficiency will not change the scaling results as long as bandwidth is needed; the only difference is that instead of starting at a lower value and working up the graph, you would start at a higher value -- in other words, you would get better performance for a given RAM speed and so could get the same performance using 1600 MHz RAM instead of 2133 MHz RAM. (A quick back-of-the-envelope check of these figures is sketched at the end of this post.)

sandra%20memory.png


Quote from the article:

No surprise—Intel’s dual-channel memory controller dramatically outperforms AMD’s best effort.

That same article also shows the 3770K, which gets 24+ GB/s (it's Ivy Bridge, not Sandy Bridge). The FX series gets better bandwidth than Trinity, which is unusual because Trinity needs it just as much (possibly related to the L3 cache [and controller?], which Trinity does not have).

sisoft-memr.png


Despite being validated only for 1600 MHz RAM, you can use 1866 MHz RAM with Ivy Bridge. This is an older version of Sandra, but it shows my point.

mem%20scaling%20sandra.png


Weird how bandwidth goes from 13.27 to 21 GB/s -- a 58% increase -- when the memory speed increased by only 1600/1066 ~= 50%.
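
To make the arithmetic I keep referring to explicit, here's a rough sketch in Python (it assumes plain dual-channel DDR3, i.e. 2 channels x 64-bit, and uses the figures quoted in this post):

```python
# Rough sketch: theoretical dual-channel DDR3 peak bandwidth and the observed scaling.
def theoretical_bw_gbs(transfer_rate_mt_s, channels=2, bytes_per_transfer=8):
    """Peak bandwidth in GB/s, assuming dual-channel DDR3 (64-bit = 8 bytes per channel)."""
    return transfer_rate_mt_s * 1e6 * channels * bytes_per_transfer / 1e9

print(theoretical_bw_gbs(1600))  # ~25.6 GB/s, the theoretical figure quoted above
print(theoretical_bw_gbs(1866))  # ~29.9 GB/s

# Observed Ivy Bridge numbers from the Sandra graph above: 13.27 GB/s -> 21 GB/s
measured_slow, measured_fast = 13.27, 21.0
print(f"bandwidth gain: {(measured_fast / measured_slow - 1):.0%}")  # ~58%
print(f"memory speed gain: {(1600 / 1066 - 1):.0%}")                 # ~50%
```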
 
Trinity is not listed in your Sandra benchmarks there... only Zambezi and Vishera.

Other independent benchmarks show that AMD chips do gain a bigger percentage increase in performance than Intel does from increased memory bandwidth... that would have nothing to do with the memory controller, because theoretically, if Intel's memory controller were so much more efficient, Intel would gain the bigger percentage from extra bandwidth up to their maximum supported memory speed (either 2133 or 2400).

While you can have more than one variable in an experiment, you need a control setup to be effective and to know what you're measuring against. So what's your control? The motherboard AND CPU are completely different architectures in any AMD vs. Intel benchmark you run... what if you got a bad board for one and not the other, and it wasn't running to its full potential? Since an AMD chip won't fit in an Intel socket or vice versa... you cannot eliminate all but one variable. In fact, in PC benchmarks you usually have a minimum of 3-4 variables; the people running the benches just don't tell you...

For example:

They don't rebuild every Intel PC every time with the same PSU from the AMD machine they test side by side with it...

So, according to you, every benchmark uses the EXACT same hardware for both tests... however, they do not... they use the same MODEL of hardware and the same BRAND, so the comparison is VALID... but scientific it is not. These parts are never literally the same in any benchmark comparison between two CPU manufacturers, and miraculously, no web sites claim that they are; they just claim the SKU or brand/model are the same down to the specific model:

(1) PSU
(2) CPU
(3) MoBo
(4) HDD

You were talking about one variable... there are four right there... and we're not even discussing the rest of the hardware...

Side-by-side comparisons typically do not share all the same components... call it unscientific if you want... OK, it's not scientific... no one claims it is. Except for a bunch of guys running around singing praises about how Intel is so great because XYZ benchmark says 3 FPS MOAR!!!! RAR!!!

Really? I mean, come on... really? 75%+ of the people on these forums have a 60 Hz monitor... they cannot possibly notice the difference between 97 and 99 FPS... so stop acting like that matters at all for someone building a $500 budget gaming rig!

BTW, you can't even build a $500 budget i5-3570K rig that's worth having... the CPU and motherboard eat up $360, leaving you enough money for a dirt-cheap PSU and an HDD... maybe... if you go dirt cheap. No case, no GPU, no sound card, no RAM...

Wow! That $500 budget gaming i5-3570K build looks like it's really about a $1000 budget gaming build, doesn't it? It is, if it's worth spending the money on an i5 at all. Otherwise you're better off saving $50 on the CPU and another $20 or so on the board and buying a better GPU... I can build a budget FX-8350 rig under $600.00 that is worth having (as long as you supply peripherals... assumed on the Intel build as well). You can't build an i5-3570K rig for under $600 with a graphics card that's worth anything at all...

So let's see, using PCPartPicker, how these builds look:

http://pcpartpicker.com/p/NsNw

CPU - AMD FX8350
MB - MSI 970A-G46
RAM - Corsair 4 GB DDR3 1600 MHz
HDD - Seagate 1 TB 7200 RPM 32 MB Cache
GPU - PowerColor Radeon HD 7770 GHz Edition 1 GB GDDR5
CASE - Zalman Z9
PSU - COOLERMASTER GX 450 W 80 PLUS Bronze
CDD - Asus 24/48x DVD/CD burner

$517.86 total, including shipping.

For Intel, the same components, only with the MSI equivalent Intel board (Z77A-G45... the AMD board is the 970A-G46) and an i5-3570K:

CPU - Intel i5-3570k
MB - MSI Z77A-G45

http://pcpartpicker.com/p/NsNw

$609.87

Now, I don't know about you... but nothing Intel makes is worth spending over $90 more on vs. an FX-8350... for any reason... the performance difference is not worth $90, and no one, no matter what they show me, will prove otherwise to me. For the difference in cost between the two builds, I would much rather have an 8350 + HD 7850 2 GB than an i5-3570K and an HD 7770 GHz Edition 1 GB.

Would anyone argue this with me? At this kind of budget level, the GPU matters VERY much.

Don't anybody come in here and start giving me rubbish about how you could step down on the chip to get the better GPU now and all that, either... this is an apples-to-apples, performance-equivalent (more or less) build with the same components that could be used for both, to show the cost difference.

When you're building a $1000 gaming rig, $100 isn't that big a deal... but if someone only has $500 in their pocket, coming up with another $100 is going to be harder.

So I ask you to keep in mind who you're talking to when giving advice, because with the 60 Hz monitors that 90% of these people have... between those two machines I built... they won't even be able to see the $90 difference anyway.
 


I agree with most of what you are saying. AMD leads in performance/$, especially with the 6300 (an amazing deal for a budget build). I realize there are many variables in benchmarking, but the goal is to try to eliminate them as much as possible, and obviously we are not doing that with different-speed RAM. There is very little difference from the HDD or PSU affecting gaming performance (or any performance in general, because as long as we have enough power the system should be fine, and most tasks -- games, encoding -- are not I/O-bound in these benchmarks, which are done with an SSD); the motherboard usually doesn't affect anything other than power consumption, boot times, and possibly overclocking. This is the margin of error, and it is something I feel review sites do not put enough emphasis on (many say they take the average of three rounds; they should be putting error bars on the charts to indicate this). You probably need at least a 5-10% margin of error to account for these factors. I believe that review sites take care not to publish benchmarks with a bad board.
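
To illustrate the error-bar idea, something like this is all I'm asking review sites to do (a minimal sketch in Python; the fps numbers are made up purely for illustration, not real benchmark results):

```python
# Minimal sketch: report the average of three rounds together with the spread,
# so readers can see whether a gap between two systems is inside the noise.
from statistics import mean, stdev

runs = {
    "System A": [61.2, 63.8, 60.5],  # hypothetical three rounds, fps
    "System B": [64.1, 62.0, 65.3],  # hypothetical three rounds, fps
}
for name, fps in runs.items():
    print(f"{name}: {mean(fps):.1f} fps +/- {stdev(fps):.1f}")

# If the averages differ by less than the combined spread (or the ~5-10% margin
# mentioned above), the comparison shouldn't be called a clear win either way.
```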

The MSI board is listed for the 8320 but not the 8350 (I don't know if the 8350 will actually be a problem, but possibly; note: "your CPU does not match the item in the basket"). Personally I wouldn't risk it, as overclocking an 8320 is going to run into the same problem. I would recommend a 990-series chipset too. There are cheaper Z77 boards as well. (Note: multiple reviews seem to suggest that the board you suggested cannot deliver enough power to the chip.)

The i5-3570K is $219 vs. $179 (on Newegg there is a $20 difference between the two -- $40 if you include the gift card). If you are near a Microcenter, the i5 is almost the same price ($189). While AMD boards are generally cheaper, the 8350 draws a lot of power and is generally going to require a higher-quality board (just not something people are going to feel comfortable running on a budget board) and PSU.

The 6300 vs. i5 comparison falls fairly firmly in the AMD camp and is a great recommendation (with your build I'd swap the 8350 for a 6300 and get a 7870 if possible).

I agree that on a budget AMD is very, very good; however, if you have enough to get the i5, then go for the i5 every time.

Edit: Forgot the first part. Yes, AMD is going to show a bigger difference in things that require memory bandwidth, because they have less of it, so it becomes more important. My graphs were just there to show that Ivy Bridge routinely gets over 20 GB/s with 1600 MHz RAM.

Trinity scales with bandwidth, but not as well as Ivy Bridge.

SandraMemBand.png


mem%20sandra.png


Unlike with Intel, doubling the memory speed does not double Trinity's bandwidth.
 
I am not saying that those specific builds are good ones; I just picked parts that would be compatible both ways, and in the case of motherboards, I picked a budget board whose model was easily identifiable with its twin on the other side...

My personal recommendation would be a more expensive board for both, but to hit $500.00 I had to go as budget as I could, even on the motherboard.
 


1) There are many Windows gaming benchmarks out there. The FX-8350 is on par with an i7-3770K in some current games and does poorly in others. Most of those Windows games are poorly threaded and use only about 50% or less of the real potential of the FX chip.

2) Tom's has a CPU gaming hierarchy chart where the FX-8350 is placed near the top, and they emphasize that you will not notice a difference in overall gaming performance under Windows from upgrading to an expensive i7-3770K. Claiming that the FX is bad for gaming only reflects your bias.

3) The openbenchmarking results above show the real performance of the FX chips because they use all eight cores of the chip. They show how the FX-8350 can be faster than an i7-3770K.

4) The next consoles will be using eight-core chips from AMD, so expect an entire generation of new games optimized for eight cores. The eight-core FX-8350 will run those new games very, very fast.
 




1) It is not irrelevant that Bulldozer/Trinity supports 1866 MHz RAM, because the chips are designed to use that memory speed at stock. More on this below.

2) Yes, you can run 1866 memory on your Ivy Bridge, but that means overclocking. I wonder how many people find it fair to compare an AMD chip at stock speed with an overclocked Intel processor just to prop up their "Intel is better" mantra.

3) The price differences between memory modules are small: for instance, 1866 memory is priced at 70€, whereas 2133 memory is at 78€ in my country. You cannot buy any discrete GPU with 8€ ($10).

4) The fact that an APU gives a better price/performance ratio is the reason why the next consoles will be using AMD APUs.

5) Your methodology is seriously flawed and far from scientific. I already discussed this before and gave three examples of scientific tests: one of memory, another of CPUs of the same architecture, and a third of CPUs of different architectures.

Your car example is good because it reflects your misunderstanding. Some of you pretend that a comparison between an 85-octane engine and a 91-octane engine should be made with the same gasoline. It is nearly as ridiculous as comparing a diesel engine with a gasoline engine using the same fuel in both.

6) The 15-17 GB/s of Llano/Trinity are very close to the 18 GB/s of the Sandy Bridge i3. And the 21 GB/s of Piledriver is very close to the 24 GB/s of the Ivy Bridge i7-3770K. You cannot pretend to compare AMD chips at stock speeds with overclocked Intel chips.

7) You repeat the same Sandra benchmark figure, adding a quoted comment from the article, but you fail to notice that they had trouble with the AMD motherboard and the modules used in that comparison (failure to boot, inability to select certain memory configurations on the AMD side). More modern tests give more reliable results and show that Trinity/Llano is on par with the Sandy Bridge i3. I gave those: look at them.

8) Your interpretation of the memory speed effect on the i7-3770K fails to consider two important points. First, both Trinity and the FX chips are more sensitive to memory bandwidth than the i7. You cannot claim that the i7 has a better memory controller while showing poorer scaling than the AMD chips.

What you call a weird effect is not that weird. The i7 is designed for a stock memory speed of 1600. By underclocking it to 1333 or slower memory, you are not scaling the entire design/architecture, and for this reason you obtain poorer performance than you would expect in terms of lost memory cycles. This is related to why you do not get 33% more performance from a 33% memory overclock, except that the effects of underclocking vs. overclocking are complementary, as corresponds to moving below or above the stock speed the chip was designed for.

In any case, this proves one of my points in this thread. Many AMD-Intel comparisons on the Internet are biased because, among other deficiencies, they compare Intel chips at stock speeds against AMD chips with underclocked memory.
 


You are nitpicking over the smallest things.

Because the AMD APU is rated for up to 1866 MHz RAM does not mean that the APU "NEEDS" 1866 RAM. Furthermore, these APUs are aimed at budget systems. With a $130 APU (that's the most expensive one), no one is going to spend more money than they have to on RAM. You can buy an Ivy Bridge (not Sandy) with cheap 1333 MHz RAM and it will not affect performance; however, it will affect the performance of an APU. There is a rather significant difference of ~$20 between 8 GB of 1333 MHz and 8 GB of 2133 or 1866 RAM. This is what I mean by irrelevant. How many Trinity users are using 1866 MHz RAM over 1600 MHz RAM? I'm betting that every single store-bought system uses 1600 MHz RAM (probably lower) and a good many of the non-enthusiasts' systems do too. Really, AMD has a bad memory controller, just like their FX series has cache problems. If they fix this, their chips will improve. I don't see why you are arguing this; you yourself state that this is so.

"Repeat my Intel is better mantra" -- really now. I will say that for budget gaming the A10-5800K is a good CPU + IGP and one of the best you can get at that price point. The FX-6300 is a really good CPU and on a budget is preferable to an i5 (cost-wise). Intel does have massive leads in perf/watt; that's just how it is.

The reason AMD chips scale better with bandwidth is that they have less of it. If you have enough, then there won't be any difference in performance when changing to a faster speed.

"Your car example is good because it reflects your misunderstanding. Some of you pretend that a comparison between an 85-octane engine and a 91-octane engine should be made with the same gasoline. It is nearly as ridiculous as comparing a diesel engine with a gasoline engine using the same fuel in both."

Really, I knew you were going to say this. Stop nitpicking. You try to eliminate differences in an investigation. A diesel engine would be a POWER7 or ARM chip in this case. The idea is sound. Two "similar" (I don't think you get this part) engines, when tested, will be tested under the same circumstances. If one requires a different operating environment, then that counts against it. (If engine one requires more expensive gas but performs better than engine two, both have to be taken into account: better performance at a higher price. If, however, with the less expensive gas, engine one is worse than engine two, then this also has to be taken into account.) Please don't look for stupid holes in this. The general idea is there, and if you have a problem, please point it out against the GENERAL IDEA.

Can you please show me that awesome memory scaling of Trinity's? Because even from 1600 MHz to 2133 MHz, Ivy Bridge shows good scaling (about a 30% increase in bandwidth for a 33% increase in frequency).

The only thing I can find (I haven't really looked) for Trinity is this, and it doesn't look very good.

SandraMemBand.png





For jaunrga

And the next-gen consoles are using weak-as-crap 1.6 GHz Jaguar cores (probably one will be reserved for the OS, and possibly one for Move/Kinect?). Any quad-core Sandy or Ivy is not going to have any problems keeping up with that. Do you think game developers are going to let a huge section of their market (the people with really high-end systems) run these games poorly? No. Probably when they do the PC port they will fold two of the Jaguar threads into one thread for the PC, and you will end up with a 3-4 core game (because few people have slow 8-core CPUs, and running a game that uses a huge number of threads on a dual-core part can actually slow down performance -- and devs don't want to screw over potential customers). An x86 core is an x86 core; the only way games can favour AMD (to any significant extent) is if they are specifically geared for the module-based FX architecture (which Jaguar isn't). The GPU, though, is different and should show a strong bias toward Radeon.


Both companies could improve. Both have areas that need fixing. Intel needs a better IGP (and drivers) and to show better improvement per generation. AMD needs to fix its perf/watt (because, ignoring the IGP, it's losing horrendously in mobile in terms of performance). AMD can do this by fixing the memory controller (because it's the crux of the APU) and, in the FX series, the cache performance. Intel needs to lower prices (or at least offer more performance per $). I'm not for either company, but I'll judge the merits of each. There is nothing wrong with saying that AMD has a great product for its price but that it could be better if they fixed some things.
 


1) What you say about HDDs is not correct at all. In fact, there are reviews of recent Trinity-based desktop PCs from a well-known brand that performed very badly (worse than expected) in many benchmarks. Biased sites are blaming AMD again, but one of the sites found the problem: the PCs ship with a slow 5400 RPM notebook HDD! Once the HDD was replaced, the PC started to benchmark just as expected. The site ended up recommending that people not buy the full PC, but buy one of the barebones instead and add a fast 7200 RPM desktop HDD or an SSD.

2) There are review sites that use defective boards or select a board that they know will perform poorly in a given test when comparing against the brand that they promote. Google "biased hardware review".

3) In your comparison of the Ivy Bridge i7 and Trinity A10 memory subsystems, you are comparing two chips targeted at completely different price/user ranges. Sandra is a synthetic benchmark and does not reflect real-world performance, especially on AMD chips.

mem%20scaling%20skyrim.png


SkyrimLow.png


Thus the AMD A10-5800K beats the Intel i7-3770K by nearly a 5x factor... It is also possible to find people reporting about a 20% increase in game performance from using faster memory on AMD FX chips...
 


The HDD is not going to affect anything other than game loading times. Going from an HDD to an SSD in a notebook with a 680M or 7970M, you will not notice a difference in fps. Besides, if both systems use the same model of HDD, I think we can both agree that the systems are pretty much as identical as they can be (which is what I think your point is). As long as the components are the same make and model, the review holds (really, any difference is minor and within the margin of error).

I don't see the 5x factor anywhere. Using the 1866 MHz RAM numbers (because anything faster is really pointless and cost-prohibitive): HD 4000 = 30.88 fps, A10 = 47.1 fps; 47.1/30.88 makes the A10 roughly 53% faster.

Sorry about the graphs; it's hard to find the right ones. I think, however, that the memory controller of Ivy Bridge is fairly standard across the range.

Here is the Pentium G850 memory bandwidth at 1333 MHz (my current non-overclocked PC):

Sandra-Memory-Bandwidth.png


It's the same as (or perhaps a bit better than) the i7 at 1333 MHz, but definitely in the same range.

mem%20scaling%20sandra.png
 


1) You must believe that memory speeds are selected during the design phase of a new chip by flipping a coin, but no. Intel chose a 1600 memory speed because that fits their design better. AMD chose a different speed because that better fits their design.

Evidently you can use other speeds if you want. Nobody is stopping you from running an i7-3770K on 800 MHz RAM, but...

2) In my country 1600 RAM is less expensive than 1333... and going from 8 GB of 1600 to 8 GB of 2133 is almost free: $5. So you continue making a big issue where there is none...

3) It is untrue that AMD chips have bad bandwidth, and thus your conclusion is wrong. Llano/Trinity have the same bandwidth in SiSoft Sandra 2013 as a Sandy Bridge i3. And the FX-8350 has nearly the same bandwidth as an Ivy Bridge i7.

4) You continue to ignore what I said about how one compares two CPUs of the same architecture (probably this corresponds to what you mean by "similar", but who knows) and what I said about how one compares two CPUs of different architectures.

5) Unfortunately, the synthetic test you quote does not reproduce real-world performance. In my other answer I show non-synthetic benchmarks comparing Trinity to Ivy Bridge.

6) I do not understand all the fanboys' hate against Sony and Microsoft. Maybe it is because they chose AMD. At the recent GDC 2013 (Game Developers Conference), Sony gave more info about the PS4 and explained how what you call "crap 1.6 GHz Jaguar cores":

will be able to produce a higher quality of games than some high-end systems

7) Some fanboys must be worried because their high-end four-core PCs will be unable to play those games, but it seems evident that the eight-thread games developed for the console will be ported to eight-core PCs, unless you believe again that Sony and AMD chose an eight-core APU by flipping a coin. And, of course, you are ignoring that AMD has already announced that they will be selling a version of the console APU for the PC market.

It seems evident that an eight-core FX will benefit from the existence of native eight-thread games. Precisely the problem of the FX with current four-thread games is that they only use about 50% of the chip.
 


1) No. HDDs also affect framerate in some games; WoW is often cited as an example. Slow HDDs also cause stuttering.

2) Intel scaling: 4%. AMD scaling: 19%. AMD/Intel factor: 19/4 ~= 5. The AMD chip shows about 5x better scaling thanks to its memory subsystem.
 


No, this is wrong. Intel has a much weaker IGP, and a weaker IGP needs less bandwidth. Intel has slightly more bandwidth. It's like the 7970 vs. the 680: the 680 shows weak gains when overclocking if you don't also overclock the memory, because it is memory-limited; the 7970 has much more memory bandwidth, so it shows better scaling when overclocking the memory alone. Think of the limiting reactant in a chemical equation (i.e., if you are baking cakes and each cake takes 4 cups of flour (shader power) and 4 eggs (bandwidth), and you have 20 cups of flour but only 4 eggs, how many cakes can you bake? Only one). AMD is limited by bandwidth (eggs), so as you give the APU more bandwidth it gives better performance (more cakes). Intel has a lot of eggs but little flour, so giving it more eggs doesn't really help, because it needs more flour.
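
If it helps, here's the cake analogy in code form (a toy model only; the numbers are illustrative, not real shader or bandwidth figures):

```python
# Toy model of the limiting-factor idea: each frame ("cake") needs both shader
# work ("flour") and memory bandwidth ("eggs"); output is capped by whichever
# resource runs out first.
def cakes(flour_cups, eggs, flour_per_cake=4, eggs_per_cake=4):
    return min(flour_cups // flour_per_cake, eggs // eggs_per_cake)

print(cakes(flour_cups=20, eggs=4))   # 1 -> lots of flour (shaders), egg- (bandwidth-) limited
print(cakes(flour_cups=20, eggs=12))  # 3 -> adding eggs (bandwidth) keeps helping AMD's IGP
print(cakes(flour_cups=8, eggs=40))   # 2 -> Intel's case: more eggs alone wouldn't help
```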


Also, sorry, I have no clue which country you are in, but in Canada the cheapest 8 GB 1333 MHz kit is $42 (Team Elite -- don't know this brand, but whatever). The cheapest 1600 MHz kit is $46 (Team Xtreme). The cheapest 1866 MHz kit is $59. The cheapest 2133 MHz kit is $59. That's a difference of around $17 (the 2133 RAM is on sale; yesterday it was $75).
It's a small amount, but for a budget build, depending on the person, it may be significant.


I don't know how many times I have to post a memory bandwidth graph, but you yourself have said that the Sandy Bridge i3 gets around 17 GB/s with 1333 MHz RAM (and I have provided the graph). Trinity gets around 15-16 with 1866 RAM. AMD has a poorer memory subsystem.
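
A rough way to frame that, reusing the same back-of-the-envelope peak-bandwidth formula (the measured figures are the approximate ones quoted above, so treat this as a sketch rather than a measurement):

```python
# Back-of-the-envelope efficiency: measured Sandra bandwidth vs. the theoretical
# dual-channel peak for the RAM speed each chip was tested with.
def theoretical_bw_gbs(transfer_rate_mt_s):
    return transfer_rate_mt_s * 1e6 * 2 * 8 / 1e9  # 2 channels x 8 bytes per transfer

for name, measured_gbs, ram_mt_s in (("Sandy Bridge i3 @ 1333", 17.0, 1333),
                                     ("Trinity A10 @ 1866", 15.5, 1866)):
    peak = theoretical_bw_gbs(ram_mt_s)
    print(f"{name}: {measured_gbs} / {peak:.1f} GB/s = {measured_gbs / peak:.0%}")
```

On that rough accounting, the Intel controller extracts a much larger share of its theoretical peak, which is the point I'm making about the memory subsystem.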

" Some fanboys must be worried because their high-end four-core PCs will be unable to play those games, but it seems evident that the eight-thread games developed for the console will be ported to eight-core PCs, unless you believe again that Sony and AMD chose an eight-core APU after launching a coin. And, of course, you are ignoring that AMD has already announced that they will be selling a version of the console APU for the PC market."

Really. Its a 1.6 ghz jaguar core. Its not that powerful. Frequency wise any 4 core 3.2 ghz cpu is going to be able to keep up, especially when you consider jaguar has lower ipc than ivy bridge/sandy bridge (and the fact that one core will probably be reserved for the OS, sony has stated that teh machine is capable of background tasks and you will be able to pause the game and do other things). I highly doubt that devs will try to screw over people with high end systems because they are the ones most likely to buy the games. How many 8 core pcs are there? Very few. Devs to not try can screw over their customers because then then lose money. I don't 'hate' sony and microsoft and think that the jaguar chip was probably their best bet. I don't agree with the fact that everyone is going to need 8 cores because the jaguar chip is, not matter how you put it, a fairly weak chip. jaguar is also a different architecture than piledriver so specific optimizations (module architecture) will not be carried forward there to the FX series. Furthermore, I expect devs will take a while to get used to the architecture (like the xbox 360) and the first round of games will not be extracting all the performance from the hardware (though this process will go faster than the xbox or ps3 because it is x86). By that time new cpus will already be out. The console version of the APU is severely cut down, amd has said this themselves.


"It seems evident that an eight-core FX will benefit from the existence of native eight-thread games. Precisely the problem of the FX with current four-thread games is that are only using about a 50% of the chip."

Agree completely. What I'm trying to say is that a port from next gen console to FX is going to load the 8 cores (probably 6-7 not 8 because one for OS possibly one for kinect) at less than 40% each. The devs are going to say to themselves "Why not just double this thing up and load 4 cores at 80%? Its better for our target market." This is what they will do.

Can you please show me a graph with trinity keeping up with ivy bridge at the same memory speed in memory bandwidth?
 


Sorry, but your comments come off as fanboyish. For general usage and gaming, I highly doubt you will see much difference on your power bill. You would have to be doing something like F@H 24/7 to see any significant difference. Most of the time, other than when gaming, the CPU isn't going to be working much. The CPU does little during general web browsing. Yes, it uses more power at load -- a little under 40 W more than an i7-920 at heavy load. At idle, the power difference is roughly 15 W more than a 3770K. Most standard light bulbs are 40 W or higher. I don't recall a 40 W bulb with only nightly use ever making an increase in my power bill.

If spending is tight, it makes more sense to go with an AMD platform and a better GPU for a gaming rig. I have a 3570K and it is a great chip. I would probably have gone with an FX-6300 had Vishera been out when I bought it. Shopping at Microcenter, I could have gotten a combo of an FX-6300 and an MSI 970A-G46 for half the cost of what I spent on my i5 combo.
 


Agreed; for the majority of users, the difference between an 8350 and an i5-3570K is going to affect your yearly electricity bill about as much as the difference between one CFL and a standard incandescent bulb.

 
"Some fanboys must be worried because their high-end four-core PCs will be unable to play those games..."

There is no way you typed that with a straight face.

You can call me a fanboy all you want, but my 2-year-old i5-2500K + CrossFired 5850s is not going to struggle with any "next-gen" console port running at the same settings. These consoles aren't going to revolutionize anything.

If slow 4-core / 8-thread CPUs are so much better than fast 4-core / 4-thread CPUs, why doesn't AMD just make even slower 16 or 32-thread CPUs? Because it doesn't work that way.

You can't say 8-threads on chip A are better than 4-threads on a different-architecture chip B any more than you can say 3.5GHz is always faster than 3.0GHz. There is much more that goes into CPU speed than thread counts and clock speed.

And to clarify, AMD is not making 8-CORE chips any more than Intel's i7s are 8-core. They are making 4-core / 8-thread chips. If a "core" is sharing logic resources with another "core", it's not an individual core -- it's a processing thread.
 


1) No. After making wrong claims about the AMD memory controller, you asked:

Can you please show me that awesome memory scaling of Trinity's? [...]
The only thing I can find (I haven't really looked) for Trinity is this, and it doesn't look very good.

Then you trotted out a synthetic test showing only about a 10% memory scaling, pretending that was the real scaling. As shown to you, real-world performance scales by about 20%, and you cannot feed 20% more data to the IGP if the memory controller is limited to delivering only 10% more bandwidth. What happens is that the synthetic test does not represent real-world performance. I said this to you. Tom's says it to you as well:

Sandra attempts to capture theoretical performance, rather than real-world differences.

2) In your country things can be different, but they are not representative of the rest of the planet. Moreover, I gave you prices using the same brand (G.Skill) and the same series (Ripjaws).

3) They used the same "2 x 4GB GSkill DDR-3 1866 @ 9.10.9.27" for both the Trinity and the i3. Moreover, they used a Z77 motherboard for the i3, so expect some additional gain from that... In any case this is rather irrelevant per point (1).

4) Except that the Jaguar cores will not be working in empty space but in the real world, surrounded by an adequate architecture/technology that exploits all their potential. Just a few days ago, Sony said at GDC 2013 that this Jaguar-core console will play higher-quality games than some high-end PCs.

Some Intel owners don't like that kind of news, but don't kill the messenger!

5) It seems evident that the new eight-core games for the console will be ported to eight-core PCs, especially when a version of the console APU will be sold on the PC market. Optimization for eight cores does not imply optimization for a specific architecture such as Piledriver or Jaguar.

You also ignore that Sony is providing PlayStation 4 development kits based on an eight-core AMD FX series. Developers will not be using an 8-core FX PC to produce an 8-core console game and then porting it back only to 4-core Intel PCs; maybe some will, but you would not expect that to be the rule...

6) Maybe you don't need an eight-core PC, but others do. There are lots of eight-core PCs out there and there will be many more. And devs will be developing for those (in fact, they already are).
 


AMD makes chips with eight physical cores. Intel makes chips with four physical cores and four virtual cores. The physical cores give true scalability, unlike the virtual cores used by Intel:

corescalability3.jpg
 


The synthetic test is good to use because it shows the maximum practical bandwidth (assuming Sandra works, which it does). In actual tasks the bandwidth is always less than or equal to it. So the i3 may have less bandwidth in practice, but the A10 will never have more. Give me a graph that shows Trinity's memory scaling. The Pentium I linked gets 17 GB/s using 1333 MHz RAM.

Sony can say whatever they want, but it's vapourware until we see it. Yes, the PS4 will probably compete with GTX 570-level computers (because of the GPU). And yes, they would have devs code on an 8-core FX platform, because their chip wasn't finished yet (it probably is now, but when they handed out the dev kits the chips were still in progress; the same thing happened with the Xbox 360 GPU and the Wii U GPU).

Jaguar is a 15% IPC increase over Bobcat. Bobcat is terrifically slow. Jaguar is completely different from the FX series (the core size is much smaller).

34103.png


It's going to be about five times that speed (the E-350 is two Bobcat cores, plus the IPC boost). A 3570K is about four times faster (11.3 sec) on the same test, making it about 70% faster.

http://www.anandtech.com/bench/Product/328?vs=701

Multiply the E-350's result by five. The thing is pretty slow no matter how you cut it.

Yeah, no one is going to be developing for 8 cores anytime soon for the majority of games. How many games are still dual-core? (And the Xbox 360 CPU is tri-core.)

Things in my country aren't representative of the rest of the world, but neither are they in yours.

I'm not sure where you are getting that RAM spec, but here: http://www.tomshardware.com/reviews/a10-5800k-a8-5600k-trinity-apu,3241-2.html

G.Skill 16 GB (4 x 4 GB) DDR3-1600, F3-12800CL9Q2-32GBZL @ 9-9-9-24 and 1.5 V

sandra%20memory.png


There are no motherboard problems in this review. I'd say the margin of error is around +/- 5%. And no, Trinity scales almost linearly in games for two reasons: it needs more bandwidth (its IGP is much stronger than the HD 4000) and it gets less of it -- not because AMD has a great memory controller.
 


1) The synthetic test attempts to measure theoretical performance, not real-world performance. In any case, Trinity has a bandwidth similar to the i3 (Sandy) and the FX-8350 has a bandwidth similar to the i7 (Ivy) in that synthetic test, as I showed you before.

2) What a Sony representative, with name and surname, says at the Game Developers Conference 2013 is more realistic than what an anonymous poster (who has no access to the real thing) writes on a forum.

3) The Photoshop figure that you link is from a preview done on an ES platform that used lower-clocked DDR3 memory running at 1066 MHz.

4) The AMD chip used in the PS4 is an innovative unified design. You cannot compare it with an older design such as the i5-3570K in the way that you are doing. Precisely this point was also remarked upon at the recent GDC 2013:

It's not just about an x86 solution, but it's about that Jaguar APU where it's a combination of the graphics and CPU together and being able to create something that's greater than just putting an x86 PC-like architecture together

[...]

For us, really by looking at that APU that we designed, you can't pull out individual components off it and hold it up and say, 'Yeah, this compares to X or Y.'

It's that integration of the two, and especially with the amount of shared memory [8GB of GDDR5, 176GB/s raw memory bandwidth] that Sony has chosen to put on that machine, then you're going to be able to do so much more moving and sharing that data that you can address by both sides.

5) Nobody said that eight-core games will be ready in two months, but it is evident that eight-core games will be developed for both the PS4 and eight-core PCs.

Your comparison with the old PC gaming ecosystem is not valid, because that ecosystem was not targeted towards eight-core PCs.

The old Xbox 360 used a tri-core CPU based on the Cell chip's cores, and the Cell was a complete nightmare for game developers. E.g., Quake developers blamed Microsoft for their choice of hardware in the Xbox 360. These days, Quake developers are praising Sony for their choice of an AMD eight-core APU.

6) The "2 x 4GB GSkill DDR-3 1866 @ 9.10.9.27" modules were used for both the Trinity and the i3 in the Sandra SP5 test that gives virtually the same memory bandwidth for both chips. The memory modules that you cite now were used with an older software version in another comparison, one where the AMD Trinity was run with underclocked memory.

Moreover, the figure that you link contains Trinity/Llano data which was copied and pasted from a previous preview:

sandra%20memory%20bandwidth.png


which was done with a motherboard that could not boot at some speeds and with memory modules that did not accept some manual configurations. I already said this to you.

In any case this is rather irrelevant per my previous comments about the relationship between this synthetic benchmark and real-world performance, where Trinity can get up to twice the FPS with faster RAM.