Haswell or Piledriver

Solution
If you are planning to upgrade later on, Piledriver sits on an architecture and socket (AM3+) that AMD has committed to until 2015, and that socket will get Steamroller and eventually Excavator. If you go with something by Intel, Haswell will be their new architecture, so they will commit to that for at least 2 years.

If money is at all something you're concerned about, go with AMD: a comparably built AMD system will have more processor and more GPU for the same money you would spend on an Intel system...

Also, the PS4 and XBOX 720 are both running AMD hardware, so future games will be optimized for AMD architecture anyway.


I'm not BASHING AMD. I'm providing CORRECT information to people who ask questions. I like AMD just fine -- I even prefer their GPUs over Nvidia's -- but their CPUs are just not as good for gaming as similarly priced Intel CPUs, and saying otherwise is misleading.

Tek Syndicate is not a valid review site because their results run contrary to everyone else's. Those dudes are obvious fanboys who don't run objective tests.

If you can provide benchmarks where AMD "destroys" Intel in gaming CPU reviews, please do so. Until you do, stop spreading lies.
 


I don't see any of these benchmarks that Overclock.net has "a lot of". Would you provide some links please? Intel Core i5's usually beat out AMD chips in gaming by a significant margin. In heavily threaded applications AMD usually wins out against i5's. Everything I've read goes along with what hapkido says, but I'm interested in reading reviews that prove otherwise.
 
@8350, again you say things like AMD WINS OR DESTROYS INTEL!! What have I been telling you for many posts now?
It's not ABOUT THE COMPANY, IT'S ABOUT PERFORMANCE. Stop messing with people. How old are you?
To tell the truth, the 8350 is not the best CPU ever made! Relax and return to reality! Also, stop insulting people!
 


Gennaios...just stop...you want me to take the abuse from Intel fanboys and not show you REALITY? That's asking too much...

@kindredsouls:

http://www.overclock.net/t/1318995/official-fx-8320-fx-8350-vishera-owners-club

Have at it...there's a graph...the end is somewhat cut off, but you can make out the Cinebench scores anyway...the highest is 8.5 as far as I can tell. If you want to, go ahead and read through the guys' posts; there are images in there showing benchmark screenshots, etc.
 


In a few titles there is a significant margin. In most games, you wouldn't even know the difference unless there was an FPS meter showing you what you are getting. From a budget standpoint, AMD is the way to go. You can get a very capable CPU for less, and AMD boards are generally much cheaper than a similarly equipped Intel board. Going the AMD route then allows you to get a much faster GPU than if you had to budget around an i5 platform. An FX 6300 and MSI 970A-G46 would cost me roughly half of what my i5 and Z77 Extreme4 did while still having all the features that are important to me, mainly support for either CF or SLI. That figure is based on Microcenter prices after all discounts.

I went with the 3570K because, at the time, it was the most convenient. My file server had a motherboard failure, so I put my i5 2400 and P67 Extreme4 in it. I didn't want to swap it back out and didn't want to downgrade to a Phenom II or i3, so I took advantage of the 3570K and $50-off motherboard deal. An i3 combo would have been about the same cost, and a non-K series i5 would have cost more on the combo.
 


You've got it all wrong. It seems you're confusing "compatible" with "optimized." What he said about the next generation of AMD-optimized games was right.

 


So you found just another invalid review out there. Congrats!

At first look one can find issues such as running twice as much memory on some of the Intel i7 configurations in the comparison, or using 1600MHz memory on the AMD FX 8350, when its native memory speed is 1866MHz... And of course the software and drivers selected added extra bottlenecks on the AMD side.

When the comparison is repeated with fairer settings, people find that the FX 8350 beats the i5s and even beats the i7-3770K at stock speed.
 




Do they not teach science in school anymore?

The point of an experiment is to remove all inconsistencies. For a CPU experiment, that means only the CPU and motherboard should be different. For a motherboard, that means only the motherboard should be different. For a GPU, that means only the GPU should be different. For RAM, that means only RAM should be different. So on and so forth.

Who cares if IB takes 1600MHz RAM and PD uses 1866MHz? Not only is RAM not a bottleneck in gaming... or really anything except RAM benchmarks... it has nothing to do with the processor's capability. If you honestly think the choice of RAM makes an FX-8350 fall way behind a 3470 and more in line with a 3225, you need to check your scientific method. As long as it's the same on both systems, there is no bias.
 


Well, in a scenario where you are only running single-threaded applications, you're right...

In a scenario where you're running heavily threaded applications, RAM bandwidth makes a difference. The issue becomes how much data you can get to the processor, since it is capable of running more calculations than the system can feed it.

If you look at the max-settings benchmarks, the AMD CPU usage is under 50% through 95%+ of the run, and those are CPU-intensive games, while the Intel CPU usage regularly spikes up into the high-60% to low-70% range...

The question you have to ask is...why?

Well, the Intel chip is designed to handle single-threaded applications well, hence the higher CPU usage from those types of applications. The AMD chip is designed differently, for heavily threaded applications, which means that if you cannot get enough information to the CPU, the bottleneck is in the data bandwidth, because it prevents the CPU from running at full load and processing all the data optimally.

This hypothesis rings true in benchmark data as well: those benchmarks show that AMD CPUs/APUs benefit far more than Intel CPUs from increased RAM bandwidth, because they can access more information faster and the CPU can be utilized more efficiently.

The AMD design is ahead of its time, and once the rest of the hardware and software begin to catch up...you'll understand why data bandwidth is important.
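For what it's worth, the raw numbers behind the bandwidth argument are easy to work out. Below is a minimal back-of-the-envelope sketch (plain Python, nothing platform-specific) of the theoretical peak dual-channel DDR3 bandwidth at the two stock speeds being argued about; real-world throughput is lower, but the relative gap is what matters here:

```python
# Theoretical peak bandwidth for dual-channel DDR3.
# DDR3-1600 and DDR3-1866 rates are in mega-transfers per second (MT/s);
# each 64-bit channel moves 8 bytes per transfer.

BYTES_PER_TRANSFER = 8   # 64-bit memory channel
CHANNELS = 2             # dual-channel configuration

def peak_bandwidth_gbs(transfer_rate_mts):
    """Theoretical peak bandwidth in GB/s (decimal GB)."""
    return transfer_rate_mts * 1e6 * BYTES_PER_TRANSFER * CHANNELS / 1e9

ddr3_1600 = peak_bandwidth_gbs(1600)   # ~25.6 GB/s
ddr3_1866 = peak_bandwidth_gbs(1866)   # ~29.9 GB/s

print(f"DDR3-1600 dual channel: {ddr3_1600:.1f} GB/s")
print(f"DDR3-1866 dual channel: {ddr3_1866:.1f} GB/s")
print(f"Gap: {100 * (ddr3_1866 / ddr3_1600 - 1):.0f}% more peak bandwidth")
```

Whether that roughly 17% extra peak bandwidth shows up as real FPS depends entirely on whether the workload is actually bandwidth-bound, which is exactly the point being argued in this thread.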

 


Weren't you just in another thread bashing AMD? Make up your mind. LoL

On that note, why are we even arguing over this? When it comes to gaming, the biggest performance booster is the GPU. A 1-3% performance difference between an AMD and an Intel CPU is hardly anything to get worked up about. Sheesh!
 


I have never said anything that bashes AMD...they have some faults here and there...but I would never bash anybody...

I have never said an Intel product was bad; I would just prefer that ignorant people not run around spouting nonsense about Intel being a world-beater...
 


No. If you are testing different RAM modules, then you would use the same motherboard, the same chip, the same everything.

If you are testing two chips of the same architecture (e.g. i5 3570K vs i7 3770K), then you would use the same motherboard, the same memory, the same everything.

But if you are testing two chips of different architectures, you must use different architectural elements, such as different motherboards and memory configurations. The Intel Ivy Bridge is designed to use a 1600MHz memory bus at stock configuration, but the FX is designed to use an 1866MHz memory bus at stock configuration. The caches, internal logic... are designed according to the rest of the elements of the architecture, such as stock memory bandwidth, channels...

If you force the FX chip to use slower 1600MHz memory, then you are artificially favouring the Intel chip, which is running at its stock speed.

Memory speed plays an important role in any task where the CPU can be a bottleneck. Your claim that memory speed only matters in "RAM benchmarks" again shows how little you know. Maybe you believe people run RAM benchmarks because they are bored, but they run those benchmarks because RAM speed can affect chip performance.

In fact, gaming is also affected by RAM speed. An Intel i7-3770K gets up to 21% more FPS in Skyrim (yes, the same game you cited above) when using faster RAM modules:

[Chart: i7-3770K Skyrim FPS scaling with memory speed]


And AMD chips are more affected by memory bandwidth due to the design of their architecture.

I understand this is not good news for people who wasted money on an expensive i5, but when the benchmarks are made in an unbiased way, the FX-8350 beats any i5 and even beats the i7-3770K, although current software still does not use all of the advanced elements found in the AMD architecture design.

As said above, future games will be optimized for AMD architecture thanks to PS4 and other consoles.
 


The expected or estimated performance increase of 20% is not for the CPU core itself. That is the estimated performance increase for the iGPU. The Intel HD 4600 in Haswell is estimated to be 20% faster than the Intel HD 4000 in Ivy Bridge. That is mainly due to the increase in the number of shaders from 16 to 20.
 
The extra 100MHz is negligible compared to 4 more shaders. From a gaming perspective, the Intel HD 4600 is still too weak anyway for anyone who wants to build a gaming rig.
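To put rough numbers on that comparison, here is a small sketch using only the shader counts quoted above; the 1150MHz baseline clock is an assumed figure, used purely to show what proportion an extra 100MHz represents, not an official spec:

```python
# Rough scaling comparison for Intel HD 4000 -> HD 4600.
# Shader (EU) counts come from the discussion above; the baseline clock is
# a hypothetical value used only to put the +100MHz bump in proportion.

hd4000_shaders, hd4600_shaders = 16, 20
assumed_base_clock_mhz = 1150      # hypothetical baseline, for illustration only
clock_bump_mhz = 100               # the "extra 100MHz" mentioned above

shader_gain = hd4600_shaders / hd4000_shaders - 1       # 0.25 -> 25% more shaders
clock_gain = clock_bump_mhz / assumed_base_clock_mhz    # ~0.09 -> ~9% higher clock

print(f"Shader count increase: {shader_gain:.0%}")
print(f"Clock increase:        {clock_gain:.0%}")
```

Real-world gains don't scale linearly with either number, but the shader increase clearly dominates, which lines up with the roughly 20% overall estimate.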

Improved integrated graphics performance is more important in laptops than desktops, since it basically raises the minimum performance. While the average Haswell or Richland laptop relying on just the iGPU is not going to run graphics-intensive games very well, at least it can provide some level of enjoyment for games with modest graphics requirements.
 


That is one arena where AMD really excels; the Richland on-board graphics are supposed to be right on the heels of an HD 7750 card.
 


What graphs? You linked to a Vishera Owners Club.

I keep looking around the forums and you talk about how everyone is spewing lies, sucking off Intel, and bashing AMD, when mostly, people are posting reputable reviews from reputable sites and not favoring Intel or AMD. And then you post a link to a Vishera Owners Club as a source.
 
AMD hands down holds the performance edge over anything Intel has to offer. 2 physical cores plus 2 virtual cores (hyper-threaded) vs 8 physical cores. The i5 chips don't even have hyper-threading, so what you really get stuck with is 2 physical Intel cores versus 8 physical AMD (superior quality) cores. Honestly, at this point I don't even know why Intel continues to waste our time.
 


There are a multitude of i5 CPUs that have 4 physical cores. And AMD's chips have 2 cores in each of 4 modules, and those 2 cores share resources; you're making it sound like AMD has 8 fully separate physical cores.

 


I think that is a bit of a stretch. Benchmarks have shown that the Radeon HD 7660D is a little faster than the desktop Radeon HD 5570 graphics card. I would call the HD 7660D a "desktop Radeon HD 5580," if such a card ever existed.

A performance increase from a "Radeon HD 5580" to a Radeon HD 5770 would be astronomical for an iGPU. The Radeon HD 7750 is basically equal to the Radeon HD 6770 and the HD 6770 is really just a re-badged Radeon HD 5770.

Not having researched anything about Richland's iGPU yet, I think its performance will be much closer to the Radeon HD 6670 than to the Radeon HD 6770 (HD 7750). I'll refine that a bit more and state that my estimate of Richland's highest-performing iGPU will be somewhere in between the Radeon HD 5670 and the Radeon HD 6670. The Radeon HD 6670 has about 66% of the performance of the Radeon HD 5770/6770/7750. A Radeon HD 5570 in turn has about 66% of the performance of the Radeon HD 5670.

Additionally, Trinity has 1.5 billion transistors, though granted I do not know how many of those are devoted to the Radeon HD 7660D. Richland's transistor count will likely be a little more than that. However, I do know that the Radeon HD 7750 has 1.3 billion transistors. Combining a Radeon HD 7750-like iGPU into the Richland APU would likely push the transistor count beyond 2 billion, which means a larger APU and higher production costs. Plus, it would threaten AMD's low-end and mainstream graphics card sales, which represent the bulk of graphics card sales. Such a move by AMD would mean that Richland would need to sell at a higher price to cover increased production costs, threatening sales, and Richland would also undermine AMD's own graphics card sales. Both are bad for AMD's long-term operations.
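To make the arithmetic behind those two points explicit, here is a small sketch that uses only the figures quoted above (the ~66% performance ratios and the 1.5 / 1.3 billion transistor counts). How Trinity's transistor budget splits between the CPU side and the HD 7660D is unknown, as noted, so it is left as a parameter with a few hypothetical values:

```python
# 1) Relative performance ladder, using the ratios quoted above.
#    Baseline: Radeon HD 5770 / 6770 / 7750 class = 1.00
baseline_7750_class = 1.00
hd6670 = 0.66 * baseline_7750_class     # "about 66% of the HD 5770/6770/7750"
# The HD 5570 is "about 66% of the HD 5670", and the estimate above places
# the HD 5670 below the HD 6670, so at best:
hd5570_at_most = 0.66 * hd6670          # ~0.44 of the baseline
jump_needed = baseline_7750_class / hd5570_at_most
print(f"HD 5570-class to HD 7750-class: at least a {jump_needed:.1f}x jump")  # ~2.3x

# 2) Transistor budget if an HD 7750-like GPU were built into the APU.
trinity_total = 1.5e9    # Trinity APU transistor count (quoted above)
hd7750 = 1.3e9           # discrete Radeon HD 7750 transistor count (quoted above)

# The share of Trinity's budget used by the HD 7660D is unknown; try a few
# hypothetical splits to see roughly where the combined count would land.
for igpu_share in (0.4e9, 0.6e9, 0.8e9):
    cpu_side = trinity_total - igpu_share
    combined = cpu_side + hd7750
    print(f"iGPU share {igpu_share/1e9:.1f}B -> combined APU ~{combined/1e9:.1f}B transistors")
```

Under any of those hypothetical splits, the combined part lands at or above roughly 2 billion transistors, which is the die-size and cost concern described above.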
 
All the i5's that I have come across are dual cores with hyper-threading; I say this on my i5-3210M. Modules? These little chips are the first 8-core design. Once again AMD has done it: back in the day their original Phenom chips were the first quad-core design.



 


An AMD representative announced that the Richland A10 will be between 20-40% faster than the Trinity A10. Using that, we would expect the new APU to land somewhere between a Radeon HD 6670 and an HD 7750.

However, some sites report that Richland will be faster than an HD 7750:

http://wccftech.com/amds-kaveri-based-28nm-richland-apu-features-steamroller-cores-compatibility-fm2-socket/

Time will tell!
 


Another uneducated user, passing along bad information.

DESKTOP i5s are ALL 4 cores / 4 threads. i3s are 2 cores / 4 threads, and i7s are 4 cores / 8 threads. Pentiums are Intel's 2 cores / 2 threads CPUs on the desktop.

In the mobile space, AMD is the way to go for a number of reasons. First, gaming and laptops don't really mix: you CAN play video games on a laptop, but the experience is much different than on a desktop. Second, AMD is much more competitive in the mobile space than they are on the desktop. Third, AMD provides much better gaming value with their APUs in the sub-$800-$1000 range -- the point where you can get a decent discrete mobile GPU -- because their on-board graphics are a lot better than Intel's.

However, in the desktop space, where you'll be installing a discrete GPU, Intel is the way to go. It hasn't always been that way, and hopefully it won't always be that way, but Intel has dominated the desktop enthusiast CPU market for the last 2-3 years -- since Sandy Bridge was released. I will always buy whatever is the better value and will urge others to do the same. If you want to support someone else, fine, but please don't lie to others about what is actually the best buy. I have no brand loyalty to Intel, Nvidia, AMD, Samsung, Mushkin, etc. I buy what is the best value to fit my needs.

A personal friend of mine asked me to pick parts for a PC he wanted to build. During the course of the day, the SSD I chose for him went up $20 on Newegg. I did not stick with that model for him, because at the new price another model was a better value (i.e. faster for the same money). That is what we should do as consumers -- reward manufacturers for providing better VALUE.