FX 8350 or i7 4770k


ANIR0X2K00L

Honorable
Mar 10, 2013
I need help with this. I am getting a new PC and I can wait until the end of this year, but the problem is that I will have to keep this PC for the next 5 to 6 years. I can spend extra to get a 4770K when it releases, but from what I have heard, next-gen games will run better on AMD hardware and will require more cores. I play graphically intensive games like Battlefield 3, Crysis 2 and Crysis 3, and I will be getting Battlefield 4, so that should give you an idea of the type of games I play. All of these games need a good CPU, especially Crysis 3 and possibly Battlefield 4. Please tell me whether I should go with the FX 8350 or the i7 4770K (when it releases in June), or whether I should wait for AMD's Steamroller or Intel's Broadwell. Please also suggest a good motherboard, like an Asus Maximus or Crosshair Formula, or something better. One problem I have noticed is that the AMD chipset (the 990FX) doesn't have all the latest features.

Any help is greatly appreciated. Please understand that I don't want to make a mistake, because if I do, I will regret it for the next 5 to 6 years.
 

juanrga

Distinguished
BANNED
Mar 19, 2013


The PS4 includes an extra ARM CPU for dealing with social stuff, downloads, and other background tasks.
 
Guest
Still denying that Intel has 3D transistors? Sad, really. When AMD is bought by Intel in 5 years, will you still buy their products? Well, you convinced me. I'm going to spend my next paycheck on an AMD system. Then I'm going to go on every thread about Intel vs AMD and tell the world how my PC is the best, even if it is only a 0.05% difference in only one game (Crysis 3, obviously).

Keep acting like you work as a spokesperson for AMD while nobody anywhere cares or is even aware of your existence. Also, try putting the mouse down once in a while, emerging from mommy's basement, and getting some sunlight.

This forum is about helping the OP. We helped the OP. He decided to buy a Haswell. The thread should be closed. Anything else you or I say is irrelevant. Now go start your own thread and fill it with as much AMD propaganda as you want. You are still not going to convince the 9 out of 10 people who buy Intel PCs to go AMD. You are just going to start a circle jerk with the 1 out of 10 who are too poor to afford Intel and buy AMD. Period. End of story. Have a nice life. I don't need to prove anything to anyone. Oh, and Intel IB CPUs have 3D (aka tri-gate) transistors: http://en.wikipedia.org/wiki/3D_transistor#Tri-gate_transistors. A FinFET is a double-gate transistor: http://en.wikipedia.org/wiki/3D_transistor#FinFETs
 
Guest
When you are done beating it to printouts of your CPU benchmark of Crysis 3 at 720p, take a look at a few articles on Intel's "non-existent" tri-gate transistor technology:

https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CCwQFjAA&url=http%3A%2F%2Fwww.bbc.co.uk%2Fnews%2Ftechnology-17785464&ei=QpWRUaKSOY3wqAGBhoHgBw&usg=AFQjCNGD10KVqFJiZHjlzkqhVZL8F2aLwg&sig2=_ueTaSkg38G7UazHkWcU2A&bvm=bv.46471029,d.aWM

https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=10&cad=rja&ved=0CJQBEBYwCQ&url=http%3A%2F%2Fblog.chron.com%2Ftechblog%2F2012%2F04%2Fintels-ivy-bridge-chips-have-3d-transistors-more-muscle-for-less-power%2F&ei=QpWRUaKSOY3wqAGBhoHgBw&usg=AFQjCNFpoO8mpWCF9Ll9oNYmLVYrshhM2g&sig2=Bs0Is5p4fs6KZc6udmyQ-A&bvm=bv.46471029,d.aWM

https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=11&cad=rja&ved=0CCwQFjAAOAo&url=http%3A%2F%2Fappleinsider.com%2Farticles%2F12%2F04%2F23%2Fintel_launches_next_gen_ivy_bridge_processors_with_3d_transistors&ei=bZWRUdvwHImvrQGw-YHIBA&usg=AFQjCNFowuWzSWYVryxKe8zy7RcPcBOx9A&sig2=ldTdkWOJOzO8jd12H7gloA&bvm=bv.46471029,d.aWM

Or, read the Newegg description of the 3770K (and every other IB CPU); the first thing it mentions is: INTEL 22NM 3-D TRANSISTOR.
"The 3rd Generation Intel Core processors employ the world's first 3-D transistor manufactured at 22nm. Intel's 3-D Tri-Gate transistor uses three gates wrapped around the silicon channel in a 3-D structure, enabling an unprecedented combination of performance and energy efficiency, and ensuring that the pace of technology advancement consumers expect can continue for years to come."

http://www.newegg.com/Product/Product.aspx?Item=N82E16819116501

or, just look at the Google results:

https://www.google.com/search?q=intel+3d+transistor&rlz=1C1ASUT_enUS532US532&aq=0&oq=Intel+3D+&aqs=chrome.1.57j0l3j62l2.3912j0&sourceid=chrome&ie=UTF-8#rlz=1C1ASUT_enUS532US532&q=intel+3d+transistor+technology&revid=872158344&sa=X&ei=bZWRUdvwHImvrQGw-YHIBA&ved=0CLQBENUCKAI4Cg&bav=on.2,or.r_cp.r_qf.&bvm=bv.46471029,d.aWM&fp=f4410b1b5d67c92a&biw=1366&bih=667
 

8350rocks

Distinguished


Tri-gate is not a 3D CPU...you're sadly mistaken...and gates are not dimensions...they're stacked on top of each other...

Additionally...it's not 9 out of 10...it's more like 6 or 7...and that number is growing smaller by the month. AMD gained 1.2% market share over the last 60 days...doesn't sound like a lot, but 1.2% of a few hundred million is a few million.

Keep on talking and not supporting your facts.

Tri-gate or 3-D Transistor (not to be confused with 3D microchips) fabrication is used by Intel Corporation for the nonplanar transistor architecture used in Ivy Bridge processors. These transistors employ a single gate stacked on top of two vertical gates allowing for essentially three times the surface area for electrons to travel

Taken straight from your Wiki. All they did was stack another gate on top of 2...they created more surface area. But it isn't as revolutionary as you make it sound.

From your same article:

Dual gate MOSFETs are commonly used in VHF mixers and in sensitive VHF front end amplifiers. They are available from manufacturers such as Motorola, NXP, and Hitachi

Dual gates have been around a long time...it's not that big a deal to add another.

Besides...AMD has been using SOI for quite some time, while Intel uses bulk wafers:

http://en.wikipedia.org/wiki/Silicon_on_Insulator

Intel has even conceded that going to SOI would be an improvement for them, but it is "too expensive".

http://www.eetimes.com/electronics-news/4374728/Intel-FinFETs-SOI-shrink-GSS

FYI: the OP hasn't bought Haswell...it's not out yet.

@OP: Save yourself time, effort, and money...
 

whyso

Distinguished
Jan 15, 2012


Just saying that the initial games will not have much "magic sauce", but later ones will. No one wants to program for 8 cores at 1.6 GHz vs 4 at 3.2 GHz. It takes time, and time is money. For a given budget I'd rather spend less time and money optimizing and coding for 8 cores and more on design.
 

elemein

Honorable
Mar 4, 2013


"Later ones will" is true.

Though the statement that people would rather have 4 cores at 3.2 than 8 cores at 1.6 isn't exactly true. I myself run into situations where I'd rather have two processors at even half the speed than one fast processor. Now, my situation may be a little unique, since almost 75% of my programming goes toward programming robots for hobby and for school, but it's certainly true that even in an x86 computer environment, while writing my software, I'd rather use two cores than one. Even if it takes a little more time, having two cores to work with saves a heck of a lot more time than having to figure out a workaround for a certain operation to work on a single core.

For example: what if I want one timed operation not to affect another? What if I want an action to be performed while a sound is played, but have the two correspond to each other? Or what if I want an action to be checked for while a sound is played in the background? Sure, a single fast processor is probably fast enough to encode the sound and check actions in unison, but what if they were two much, much more complicated tasks? Sometimes two slower cores are simply better than one faster one.
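Rough sketch of the kind of thing I mean (hypothetical Python, purely illustrative: the function names are made up, and a real game would use native threads and real audio/input APIs):

import threading
import time

def play_sound(duration_s):
    # Stand-in for audio playback: just sleeps for the length of the clip.
    time.sleep(duration_s)
    print("sound finished")

def check_actions(stop_event):
    # Stand-in for an input/sensor polling loop running alongside the sound.
    while not stop_event.is_set():
        # ...read controller/sensor state here...
        time.sleep(0.01)
    print("action checking stopped")

stop = threading.Event()
sound = threading.Thread(target=play_sound, args=(1.0,))
actions = threading.Thread(target=check_actions, args=(stop,))
sound.start()
actions.start()
sound.join()   # wait for the timed task to finish
stop.set()     # then tell the polling loop to stop
actions.join()

With a second core, the polling loop and the playback genuinely run at the same time instead of being squeezed through one core.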
 


davschall

Honorable
Nov 19, 2012


Wow, triple post, jman, really? What, too good for the edit button? Also, I seriously hate when people who know nothing about cars make references to them. I got some dumbass HERE comparing a Corvette to a BMW M5, I mean, talk about apples to oranges. Just enough with the car references, especially when you consider the fastest car to be "a Ferrari", not a specific one such as a Ferrari California (notably slow), just "a Ferrari". I'm not gonna get into the rest of the post 'cause it's just little nerd rages.
 

davschall

Honorable
Nov 19, 2012


Dude, you convinced me lol. I decided that my kick-ass Gigabyte 990FXA-UD5 was worth keeping around. I've seen too much to be on the fence between them now; as I'm already on an AMD upgrade path I'll just stick with them. Lol, I'm not full fanboy yet though...never go full fanboy, something that rds1220 could learn lol. I mean seriously, warning: angry person alert. So my advice: stop.
 

SomeNickname

Honorable
Nov 17, 2012

Learn some economics. When AMD is off the table, Intel could just double every single price, because why not? There would be no competition left. Not a single consumer would be happy about that.
AMD will stay around for a long time. They have been declared dead often enough, but in the end they have their market (good-performance CPUs at excellent price points), so why should they go bankrupt or sell themselves?


*slow clap* Welcome to the bottom of the discussion pyramid.
[image: disagreement-hierarchy.jpg (the hierarchy-of-disagreement pyramid)]



That's good, 'cause you don't. All you do is call out some fancy buzzwords about Intel's architecture, but you never actually explain why that stuff will show more benefit in future games than more cores will. We can already see the benefit in current benchmarks; is there any reason to believe that benefit will grow bigger with the same CPU in future games?


Again, if we want to help the OP we have to discuss raw power. Not the design decisions of the architecture (unless you have an argument for why those will actually have an impact beyond current benchmarks), not power consumption, not the net worth of the companies. Just current performance and some speculation about benefits in future games.
 
wow, how has this thread continued so long with so much bullsh!t?
Please, 8350 fanboys, stop recommending the 8350 as a superior gaming CPU to Intel i5 equivalents. It is not, as has been proven on 95% of the review sites out there. Throwing around links to the remaining 5% that show otherwise is ridiculous; you are naïve if you believe them over the 95% that show the opposite.

As for the speculation about the PS4 and its heavy threading, well, no one knows how that will pan out; it will probably allow AMD to catch up a bit in gaming. Keep in mind that although the 8-core 8350 has 8 integer cores, it only has 4 FPUs, which is what games use the most. The fact that the PS4 is able to handle a lot of threads at once using the CPU and GPU together is good, but when it does, it takes away graphics processing power. The analogy of pulling one lever too hard and another lever flying off in the other direction comes to mind. It can be well balanced on the PS3, but with so many hardware variations in a PC I doubt we will see the same optimizations; game devs will still have to play it safe as they always have done with ports.
 

logainofhades

Titan
Moderator


This post makes me want to post that Billy Madison YouTube link again. You are nothing but an Intel fanboy; your kids-vs-adults analogy makes that quite obvious. I have owned plenty of both AMD and Intel and not one "caught on fire". I have had a PSU die and zap an unlocked Sempron 145, but it didn't burn. Most of my systems are Intel now, but I still have a dual-core Phenom II rig that is used for Netflix and streaming video off my file server to the TV. Sometimes it gets used as a WoW rig when needed, if I have another system that is down for whatever reason. I have an i5 3570K, an i5 2400, an i5 750, and my laptop has a dual-core i5. I was given the 2400, and the 750 was bought used from a person I know on another forum, with a motherboard, for about what the CPU alone cost new at the time.

Had the Vishera FX's been around when I needed a new CPU and board last year, I probably would have bought one over the 3570k I have now. Would have had similar performance for like half the cost @ Microcenter. I fully admit I am a cheapskate. I don't see any value in spending more on hardware that doesn't give me a big benefit in what I do. All those high and pretty numbers look great. At the end of the day, you can't see over 60fps anyway.

To the OP: the 4770K will be a nice chip, without a doubt. It will be quite capable for a good while. It isn't much faster than a 3770K, though. An FX 8350 is also a very capable chip for much less money. I would go with the 8350 and spend the $130 saved on a better graphics card. A 3770K isn't going to game better with an inferior graphics card than an FX 8350 with a much faster card. $130 more to spend on a graphics card is quite a bit and will give you much better bang for the buck in GPU-intensive titles.
 

8350rocks

Distinguished


Actually, know your facts: the 8-core Jaguar APU in the PS4/Xbox 720 has 8 FPUs.

Additionally, show me where "95%" of games are better on an i5 than on the FX 8350. Don't count Skyrim, Civ 5 or SC2...those were singled out already. If you bring up PlanetSide 2 I will have a good hearty laugh at that one too.

Additionally...the margin of error on gaming benchmarks is approximately 10%, due to hardware inconsistencies. If you can produce even 10 different games where the difference between the two is greater than 10%...I will be amazed.

So, the ball is in your court...
 

juanrga

Distinguished
BANNED
Mar 19, 2013
 

10% margin of error....please....a 10% improvement is significant. I would give more like a 2% margin of error for most benchmarking.
I didn't say 95% of games, I said 95% of review sites.
http://techreport.com/review/23750/amd-fx-8350-processor-reviewed/9
http://www.guru3d.com/articles_pages/amd_fx_8350_processor_review,18.html
http://www.anandtech.com/show/6396/the-vishera-review-amd-fx8350-fx8320-fx6300-and-fx4300-tested/5
http://www.bit-tech.net/hardware/2012/11/06/amd-fx-8350-review/6
http://www.behardware.com/articles/880-13/amd-fx-8350-review-is-amd-back.html
http://hexus.net/tech/reviews/cpu/46985-amd-fx-8350/?page=5
http://www.xbitlabs.com/articles/cpu/display/fx-8350-8320-6300-4300_6.html#sect0
http://www.xbitlabs.com/articles/cpu/display/amd-fx-8350_6.html#sect0
http://www.tomshardware.com/reviews/far-cry-3-performance-benchmark,3379-7.html
http://www.guru3d.com/articles_pages/far_cry_3_graphics_performance_review_benchmark,7.html
http://www.tomshardware.com/reviews/crysis-3-performance-benchmark-gaming,3451-8.html
I will say that in Crysis 3 it doesn't do too shabbily in average FPS, but it dips to a low FPS, although that may have just been a spike.
 

whyso

Distinguished
Jan 15, 2012
Yes, my point was that of the top 20 supercomputers, nearly half are Xeon-based. Only three are Opteron-based. Saying the Xeon is a poor CPU for supercomputers is completely wrong. Sure, Titan is the most powerful, but the only supercomputers using Opterons in the top 20 are Cray machines.

Hardly an official review from a reputable site.

The 8350 is not an APU, so let's forget the "RAM is holding it back" argument. An APU, maybe (but only with GPU-related tasks), but a straight FX Piledriver? No benefit from faster RAM.

HT is becoming more and more useful. Again, it's not that HT is useless, it's that games don't take advantage of more than 4 cores (this is why an i3 creams a Pentium in games requiring 3 or more threads, and often why it beats the 8350 in lightly threaded games). That review is also a year and a half old; newer games such as FC3 and Crysis 3 show some improvement from HT.

Do you even read what you post about? That power test was running Intel Burn Test, which is probably one of the most demanding things you can do on a CPU.

And in the mobile comparison I don't care about GPU performance. We are only running the CPU; the GPU is idling, using none or very little of the TDP headroom. When we stress both the CPU and GPU on the A10-4600M the clocks drop; in CPU-only tests the CPU is using the iGPU's thermal headroom.

And yes, the fact that AMD's chipsets use more power than Intel's does matter, because if you use the CPU you need a chipset, and if your chipset is using more power then your platform is using more power, similar to how early Atom CPUs had a low TDP but required power-hungry chipsets, so the SYSTEM needed more power.

I don't think you quite get the concept of efficiency. If it uses the same amount of energy to do a given amount of work, then it's the same efficiency. Bulldozer is not more efficient than Sandy Bridge in that test; if it required 6 kilojoules, then that is the same amount of energy as Sandy Bridge, and so the same efficiency.
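To put rough numbers on it (hypothetical Python with made-up figures, just to illustrate the energy = power x time point):

# Hypothetical numbers, only to illustrate energy vs. power.
# Energy (joules) = average power (watts) * time to finish the job (seconds).
energy_a = 125 * 48   # CPU A: 125 W for 48 s  -> 6000 J
energy_b = 77 * 78    # CPU B:  77 W for 78 s  -> 6006 J
# Same workload, roughly the same 6 kJ consumed, so the efficiency
# (work done per joule) is about the same, even though CPU B draws
# less power at any given instant.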
 

8350rocks

Distinguished


Actually, it is typically expected that each system component in a benchmarked PC that is the same make/model but a different individual part will contribute approximately 1% margin of error due to manufacturing tolerances.

Therefore...if we assume that all components are in two separate cases, same make/model (except CPU/MB of course) but different individual parts, the following is true:

PSU/RAM/GPU/HDD = 4% MoE

Additionally, entirely different components introduce about double the margin of error, because the results cannot establish a baseline within the same sample.

Therefore:
CPU/MB = ~4%

Intel-vs-Intel benchmarks performed with the exact same components sitting in separate cases, all identical make/model, are subject to a minimum of 6% MoE if all components are different individual parts. Compare Intel vs AMD and the number increases by approximately 25% because of the variability of entirely different components (even from the same manufacturer).

Meaning the average margin of error is 8% on such tests. I usually round up to 10% for human error. Nitpick all you want...

The fact of the matter is that 8% of 120 FPS is 9.6 FPS, or roughly 10 FPS. Intel's lead doesn't exceed the margin of error very often, if at all (Skyrim I think it does, but that is an outlier).
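Spelled out, the arithmetic I'm describing (hypothetical Python using the assumed percentages above, not measured values):

# Assumed ~1% margin of error per same-model-but-different-part component,
# doubled when the parts are entirely different (the assumptions above).
per_part = 0.01
psu_ram_gpu_hdd = 4 * per_part            # four shared components -> ~4%
cpu_and_board = 2 * (2 * per_part)        # different CPU + motherboard -> ~4%
total_moe = psu_ram_gpu_hdd + cpu_and_board
print(total_moe)          # 0.08 -> ~8%, rounded up to 10% for human error
print(total_moe * 120)    # 8% of 120 FPS -> 9.6 FPS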

:)
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


For 50 fps that is ±1 fps. Sorry, but nobody believes that!

For power measurements I would assume about 30% variation.
 

8350rocks

Distinguished
@whyso:

If you say increased memory bandwidth does not benefit AMD...then why does throughput scale significantly better with bandwidth increases on AMD than on Intel? Even Tom's memory benchmark reviews noted that throughput scales better with AMD...and they even comment that anything over 1600 MHz memory on the Intel CPUs is essentially worthless, as the gain is almost nothing.
 

juanrga

Distinguished
BANNED
Mar 19, 2013


I did not say the Xeon is a "poor CPU". I said something different: I said it has poorer scaling than the Opteron. All Xeon-based supercomputers in that list use a smaller number of cores and offer weaker performance compared with Titan. If you go to #22 you can find an Opteron-based supercomputer that is not from Cray.



Why do you believe I mentioned both CPUs and APUs?

I already gave you benchmarks showing appreciable performance gains (6-10%) due to faster RAM for Bulldozer.



I already answered this. I will add some more info from the thread "Hyperthreading, does it help games now?"

For the most part you won't find many games that will get anything from Hyper-Threading. While there are a few that may get a performance boost out of it, in most cases they are going to be heavily multi-threaded applications that would do better with a full core than with Hyper-Threading.

No.

But benchmarks done over the past several years have shown that when Hyper Threading is enabled, it generally causes an average drop of 2% in game performance.

http://www.tomshardware.co.uk/forum/366813-28-hyperthreading-gaming



First, I was not worried about absolute values but about the relative increase in going from Ivy Bridge to Haswell. Second, it does not change the point. The point was that the chip supposed to offer better perf per watt is offering poor perf per watt.



Others care.



I can only recommend that you read what was actually said.
 

whyso

Distinguished
Jan 15, 2012


The first argument is complete nonsense (I have never seen any evidence of poor scaling; if you look at the Rmax and Rpeak values, Titan is not special, getting a similar proportion of achieved FLOPS out of theoretical peak FLOPS). A couple of points as well:
A lot of Titan's power comes from the Nvidia GPUs (LOL, actually, because Titan would not be number one without them). Titan has 18,688 CPUs (16 cores each); a lot of other computers have a similar number of CPUs. The fastest supercomputer does not make all other supercomputers irrelevant. The number 22 supercomputer using Opterons was commissioned in 2008.

There may be one benchmark with a 5% performance gain from faster RAM, but on average it's closer to 0-2%, which is pretty much insignificant.

That thread you posted a link to is old and hardly relevant anymore. And yes, in BF3 multiplayer, FC3 and Crysis 3 you see a little performance gain between the i5 and i7. A 2% performance drop is nothing (margin of error); at 60 fps that is one fps, far overshadowed by other factors (and this is really not seen at all in benchmarks).

Intel Burn Test is a power virus. We are more concerned about power use in real-world applications. Just because power consumption in a power virus is high does not mean that power consumption in something like x264 encoding will be as high (you can also look at the Fermi/HD 6xxx series power consumption tests; Fermi really goes wild in FurMark but the difference is smaller in games).

[charts: GPU power consumption in Metro vs. OCCT]


In Metro the 570 uses roughly the same amount of power as the 7970; in OCCT it consumes significantly more (differing performance aside). The same goes for CPUs.

Others may care about GPU performance in that comparison, but in that specific comparison we are not looking at GPU performance.

 

chrisafp07

Distinguished
Nov 27, 2012
To the original poster: the 8350 is a great CPU for gaming, along with the i5 3570K. I would suggest you go for Haswell or Steamroller, though. The 8350 would only be more viable than grabbing Haswell because you can then wait for Steamroller. By the way, I play Battlefield 3 all the time with my 8350 on Ultra settings with Vsync on, at 60 fps at all times. I have had no issues whatsoever in Crysis 3 either, so if you are asking about the difference between these CPUs, it really won't be that noticeable; you can look at benchmarks with minimal fps differences and see that. AMD may just give you more upgrades over those next 5-6 years with AM3+. As a side note, Intel is always a safe route for performance as well, in any category.
 

juanrga

Distinguished
BANNED
Mar 19, 2013


Repeating the word "nonsense" all day is not going to change the facts. From a link given before:

In a variety of computing benchmarks, the Opteron architecture has demonstrated better multi-processor scaling than the Intel Xeon. This is primarily because adding an additional Opteron processor increases memory bandwidth, while that is not always the case for Xeon systems, and the fact that the Opterons use a switched fabric, rather than a shared bus. In particular, the Opteron's integrated memory controller allows the CPU to access local RAM very quickly. In contrast, multiprocessor Xeon system CPUs share only two common buses for both processor-processor and processor-memory communication. As the number of CPUs increases in a typical Xeon system, contention for the shared bus causes computing efficiency to drop.

I have not given you one benchmark showing 5%. I have given you benchmarks showing 6-10%. I did so because you claimed there was no advantage. Now you change your argument to no advantage on average, but then two comments: I don't care about the average if I am playing one of those benchmarked games, and you don't offer anything to support your average figure.

The thread I posted is from 27 February 2013. Yes, computers advance fast, but not that fast. What they said about HT continues to be valid today.

About power consumption I recommend the same again: read what was actually said.