FX 8350 vs i5 4670K for video editing/rendering/some gaming/3d graphics


nonni123

Mar 2, 2014
So, I'm about to build myself a brand new rig, mostly for video editing (VideoStudio Pro X6 at the moment, probably the Adobe suite later on), and my question is: would the raw core count of the 8350, combined with something like an Asus Sabertooth 990FX mobo, a good all-in-one liquid cooler and some overclocking, benefit me more than the ''only'' 4 cores on the i5? Like the title says, mostly editing/rendering, but also some 3D graphics and some gaming (I say some, because I already have a PS4 and I'm loving it)
Thanks in advance 😀
 


Uhm, I think you mistyped that, because according to what you're saying ("Haswell for example has 4 ALUs per core"), an i3, which has 2 "cores", would have 8 ALUs (same as an i7????). In actuality, Intel broke the processing into 'micro-operations', which is where Hyper-Threading helps. Crash course on all three generations here: http://www.anandtech.com/show/6355/intels-haswell-architecture/8

As you will note, there are different ALUs we are discussing (Nehalem, for example, technically has '5' ALUs), as well as Store Data, Vectors, and more in the mix. The bigger difference, though, is in how the "micro-ops" are handled: the work is broken down across six ports and scheduled so the operations don't conflict (as I mention below), and that is where the threading comes into play. In this case, Intel came up with the better solution. If AMD had come up with a similar method, or had originally invented Hyper-Threading and applied it to their current base of physical cores (for example the FX-6xxx line), the performance difference would be a Ferrari vs a Beetle. Unfortunately AMD has not addressed this issue, and multiple performance tests show that no matter how many physical cores they 'added' to keep pace with Intel's iCore line of virtual and physical cores (see below), the performance didn't match even half the expectations (an 8-core AMD should trounce a 'virtual' 8-core i7, for example).



Uhmm no, actually it is exactly as I am describing, as you can check yourself (not just based on my education, certifications, or having worked on computers since 1984): https://www.google.com/search?q=What+is+Hyper+Threading&rlz=1C1VSNC_en&oq=What+is+Hyper+Threading&aqs=chrome..69i57j0l5.5352j0j1&sourceid=chrome&es_sm=122&ie=UTF-8

Some of the best quotes:
"For each processor core that is physically present, the operating system addresses two virtual or logical cores, and shares the workload between them when possible. The main function of hyper-threading is to decrease the number of dependent instructions on the pipeline."
- http://en.wikipedia.org/wiki/Hyper-threading
"A technology developed by Intel that enables multithreaded software applications to execute threads in parallel on a single multi-core processor instead of processing threads in a linear fashion. Older systems took advantage of dual-processing threading in software by splitting instructions into multiple streams so that more than one processor could act upon them at once."
- http://www.webopedia.com/TERM/H/Hyper_Threading.html
"Strictly speaking, Hyper-Threading is best applied to operations and applications where multiple tasks can be intelligently scheduled so there's no idle time on your processor....operations where tasks have to be done in serial, or where one operation has to take place before another can begin, generally don't benefit from Hyper-Threading. Whether you have a single core or a quad core, Hyper-Threading can optimize tasks that can be conducted in parallel so the whole operation is faster"
- http://lifehacker.com/how-hyper-threading-really-works-and-when-its-actuall-1394216262

So back to my example: pile A moved to pile B. Done one brick at a time with multiple cores / workers (like AMD continues to do), the workers get in each other's way (keeping it simple here, folks). With Hyper-Threading, not only does one core 'fake' a second virtual core, so that 'one core' (worker) picks up two bricks at a time and appears to the OS as two workers (multiple cores), but more importantly, when they move to pile A or pile B they are "intelligently scheduled" so no one gets in the way of the other core. You improve how often a brick is lifted and put down, and thus improve performance, even with only 'half' the workers (Intel) doing "twice" the work virtually (each worker picks up 2 bricks instead of 1), because the other workers (AMD) do it one at a time in linear (serial) order.
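To make the brick-pile point concrete, here's a rough Python sketch (my own toy workload, illustrative only, not benchmark data): the same fixed job finishes faster when it is split across workers that never wait on each other.

```python
# Toy "brick pile" demo: the same total work, done serially vs. split across
# several workers that run in parallel and never block each other.
# Uses processes (not threads) so the CPU work truly runs in parallel in CPython.
import os
import time
from concurrent.futures import ProcessPoolExecutor

def move_bricks(n):
    """Stand-in for one worker's share of the job (pure CPU work)."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    BRICKS = 8_000_000
    workers = os.cpu_count() or 1          # logical cores the OS reports

    t0 = time.perf_counter()
    move_bricks(BRICKS)                    # one worker, one brick at a time
    serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # split the pile so no worker waits on another
        list(pool.map(move_bricks, [BRICKS // workers] * workers))
    parallel = time.perf_counter() - t0

    print(f"serial: {serial:.2f}s  parallel ({workers} workers): {parallel:.2f}s")
```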



To the OS / application layer, you're wrong. To this layer, YES, they are virtual cores (see the explanations and links above to clarify if necessary), because each core is taking two workloads, and as one 'virtual core' is processing one part, the other 'core' is also processing its workload, in parallel (NOT serial, as you're implying and as AMD does at the moment). So as one workload gets done, it doesn't wait on the 'core'; it hands it off (hard drive, GPU, sound card, etc.) and goes and grabs another workload. So yes, in all instances it is a virtual core, because you have TWO processing functions happening inside a single physical core; the threading comes into play in the I/O scheduling of the 'work' itself in and out (again, see the many examples and explanations).
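For what it's worth, you can see what the OS actually reports yourself; a quick sketch (psutil is a third-party package and an assumption here, not something from this thread):

```python
# Quick check of what the OS sees on a Hyper-Threaded CPU.
# os.cpu_count() reports logical CPUs; psutil (third-party: pip install psutil)
# can also report the physical core count for comparison.
import os
import psutil

print("logical CPUs (what the OS schedules on):", os.cpu_count())
print("physical cores:", psutil.cpu_count(logical=False))
# On a Hyper-Threaded i7 the first number is typically twice the second.
# How an FX module pair gets reported can vary by OS and kernel version.
```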




Well, I could toss at you every BF4 benchmark which shows you're wrong, but other reviews show it in other games as well: http://www.bit-tech.net/hardware/2013/11/14/intel-core-i3-4130-haswell-review/5 . Never mind the comparison here: http://cpu.userbenchmark.com/Compare/Intel-Core-i3-4130-vs-AMD-FX-8-Core-Black-8350/1621vs1489 showing the newer Haswell i3 versus the much older Piledriver (which was originally designed to compete against the established iX-2xxx and incoming iX-3xxx cores whose specs they were aware of): on single- and dual-core processes (all current consumer applications and games) it is beaten by the i3-4xxx. Obviously, if serial multicore processing is done, the 8-core FX-8 would do much better than the 2 cores + 4 threads (NOT 4 to each 1, as you said above), just as the i5 would do better (even by Intel's own sales material) than the i3, and of course the i7 does better than the i5 for the very same reasons.

The point being, the Haswell chipset has reset the marketspace: the 'lowest' offering from Intel is on par with, or beats, the highest-core-count part AMD has to offer at the moment for Joe/Jane Consumer. Otherwise I agree with the rest of what you said.



General consumer, yes, but for video editing and rendering, no. I defer again to the numerous posts by OTHER people (search the Tom's Hardware forum for Workstation, as I advised) who are actually in college, or are independent video editors, or the several mom-and-pop video editing 'companies' that have posted here; the numbers I used and the justification for them came up repeatedly. Again, this is about 'time': yes, some did complain that their i7 with 8GB of RAM took HOURS to render 15 seconds of video at a time (for example animation sequences), and forget it if they wanted to do 30-minute 'shorts' all at once; they didn't want to wait days (which some did try) for it to complete. Just doubling the RAM improved this, but then the pros, independents, etc. weighed in and noted that 32GB should be the minimum, because Sony Vegas, CAD, etc. all use RAM and CPU together for 'crunching' the data before the actual 'rendering' element is handed to the GPU (again noting Quadros and such).
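As a rough back-of-the-envelope (my own illustrative numbers, not figures from those posts) on why RAM disappears so fast when an editor holds uncompressed frames in memory for effects and preview:

```python
# Back-of-the-envelope: memory footprint of uncompressed 1080p frames,
# which editors/compositors often keep around while "crunching" a clip.
width, height, bytes_per_pixel = 1920, 1080, 4            # 1080p, 8-bit RGBA
frame_mb = width * height * bytes_per_pixel / 1024**2     # ~7.9 MB per frame
fps, seconds = 30, 15                                      # a 15-second sequence
clip_gb = frame_mb * fps * seconds / 1024                  # ~3.5 GB uncompressed
print(f"one frame: {frame_mb:.1f} MB, 15 s uncompressed: {clip_gb:.1f} GB")
# With the OS, the editor itself, and caches on top, 8 GB gets tight quickly,
# which is the gist of the "32 GB minimum" advice above.
```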



Yes, the 'normal' is 1080p video format, so I guess that is the 'crazy high format' they all want to do and were doing normally. And NO, I am not overestimating the power of a workstation card, which is specifically designed to do this sort of work and more, but sucks at FPS in gaming; it isn't just a 'driver' thing. If that were all there was, I am VERY sure the cards would not be selling anymore, because several dozen people would have posted 'CAD drivers' for every GPU card made, so they too could 'cheaply' do professional-level video editing with the cards everyone can buy at Best Buy/Fry's Electronics/etc.

Again, having supported workstations, business PCs, consumer PCs, gamer PCs, and such, I have seen this difference first hand; again, you can search the forum and see the posts from others saying the same thing. Or you can just read the NVIDIA white paper http://www.nvidia.com/object/quadro_geforce.html, NVIDIA's own legally binding statements on its website http://www.nvidia.com/object/quadro-desktops-pcs-features.html, the independent performance review http://www.xbitlabs.com/articles/graphics/display/quadrofx-firepro.html, or the best walkthrough I've seen, by AMD, on what graphics is and how professional workstation cards differ from gamers' GPUs: http://www.amd.com/Documents/49521_Graphics101_Book.pdf

In a nutshell: video editing, rendering, CAD, and other high-end video processing are BEST served by workstation cards, which perform poorly against gamers' cards in frames-per-second terms; gamers' cards push and are optimized for FPS demands, but are in turn underpowered for workstation demands in comparison.
 
so, after all these debacles, with some of you thinking I should get the FX 8350/8320 and others thinking I should get the i5 4670K/Xeon E3 1230v3, I'm now looking for a final answer: which CPU would benefit me the most for my purpose? (I'm leaning towards the 8350 atm)
 
For renders, the 8320 or 8350 OC'd to 4.4GHz or more trades SERIOUS blows with the i7 4770K. Leaves the i5 4670K in the dust.

My FX 8320 @ 4.3 GHz under load: 55C max temp in P95. [power saving features on, balanced power plan]
[benchmark screenshot attached]


Just look at the raw power. The integer score blows it out of the water. Physics-based operations also matter a lot for you with renders. Another interesting thing: everyone says 4 FPUs is the fatal flaw of the FX chip, but it's apparently doing just fine in floating point math... 😛


Now, if you were JUST gaming I would suggest an Intel build, but the FX 8-core will do better for all the things you described, while still being a totally worthy gaming CPU as well.
 
@Beezy, READ your scores again, please. While it has been stated that, "YES for a test involving SERIAL MULTICORE usage obviously the FX line would work well (8 dedicated real cores)", in REALITY (as noted in the SINGLE THREAD test at the upper right, and OTHER normal tests; see my links), aside from some game code optimized for dual-core systems, all current applications run as a SINGLE thread on a SINGLE core, which is where AMD trips up. Because of the way AMD is addressing the issue, the cores get in the way of one another and thus can't maximize their potential (numerous other scores, tests, and benchmarks show it is the FX core with the issue, not the GPU, RAM, HDD, I/O, mobo, etc., just plainly how the FX handles things). Conversely, Intel decided to double dip (as I noted) with PARALLEL PROCESSING, so that a single core acts like two processors that do NOT get in the way of each other, doing the same work as the FX but twice as fast.

Adding more cores and virtual cores (i5 / i7) increases this, NOT overclocking the CPU. Using GHz as the 'answer' to how 'fast' a system can perform is an outdated concept, because we have been STAGNANT on clock speed for almost a decade now; compare that to how much of a jump happened in 2000, 2001, 2002, etc. We went from MHz, to tens of MHz, to hundreds of MHz, to finally 1GHz, and then the easy scaling ran out. A flashback for many: when Intel tried to push new Pentium chips to higher clocks, they had to do a factory recall (the only time ever) because the parts were too unstable at that rate. So while being OC'd at 4GHz is a nice short-term fix for some things and looks better on the charts, the "cost" of reduced life expectancy and "all the work" you have to put into obtaining a stable OC like that isn't 'common' either, as you are probably quite aware. So please don't toss this around as some 'standard' everyone does; otherwise you are selling yourself short, presenting it to AMD, Dell, etc. as the 'right' way and as 'less cost' than just getting an off-the-shelf Haswell i7.

My point is not a 'yardstick of ooh, look what I can do' tech geekness; I am talking about what the OP, and the rest of the 'average Joe/Jane' users, will normally know, will normally be capable of doing, and what is the most cost-effective option that matches what the OP is asking for. I LOVED AMD, and I PRAY AMD will get their head out of their butts, but since iCore came along there is no real competition anymore; INTEL OWNS IT. If you do NOT believe me, simply take a gander at the number of results you get looking up the i5 or i7 and any 2013 game, then compare that to the DELUGE of FX-4 / FX-6 owners bemoaning at great length why 'nothing works, it's all broke' when we talk AC4, BF4, etc. AMD has walked away from the fight (no indications of returning yet), and it is time the AMD fanboys read the writing on the wall... LIKE ALL THE PEOPLE STILL RUNNING XP! OMG! Time to change, people, it happens; sorry, but DEAL...
 


but doesn't higher clock speed = better general single-core performance? (judging by MKBHD's reason why he chose the 8-core Mac Pro vs the 12-core one)
 


Maybe you don't realize it, but renders do utilize all cores at once. Plus, you seem to focus on just gaming; the OP only needs gaming secondary to the above tasks, and just sometimes. Lol, you make it seem like an FX chip can't handle launching Chrome. The cores get used when and where it counts, in the renders. Yes, those programs the OP mentioned will take advantage of as many cores as you can throw at them. The i7 4770K is great, but at 2x the cost. So, there's that.

"YES for a test involving SERIAL MULTICORE usage obviously the FX line would work well (8 dedicated real cores)"

Also, I feel like you don't really know what you are talking about. FX 8320/8350: 4 modules, 2 integer cores per module, and one floating-point unit per module. Those are not traditional "real" cores, yet that clearly does not hold the performance back here.
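One way to frame the whole cores-vs-clocks argument is Amdahl's law; here's a small sketch with made-up parallel fractions (illustrative only, not measured data), showing why a render that spreads across all cores favours the 8-core part while a mostly serial workload doesn't.

```python
# Amdahl's law: overall speedup depends on how much of the job can run in
# parallel, which decides whether more cores or faster cores win.
def speedup(parallel_fraction, n_cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

for p in (0.50, 0.90, 0.99):   # hypothetical share of work that uses all cores
    print(f"parallel fraction {p:.0%}: "
          f"4 cores -> {speedup(p, 4):.2f}x, 8 cores -> {speedup(p, 8):.2f}x")
# A render that is ~99% parallel scales well from 4 to 8 cores; a workload
# that is ~50% serial barely benefits, which is where single-core speed matters.
```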
 


Still, in the OP's case, an FX 8350 would be a much better fit of the two. Look at every benchmark that includes the i5 4670K and the FX 8350 and involves the tasks the OP is going to do with his/her PC.

And don't bring the i7 into the mix, since the OP is not even considering that CPU.

The discussion is about the i5 4670K vs the FX 8350 in the OP's specific case, based on what he/she is planning to do, and nothing else.
 


oh and btw, how much do you think I could overclock it with an NZXT Kraken X60 and a Gigabyte GA-990FXA-UD3?
 


A lot, but around 4.8 - 4.9 GHz should be fine with that cooler.
Just be sure to run stability tests after each time you overclock.
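If you want a quick sanity check between OC steps, here's a minimal all-core load sketch in Python (my own toy loop; it is no substitute for Prime95 or IBT, which hit the FPU and memory controller much harder):

```python
# Quick-and-dirty all-core load for a few minutes of temperature sanity
# checking between overclock steps. Watch your temps while it runs.
import os
import time
from multiprocessing import Process

def burn(seconds):
    end = time.time() + seconds
    x = 0
    while time.time() < end:
        x = (x * 31 + 7) % 1_000_003   # keep the integer units busy

if __name__ == "__main__":
    procs = [Process(target=burn, args=(300,)) for _ in range(os.cpu_count())]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```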
 
I built a video editing/gaming machine for a friend of mine, and in my experience anything lower than an i7 will lose to an AMD FX 8350 or 8320.
My arguments are that the i5 and i3 don't have Hyper-Threading technology, and if you want to do CrossFire, SLI, RAID, or anything else, it will cost you more in the motherboard. You need a balanced machine for the price, and because of that you can get better results with AMD if you are on a budget; if not, go for an i7.

My opinion:
-pick an AMD (8320 or 8350)
-go for an MSI 970A-G43 mobo at least (RAID, CrossFire)
-try to get an SSD (for editing) and a hard disk (for stuff and games)
-pick at least 8 GB of RAM
-try a GTX 760
 
Intel would be the way to go if your #1 priority was gaming, with everything else falling in line after that. But since your top priority seems to be editing/rendering, I can't see any reason not to go with the 8320/8350. Though, if I were in your shoes, I would purchase a cheaper 970 motherboard and no cooler. Both of those CPUs should perform your required tasks just fine at stock speed. Not to mention the money you'd save.
 

Haswell does have 4 ALUs per core. A Haswell i3 has 8 ALUs, while an i5/i7 has 16 ALUs. I don't know if you know the difference between a core and a thread.

How exactly does Nehalem have 5 ALUs? I cannot see how that makes sense.
AGUs and SIMD units differ more than ALUs from architecture to architecture.
There is one kind of ALU in modern CPUs, while there are different kinds of SIMD units that are used for different things.

First things first:
Intel didn't invent SMT. AMD could in theory add SMT to their architecture; instead AMD is using CMT (clustered multithreading), which is used for something completely different.

Either you are not listening or you are blinded by false information.

Let me break it down really simply for you:
8350 = 16 ALUs and 16 FPUs
4670K = 16 ALUs and 24 FPUs

This is the reason (the basics, of course) why an 8-core Piledriver isn't crushing a 4-core Haswell processor.
If one of the EU pipelines on the 8350 isn't in use, then its performance will fall below the 4670K's.

There is no such thing as a virtual core; forget it.

CMT gives MORE performance than SMT, because you are adding more components to the core, whereas with SMT you are improving performance within the core.



Ehm, no. You are spewing out BS you don't even seem to understand.

How is this not exactly like I described it? It would be catastrophic if a thread could only be executed at times when the "main thread" was stalled. You seem to be very ignorant about this.

Do you even know what parallel means?

I don't get your example; it simply makes no sense.
The OS is the one scheduling the threads onto the cores, which is why there is a scheduler in EACH core (Piledriver uses 3 per core).
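Side note on the OS scheduler: on Linux you can actually see, and override, which logical CPUs a process is allowed to run on (the sched_*affinity calls are Linux-only; just a sketch):

```python
# The OS scheduler decides which logical CPU each thread runs on; on Linux
# you can inspect and restrict that placement per process (Linux-only API).
import os

print("allowed CPUs for this process:", os.sched_getaffinity(0))
os.sched_setaffinity(0, {0})               # pin this process to logical CPU 0
print("now restricted to:", os.sched_getaffinity(0))
```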

SMT = use otherwise unused resources. It is NOT only using the resources when the "main thread" is stalled.

And you seem to be missing the point of how a general-purpose processor works.


THERE IS NO SUCH THING AS VIRTUAL CORES; the term only exists to demonstrate the idea to people with limited knowledge of CPU architecture.
http://en.wikipedia.org/wiki/Simultaneous_multithreading

Call them logical cores instead if you don't like calling them threads; that at least makes some sense.
There is nothing more "real" about the "real core" than the "virtual core".





That certainly depends.
The 6300 would beat any i3 currently on the market in a pure instruction stream that is stretched out across all the cores. For a regular game it can vary.

The FX 8xxx was competing with Sandy Bridge and Ivy Bridge, and was actually doing it quite successfully.

We are talking true performance, where everything is put to the test. The i3 will NOT be much better than an FX 6300 and would certainly not be competing with the 8350; that is bullshit.





Workstation cards have special FP/SIMD hardware, and the drivers are optimised for workstation work.
Still, a high-end gamer GPU > a low-end (but just as expensive) workstation card, even for workstation work.

 


One thing to look out for with Gigabyte boards is the VRM throttling issue, which makes OCing hard sometimes even if the cooler is totally capable (the 280mm rad on the Kraken X60 is wholly capable of 4.8-5.0GHz easily). I've heard that while liquid coolers are much better for the CPU, their placement/configuration means less airflow over the VRMs (which an air cooler usually provides a bit of).
 


I was just thinking about getting the UD3 because of the colour scheme, putting all this in a white H440 from NZXT. Are there any other good 990FX/Z87 boards (if I buy the i5 now, and the i7 later when it drops in price) with a black/white colour scheme?