Question: Can a modern single core (with/without hyperthreading) handle recording 1080p gameplay?

Dmarkojr

Reputable
Jun 30, 2016
14
0
4,520
1
Long story short: currently running an i5 3470 (4c/4t) + GTX 970. Recording VR games with the 970 causes input lag (most things push it near 100% utilization already), and everything wants at least 4 cores.
That being said, I tested my CPU, and on one core I can record up to 720p 30fps at 5 Mb/s (bitrate didn't affect CPU utilization too much) without dropping frames. Given the decent increase in single-core performance since 2012, could a single core of a Ryzen 5 3600 handle something closer to 1080p 60fps at 10-15 Mb/s? Would recording on 1 core/2 threads help any vs. just 1c/1t?

Just curious, as I plan on upgrading the whole system later in the year (after the new consoles come out, given GPU prices should fall a bit). I plan on an R5 3600, but if decent 1080p recording needs 2c/4t of it, I'll go with a 3700X. I know capture cards exist, but I'd get use out of the 2 extra cores in Blender.

Thank you for any suggestions
 

Dmarkojr
The 3600 is a significant boost from the 3470 and also adds 2 real cores plus hyperthreading:
https://www.cpubenchmark.net/compare/AMD-Ryzen-5-3600-vs-Intel-i5-3470/3481vs822

Just imagine having 25% more cpu power and 2 more cores and I think you'll know the answer. ;)
Thanks for the response. Raw performance alone would tell me no, given that the jump from 720p 30 to 720p 60/1080p 30 sends my CPU from 45-70% single-core utilization to 100% with dropped frames.
I searched for encoding benchmarks to see if I could find something that covers both CPUs for a reference point.
I found this: https://openbenchmarking.org/showdown/pts/x264
Didn't look into what settings they used beyond H.264, but that doesn't matter too much. Going to do some quick, not-so-accurate theoretical math for this:
So my CPU maxes out with dropped frames when doing any 1080p/720p 60, so I'll use the max of 70% utilization at 720p 30 for reference.
The benchmark shows the i5 3470 at 19.83 fps (I'm just going to refer to their fps as "points" for clarity later) and the R5 3600 at 71.55 points. Given the AMD Opteron results are right up there, I'm going to assume all cores are used.
To limit to single core (with or without hyperthreading), I'll divide the 19.83 points of the i5 by 4 (again, quick theoretical math assuming even scaling/performance among all cores). That leads to 4.9575 points per core of the i5 3470.
The r5 3600's 71.55 is divided by 6 for 11.925 points per 1c/2t.
Now I know that only 70% of my cpu is used (at max) when recording 720p 30 (assuming bitrate is kind of negligible as doubling it only added 2-3% usage). So to scale that according to the benchmark, 0.7*4.9575 points = 3.47025.
So we've established that in order to record 720p 30fps 5mbs, you'd need 3.47025 points of power.
So doing some basic scaling: 1280x720x30 = 27,648,000 pixels per second can be achieved by 3.47025 points.
27,648,000/3.47025=7,967,149.341 per point
7,967,149.341*11.925 (r5 3600 per core) =95,008,255.889
95,008,255.889 pixels per second can be encoded by a single core of the 3600. Dividing this by a resolution's pixel count gives the theoretical framerate it can handle, based on my bs math.
So 95,008,255.889/(1920*1080)=45.8fps.
So in theory, no. 1c/2t of a 3600 couldn't handle 1080p 60fps recording. 30fps would be fine though. 720p 60fps shouldn't be an issue.
[Just inserting this here because this was a long train of thought, and I'm bound to make a stupid mistake. Feel free to call me out on killing a few brain cells]
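The whole back-of-envelope calculation above can be sketched in a few lines of Python (same numbers, same even-per-core-scaling and linear-with-pixel-rate assumptions; "points" are the openbenchmarking x264 fps figures):

```python
# Back-of-envelope estimate: can one core of an R5 3600 record 1080p60?
# Assumes encoding cost scales linearly with pixels/second and that the
# benchmark score splits evenly across cores.

I5_3470_POINTS = 19.83   # x264 benchmark, all 4 cores
R5_3600_POINTS = 71.55   # x264 benchmark, all 6 cores

points_per_core_i5 = I5_3470_POINTS / 4   # 4.9575 points per core
points_per_core_r5 = R5_3600_POINTS / 6   # 11.925 points per 1c/2t

# Recording 720p30 peaks at ~70% of one i5 core.
points_needed_720p30 = 0.7 * points_per_core_i5   # 3.47025 points

pixels_per_sec_720p30 = 1280 * 720 * 30           # 27,648,000 px/s
pixels_per_point = pixels_per_sec_720p30 / points_needed_720p30

# Pixel budget for one 3600 core, then the 1080p framerate it implies.
budget_r5_core = pixels_per_point * points_per_core_r5
fps_1080p = budget_r5_core / (1920 * 1080)

print(f"1080p fps on one 3600 core: {fps_1080p:.1f}")  # ~45.8
```

Run it and you get the same ~45.8 fps conclusion: 1080p30 and 720p60 fit in the budget, 1080p60 does not.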
 
Some great math there! And they say basic algebra and unit conversions are boring! Why didn't they have word problems like this rather than the two trains always colliding? haha.

So the only two things you should consider are: one, this is a single source of information, and if their results are wrong/skewed/etc., so are yours; and two, your analysis was quite linear, and if there are any nonlinear effects in the data, a linear analysis will miss them.

Overall though you've done a good job of analyzing this for sure. :) Kudos to some of the best online thinking I've seen on a forum. :D
 
Long story short: currently running an i5 3470 (4c/4t) + GTX 970. Recording VR games with the 970 causes input lag (most things push it near 100% utilization already), and everything wants at least 4 cores.
https://www.anandtech.com/show/5871/intel-core-i5-3470-review-hd-2500-graphics-tested/2
You do realise that your CPU has hardware video encoding (Quick Sync) as well, right?
It's pretty old by now, but I think it should manage 1080p/60 pretty easily; just connect the motherboard's video output to a second input on your monitor.

Also, never confine encoding to a single core. Even if it works, it's not a good idea; you should let the Windows scheduler handle the load balancing and only tweak priority settings to tell it what to prioritize.
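For reference, offloading to Quick Sync usually just means picking the hardware encoder in your recording tool. A minimal sketch of what that looks like with ffmpeg's h264_qsv encoder (assuming an ffmpeg build with QSV support and Windows desktop capture; the bitrate and filename are placeholders):

```python
# Hypothetical ffmpeg command line using the iGPU's Quick Sync H.264
# encoder instead of the CPU cores. Built as a list for subprocess use.
cmd = [
    "ffmpeg",
    "-f", "gdigrab",        # Windows desktop-capture input
    "-framerate", "60",
    "-i", "desktop",
    "-c:v", "h264_qsv",     # hardware H.264 encode on the iGPU
    "-b:v", "12M",          # ~12 Mb/s, within the 10-15 Mb/s range discussed
    "gameplay.mp4",         # placeholder output name
]
print(" ".join(cmd))
```

You'd pass `cmd` to `subprocess.run(cmd)`; OBS exposes the same encoder as "QuickSync H.264" in its output settings.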
 

mitch074

Distinguished
https://www.anandtech.com/show/5871/intel-core-i5-3470-review-hd-2500-graphics-tested/2
You do realise that your CPU has hardware video encoding (Quick Sync) as well, right?
It's pretty old by now, but I think it should manage 1080p/60 pretty easily; just connect the motherboard's video output to a second input on your monitor.

Also, never confine encoding to a single core. Even if it works, it's not a good idea; you should let the Windows scheduler handle the load balancing and only tweak priority settings to tell it what to prioritize.
In general you would be right; however, with something like video encoding, core migration can add a delay because of cache flushes, and that causes dropped frames. This is even worse on AMD because of the CCX layout: both the L2 and L3 caches need flushing then. And no, the Windows scheduler is no good at this.
 
In general you would be right; however, with something like video encoding, core migration can add a delay because of cache flushes, and that causes dropped frames. This is even worse on AMD because of the CCX layout: both the L2 and L3 caches need flushing then. And no, the Windows scheduler is no good at this.
Core migration happens anyway unless you bother to lock down both the encoding and the game to specific cores with affinity, because otherwise the game is bound to be migrated to the core the encoding happens on; and even then there is probably still migration between all the cores you give to a task.

Locking encoding to a single core just means that any time a background task is scheduled on that core, you're going to get lower performance.
 

mitch074
Core migration happens anyway unless you bother to lock down both the encoding and the game to specific cores with affinity, because otherwise the game is bound to be migrated to the core the encoding happens on; and even then there is probably still migration between all the cores you give to a task.

Locking encoding to a single core just means that any time a background task is scheduled on that core, you're going to get lower performance.
Except if you also adjust its priority; and in all cases you at least avoid the cache flushing.
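For anyone wanting to experiment with this, here's a minimal sketch of setting affinity programmatically. It uses the Linux-only `os.sched_setaffinity` from the standard library; on Windows you'd use Task Manager's "Set affinity"/"Set priority" entries or psutil's `cpu_affinity()`/`nice()`. The core numbers and nice delta are placeholder assumptions, not recommendations:

```python
import os

def pin_and_deprioritize(pid: int, cores: set[int], nice_delta: int = 0) -> set[int]:
    """Restrict `pid` (0 = current process) to `cores` and return the new mask."""
    os.sched_setaffinity(pid, cores)   # Linux-only; on Windows use Task Manager or psutil
    if nice_delta:
        os.nice(nice_delta)            # Unix niceness; affects the *calling* process only
    return os.sched_getaffinity(pid)

# Pin this process to logical CPUs 0 and 1 (one physical core plus its
# SMT sibling on a typical layout), mirroring the 1c/2t idea above.
print(pin_and_deprioritize(0, {0, 1}))
```

With affinity locked and priority raised for the encoder, the cache-flush cost of migration is avoided, at the price of any background task scheduled on that core stealing time from it.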
 
