Question: Video render speeds with a GTX 1650 Super OC versus an RTX 3060 Ti?

Anomaly_76

I recently discovered that trying to make my 5900X / RTX 3060 Ti gaming machine do everything is compromising performance on multiple fronts, so I've moved the unnecessary hardware to a separate, more thermal- and power-friendly machine for other everyday uses such as web browsing, media, etc.

I'm putting a rather large DVD collection on the PC, and some of the discs Handbrake cannot recognize or process, so one such use is recording those DVDs with OBS, then trimming and exporting to MP4 files.

Featured is a RackChoice 2U ATX case with a B550 Aorus Master, a 3600X, an ID-Cooling IS-55, 2x16GB Patriot Viper 4 Blackout DDR4-3200, and an Asus TUF Gaming GTX1650S-O4G (my previous gaming GPU). Got the machine running yesterday, and it runs great, but... I'm noticing a good deal of difference in performance when exporting the trimmed video to MP4. The 5900X / RTX 3060 Ti would export a 50-minute TV episode in about 20-35 minutes at 30-60 fps, while the 3600X / GTX 1650 Super setup looks to take about 20-30 minutes to export a 10-minute clip of similar quality at 30 fps.

I knew this setup would not likely perform as well as the gaming rig, with half the cores, half the threads, and maybe 1/2 to 2/3 the GPU, but this is even with the 1650 boost-locked to 1650 MHz. I split the baby between the 1500 MHz base and 1800 MHz boost speeds to keep thermals down in the 2U case. I did put it through its paces with a game, and it peaks around 75°C, which is my reason for limiting the boost clock, as I thought it would generate more heat than was necessary. I honestly don't think that extra 150 MHz would help much anyway.

It's not a deal-breaker, as this one will be running 24/7 anyway, but is the GTX1650S really that much of a dog compared to the RTX3060ti for such tasks? I had noticed a speed difference in gaming, but I didn't think such tasks would be affected this drastically.

Thoughts?
 
The GTX 1650 Super's (TU116) NVENC ASIC is one generation behind that of the RTX 3060 Ti (GA104).

Refer to this table and cross-check it against the codec you intend to use for your video files.

How relevant a lot of what you mention is depends on whether you're encoding on the CPU or on the GPU/NVENC, and you haven't specified which.
 
True...

OBS is set to use NVENC H.264 at 15,000 kbps to record a 3840x2160 screen at 60 fps, with 36-sample Lanczos downscaling to 1920x1080, recording to FLV (I've found that recording straight to MP4 produces lots of glitches). I then use VideoPad to export the trimmed / finished clips to 480p or 720p MP4 at 30-60 fps.
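
Side note: if the only reason for the FLV detour is that recording straight to MP4 glitches, remuxing the finished FLV into an MP4 container afterwards is nearly instant, since nothing gets re-encoded. A minimal sketch, assuming ffmpeg is on the PATH and with placeholder file names (OBS also has a built-in File > Remux Recordings tool that does the same container swap):

```python
# Minimal sketch: remux an OBS FLV recording into an MP4 container without
# re-encoding. Assumes ffmpeg is installed and on PATH; file names are
# placeholders, not anything from this thread.
import subprocess

def remux_flv_to_mp4(src: str, dst: str) -> None:
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,                   # input FLV from OBS
            "-c", "copy",                # copy audio/video streams as-is
            "-movflags", "+faststart",   # move the MP4 index to the front
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    remux_flv_to_mp4("episode_raw.flv", "episode_raw.mp4")
```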
 
In my experience NVENC performance is much the same across different GPU tiers, and performance is constrained by how fast the CPU and memory subsystem / storage interface can feed it. So it could be the difference in CPU cores, along with memory bus width, that accounts for the difference.

Handbrake can be finicky with DVD rips and it might be worth trying a different software package that’s dedicated to ripping, rather than the more laborious process of OBS.
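
One hedged example of what a more direct route can look like: pointing ffmpeg at the disc's VOB files and letting NVENC do the encode, skipping the GUI scan entirely. This is only a sketch, the VOB names and settings are assumptions, and copy-protected discs are a separate problem:

```python
# Sketch only: transcode a DVD's VOB files directly with ffmpeg, bypassing a
# GUI ripper. Run from the disc's VIDEO_TS folder; the VOB names, bitrates,
# and NVENC availability are assumptions, and copy protection is not handled.
import subprocess

vobs = ["VTS_01_1.VOB", "VTS_01_2.VOB"]  # placeholder title segments

subprocess.run(
    [
        "ffmpeg",
        "-i", "concat:" + "|".join(vobs),  # concat protocol joins the MPEG-2 segments
        "-c:v", "h264_nvenc",              # GPU H.264 encode
        "-b:v", "4M",
        "-c:a", "aac",
        "-b:a", "192k",
        "episode.mp4",
    ],
    check=True,
)
```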
 
Wow. Didn't think the difference would quite be that great.

Unfortunately, Handbrake doesn't recognize a handful of my DVDs, which forces me to use the OBS method.
 
I did a LOT of H.264 transcoding for a major project a few years ago, and swapped between a few different GPUs. The Radeon RX 580 was the worst. The GTX 1660 Super was incrementally faster than the Quadro P400 with NVENC transcodes, and neck and neck in speed with the RTX 3060 despite the difference in generation and tier. (I actually ran a dual-GPU setup for a while, with the RX 580 for the primary display and the Quadro solely for transcoding.) Going from a Ryzen 1600 to a 3900X sped things up somewhat, but it wasn't a dramatic difference.

This was with ffmpeg-based software (Shotcut), so I can’t say how that translates to OBS.

Edit. I suppose as an experiment you could swap your 1650S into the 5900X system and run a comparative encode to see what the difference in GPUs actually is.
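
If you want a number rather than an impression, the comparison is easy to script: run an identical NVENC encode of the same source file on each machine and compare wall-clock time. A rough sketch, with the input file and encode settings as placeholders:

```python
# Rough sketch: time an identical ffmpeg NVENC encode on each machine so the
# GPU/CPU comparison is apples to apples. Source file and settings are
# placeholders; assumes ffmpeg with NVENC support is on PATH.
import subprocess
import time

def timed_nvenc_encode(src: str, dst: str) -> float:
    start = time.perf_counter()
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src,
            "-vf", "scale=1280:720",   # same downscale on both machines
            "-c:v", "h264_nvenc",
            "-b:v", "5M",
            "-c:a", "aac",
            dst,
        ],
        check=True,
    )
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"Encode took {timed_nvenc_encode('test_clip.flv', 'test_out.mp4'):.1f} s")
```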
 
I seem to remember trying that for laughs. I can still hear the 5900X laughing. The 1650S is scarred for life. 🤣

In retrospect, as the real slow-down is in the final export using this method, perhaps there are multiple factors.

Fewer cores, fewer threads, lower clock speeds (my 5900X has boosted to 5.0 GHz at times, and sees 4.9 GHz regularly), less VRAM, and a lower power budget are probably all factors, but I'm sure it doesn't help that it's having to scale down a second time in that second step.

Perhaps scaling to 480p / 720p in the original recording would help, rather than adding an extra step by forcing the video editor to rescale.
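
Along the same lines, another option would be a single downscale pass outside the editor, so VideoPad only has to trim. A sketch, with the 480p target, bitrate, and file names as assumptions:

```python
# Sketch: downscale the finished 1080p recording to 480p in one ffmpeg pass
# (Lanczos), so the editor only trims and never rescales. Resolution, bitrate,
# and file names are assumptions.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "episode_1080p.flv",
        "-vf", "scale=854:480:flags=lanczos",  # single Lanczos downscale
        "-c:v", "h264_nvenc",
        "-b:v", "2500k",
        "-c:a", "aac",
        "episode_480p.mp4",
    ],
    check=True,
)
```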
 
I was wondering about that initial 4K source sampling, and whether that was creating the performance problem. Several video packages, such as After Effects and DaVinci Resolve, consider 8 GB of VRAM the minimum for working with a 4K source. It's creating a lot more work for the transcoder with little to gain, considering DVDs are standard definition.

Just out of curiosity, have you tried a DVD rip using VLC?
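
For what it's worth, VLC can also rip from the command line with its --sout transcode chain, which sidesteps most of the GUI. A sketch under the assumption of a Windows drive letter and an H.264/AAC MP4 target:

```python
# Sketch: drive a VLC command-line rip via its --sout transcode chain. The
# drive letter, bitrates, and output name are assumptions; adjust per disc.
import subprocess

subprocess.run(
    [
        "vlc", "-I", "dummy",        # run without the GUI (assumes vlc is on PATH)
        "dvdsimple:///D:/",          # read the disc without menus; drive letter assumed
        "--sout",
        "#transcode{vcodec=h264,vb=4000,acodec=mp4a,ab=192}"
        ":standard{access=file,mux=mp4,dst=episode.mp4}",
        "vlc://quit",                # quit VLC when the rip finishes
    ],
    check=True,
)
```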
 

Perhaps it's inexperience with VLC's interface and options, but I have yet to get decent results with it.

Update. I found that even though I was isolating a specific 4-minute, 10-minute, etc. clip of a series in a 2-hour recording, the editor was still exporting the entire video, not just the desired clip. Once I corrected that, this setup is actually quite fast, about 3-5 minutes for a 20-minute video. Problem solved. :)
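
As a footnote, when a clip only needs cutting out of a longer recording with no other edits, a stream-copy trim skips re-encoding entirely; a sketch with placeholder timestamps and file names:

```python
# Sketch: cut a clip out of a longer recording without re-encoding. Timestamps
# and file names are placeholders; stream copy cuts on keyframes, so the start
# point can shift by a second or two.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-ss", "00:10:00",     # seek to the start of the clip
        "-i", "recording.flv",
        "-t", "00:20:00",      # keep 20 minutes from that point
        "-c", "copy",          # no re-encode, so this takes seconds
        "clip.mp4",
    ],
    check=True,
)
```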
 