Question: Optimizing LAN for RTMP streaming to an nginx server on the same LAN?

basspig

I'm trying to improve the stability of OBS streaming to my server PC, which runs an nginx RTMP setup per Doug Johnson's instructions.

The OBS PC is a Windows 10 Pro 21H2 machine with an i7-9700K, 64GB RAM, and an RTX 3090. OBS is streaming at a conservative 2,500 kbps to another PC on this LAN. All cables are CAT6 and under 10 feet in length; the server is inches away from the streaming PC.

It goes through a Linksys WRT3200ACM router to a Kingwin fanless server running Windows 10 Pro with an i5-3337U @ 1.8 GHz and 8GB RAM. This server runs a parallel nginx server on port 8080 for RTMP streaming.
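
For reference, the relevant part of my config follows Doug Johnson's template, roughly like this (the paths and application name here are illustrative, not my exact file):

rtmp {
    server {
        listen 1935;                      # default RTMP port OBS streams to
        application live {
            live on;
            hls on;                       # write HLS segments for browser playback
            hls_path C:/nginx/html/hls;   # illustrative path
        }
    }
}
http {
    server {
        listen 8080;                      # the parallel HTTP server mentioned above
        location /hls {
            root C:/nginx/html;           # illustrative root; serves the .m3u8/.ts files
        }
    }
}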

The setup works, but I find that OBS cannot maintain a steady stream at 2,500 kbps. If I lower the rate drastically, down to blocky, pixelated bitrates, it seems more reliable.

Oddly, I can multistream to the internet using Sora Yuki's Multiple Output plugin, streaming to Facebook, YouTube, Twitch, and Odysee simultaneously over my gigabit fiber connection without any problems. It's streaming to another PC on my internal LAN that runs into the OBS stream rate dropping to zero. This happens with no pattern I can discern: sometimes OBS will drop out three times in ten minutes, sometimes it will go for hours without stalling.

Even more odd, large file transfers between PCs on the LAN are fast, better than 90 megabytes/sec. It's just RTMP that can't handle a few hundred kilobytes per second.

Are there LAN adapter settings that can be changed to optimize streaming over an in-house LAN? I've experimented with things like jumbo frames, but they prevent OBS from even connecting to the nginx server. What settings in the LAN adapter can be adjusted to make RTMP streaming reliable at HD rates?
 
There really are no settings that will make a LAN faster. You cannot use jumbo frames unless every piece of equipment supports them, and consumer routers do not. They don't make much difference until you get above 1 Gbit anyway.

I would leave all the settings at default. Your file transfer shows that the LAN is fine; it should actually be a little over 100 MB/s, but the shortfall could be the disk or some other kind of overhead.
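
Rough math on the ceiling: 1 Gbps / 8 = 125 MB/s on the wire, and Ethernet + IP + TCP headers eat roughly 5-6% of that, so a bit under 118 MB/s is the practical best case. 90 MB/s means the bottleneck is the disk or the file-sharing protocol, not the network.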

Since it streams fine to the internet, I would suspect the other PC has some issue, since that is the only different part.

Other than that, check for any software that claims to do QoS or favor one type of traffic over another, on either machine. cFosSpeed is a common one bundled with the bloatware on motherboards and video cards. You want to uninstall any software like this.

If the problem happened more often it would be easier to troubleshoot. Maybe there is a message in Event Viewer around the time it occurs.

The brute-force approach would be to run Wireshark and let it decode the RTMP messages between the devices. The problem is that you are going to get massive capture files when the issue happens so seldom. You can set the buffer size lower if you are fast enough to stop Wireshark when it happens.
If there is an RTMP problem you should see some kind of error; at the least you might figure out which end is having the issue.
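
One way to keep the capture files manageable is Wireshark's ring buffer, for example with the dumpcap tool that ships with Wireshark (the interface number and the assumption that RTMP is on its default port 1935 are things you'd need to check on your setup):

dumpcap -i 1 -f "tcp port 1935" -b filesize:100000 -b files:20 -w rtmp_ring.pcapng

That keeps the 20 most recent 100 MB files and overwrites the oldest, so you just stop it after a stall and the event is in the newest file.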
 

basspig

I'm going to install Wireshark on the server and log activity.

I've been playing with larger TX and RX buffer sizes on the adapter. That seemed to help, as I haven't had OBS stall since making the changes.
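
For anyone following along: I changed them in Device Manager, but the same settings can be read and set from PowerShell, something like this (the display names "Receive Buffers"/"Transmit Buffers" and the adapter name "Ethernet" vary by driver, so check yours first):

Get-NetAdapterAdvancedProperty -Name "Ethernet"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Receive Buffers" -DisplayValue 2048
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Transmit Buffers" -DisplayValue 2048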

Jumbo frames were turned on when I discovered a few weeks back that speed tests showed only about half my fiber throughput. Enabling jumbo frames increased my speed test numbers from around 500 Mbps to 971 Mbps.

I was doing some "extreme" testing to see if I could break the system faster, and cranked the OBS bitrate to 52 Mbps. I watched Task Manager with the Ethernet tab selected and noticed a pretty steady 52 Mbps outbound, but the receive rate (I was watching the stream on this same PC, so I expected data coming back) was in the 40 kbps range with occasional spikes to 133 Mbps. What might be going on there? The send graph is nearly a flat line; the receive graph is a series of tall spikes spaced several seconds apart.
 
Not sure why jumbo frames would make any difference to a speed test. Even if all your equipment supported them, the internet does not allow packets larger than a 1500-byte MTU to pass, and the usable payload is actually a bit less once the IP and TCP headers take up some space. Note that just setting the jumbo frames option likely does nothing unless you also change the MTU size. The problem is that if you try to send, say, a 5000-byte packet over a 1500-byte connection, it will either be dropped or fragmented, and packet fragmentation greatly increases the CPU load on the receiving machine, which has to reassemble it. But none of that matters here: your router's LAN port will likely not accept any packet bigger than 1500 bytes.
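
You can prove the 1500-byte limit to yourself with ping's don't-fragment flag. On Windows, 1472 bytes of payload plus 28 bytes of ICMP/IP headers is exactly 1500:

ping -f -l 1472 8.8.8.8    (fits; should get normal replies)
ping -f -l 1473 8.8.8.8    (should fail with "Packet needs to be fragmented but DF set")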

You actually likely want a smaller buffer size for your application. Most of the time this is a hardware thing you can't change, since it is part of the Ethernet chipset itself. Large buffers exist to reduce data loss when you are running very large data transfers. Your application would actually prefer to lose data rather than have it buffered behind, say, a file transfer running at the same time. It likely makes no difference here, though: you seldom see a gigabit port maxed out, so nothing is being buffered anyway.

The simplest test between two machines in your house is an old command-line program called iperf. It does not touch the disk and uses extremely small amounts of CPU and memory, so it pretty much just tests the hardware and the network drivers. You should see well over 900 Mbps in both directions.
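
Usage is simple: run it in server mode on one machine and point the other at it, then test the reverse direction (iperf3 syntax; 192.168.1.50 is a placeholder for your server's LAN IP):

iperf3 -s                          (on the server PC)
iperf3 -c 192.168.1.50 -t 30       (on the OBS PC, 30-second test)
iperf3 -c 192.168.1.50 -t 30 -R    (same, but the server sends and the client receives)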
 

basspig

Hmm, that runs counter to intuition; one would think a larger buffer reduces data loss. The Intel LAN info page says the only downside is that it uses more memory, but it may improve network performance.

Okay on the internet MTU; I do recall that now that you bring it to my attention. I'm guessing it was a coincidence that the speed test was poor before enabling jumbo frames. I disabled the feature and Speedtest is about the same now. I swapped out the LAN cable for a certified CAT6 cable and now my upload speeds start around 110 Mbps and ramp up to about 899 Mbps. Seems like it did better (979 Mbps) with the old cable, which might have been CAT5.

I tried iperf and ran the executable on my server, but it does not seem to run. I can't see a window nor anything new in Task Manager.
 

basspig

Okay, I figured out that this is a command-line program and then had to research how to use it. I got the following results:

[ 5] 9.00-10.00 sec 112 MBytes 937 Mbits/sec
[ 5] 10.00-11.00 sec 112 MBytes 940 Mbits/sec
[ 5] 11.00-12.00 sec 112 MBytes 940 Mbits/sec
[ 5] 12.00-13.00 sec 112 MBytes 940 Mbits/sec
[ 5] 13.00-14.00 sec 112 MBytes 940 Mbits/sec
[ 5] 14.00-15.00 sec 112 MBytes 940 Mbits/sec
[ 5] 15.00-16.00 sec 112 MBytes 940 Mbits/sec
[ 5] 16.00-17.00 sec 112 MBytes 940 Mbits/sec
[ 5] 17.00-18.00 sec 112 MBytes 940 Mbits/sec
[ 5] 18.00-19.00 sec 112 MBytes 940 Mbits/sec
[ 5] 19.00-20.00 sec 112 MBytes 940 Mbits/sec
[ 5] 20.00-21.00 sec 112 MBytes 940 Mbits/sec
[ 5] 21.00-22.00 sec 112 MBytes 940 Mbits/sec
[ 5] 22.00-23.00 sec 112 MBytes 940 Mbits/sec
[ 5] 23.00-24.00 sec 112 MBytes 940 Mbits/sec
[ 5] 24.00-25.00 sec 112 MBytes 940 Mbits/sec
[ 5] 25.00-26.00 sec 112 MBytes 940 Mbits/sec
[ 5] 26.00-27.00 sec 112 MBytes 940 Mbits/sec
[ 5] 27.00-28.00 sec 112 MBytes 940 Mbits/sec
[ 5] 28.00-29.00 sec 112 MBytes 940 Mbits/sec
[ 5] 29.00-30.00 sec 112 MBytes 940 Mbits/sec
[ 5] 30.00-30.02 sec 2.17 MBytes 926 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-30.02 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-30.02 sec 3.28 GBytes 938 Mbits/sec receiver
-----------------------------------------------------------
Server listening on 5201
----------------------------------------------
 
First, the buffers are only used if the port is at 100% utilization, which is highly unlikely on a gigabit port.

This is a variation of the standard bufferbloat issue, but bufferbloat generally occurs in a router connected to a link slower than 1 Gbit.

The buffers do reduce data loss, BUT at the expense of delaying the data by holding it in a buffer. For a file transfer, data loss causes TCP to go into a fallback mode: the sending side drops its transfer speed a bit and then ramps it back up. Buffering the data avoids this.

So for data transfers buffering is a good thing. Streaming video, and online games especially, do not like data being delayed. Unlike file transfers, these applications do not resend lost data; they just skip the missing part. For video you will not even notice a lost frame here or there; it's a bunch of frames in a row arriving late that you tend to see on a live stream. Things like YouTube or Netflix use large preload buffers to hide this.
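
To put a number on the delay: a 1 MB buffer that fills up in front of your 2.5 Mbps stream holds 8 Mbits / 2.5 Mbps = 3.2 seconds of video. A file transfer doesn't care about 3.2 seconds of queueing; a live stream visibly stalls.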

In any case, it appears your LAN works fine. That means it has to be some software that is somehow limiting the traffic, or maybe the application functions differently when it runs on a LAN rather than over the internet. When I was working I did pure networking, so I can't tell you much about how Windows applications behave.
 

Ralston18

Also try Resource Monitor to observe system performance, much like you did with Task Manager.

Just repeat the streaming and file transfer tests again with Resource Monitor.

In Resource Monitor, watch what resources are being used, to what extent (%), when changes occur, and what processes/apps are involved.

Process Explorer (Microsoft, free) may also help discover what is going astray.

https://learn.microsoft.com/en-us/sysinternals/downloads/process-explorer
 

basspig

I ran Wireshark and captured a good amount of data. There are "malformed packets" showing up in the RTMP protocol.
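
For anyone wanting to reproduce this, a display filter along these lines will pull them out (assuming RTMP is on its default port 1935):

tcp.port == 1935 && _ws.malformed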

As for resources, about 12-22% CPU. Most of the time, the server CPU idles down to 0.83 GHz.

I just switched the power plan from Balanced to High Performance. That bumped the CPU clock up to 2.3 GHz.
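
(For reference, the same switch can be made from an admin command prompt; the GUID is the stock High Performance scheme, and powercfg /list shows the schemes on your machine:

powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c)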

UPDATE: Turning up the clock speed did nothing. OBS is still going RED and dropping frames streaming to this server.



https://drive.google.com/file/d/1Pu4hXd8mRmaJIjHWlEZPWiuQlgNEo-Rn/view?usp=sharing
 

basspig

What is really strange is that OBS "senses" the user connection to nginx: when the stream rate drops to zero and the playlist files disappear from the /HLS folder, playback in the browser stops. But if I refresh the browser, OBS starts sending the stream to nginx again. I thought OBS was streaming only to nginx, not to the end user, so why does the end user stimulate OBS to start streaming again after a stall?
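
For context, the HLS part of my config is essentially stock, something along these lines (values illustrative, not my exact file):

application live {
    live on;
    hls on;
    hls_path C:/nginx/html/hls;   # the /HLS folder mentioned above
    hls_fragment 3s;              # segment length
    hls_playlist_length 60s;      # how much playlist nginx keeps around
}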
 
basspig said: "Hmm, that runs counter to intuition; one would think a larger buffer reduces data loss. [...]"

For highly congested networks, this research team discovered that smaller buffers are better than the older calculation suggested. It might not translate to home networks, as I doubt there are studies on that. It was a big deal because router costs were able to come down quite a bit once boards no longer needed extra memory chips for buffering.

https://dl.acm.org/doi/10.1145/1015467.1015499
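
The headline result from that paper, in case the link is paywalled: the classic rule of thumb sizes a router buffer at B = RTT x C (round-trip time times link capacity), and the authors show that with n long-lived flows sharing the link, B = (RTT x C) / sqrt(n) is enough.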
 

basspig


But my network has little traffic.

Someone in the OBS community suggested testing with MonaServer, so I tried it out. OBS seems to have no difficulty streaming to MonaServer (the only problem is no HLS support, so I can't stream to web browsers). I was able to crank the OBS bitrate to 92 Mbps with no dropped frames. The network passes every test I can throw at it.
But nginx seems to have trouble handling even a 2,000 kbps stream. It must be a problem with the Griffin release of nginx.
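
One more isolation step would be taking OBS out of the picture entirely and pushing a synthetic test stream at nginx with ffmpeg, something like this (the URL assumes the application is named "live" and 192.168.1.50 stands in for the server's IP; adjust to your config):

ffmpeg -re -f lavfi -i testsrc=size=1280x720:rate=30 -pix_fmt yuv420p -c:v libx264 -b:v 2500k -f flv rtmp://192.168.1.50/live/test

If that also stalls, the problem is on the nginx side; if it runs clean, it points back at OBS's RTMP output.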