Outbound Policy for uneven WANs

I have 2 WANs:

  • Fiber internet (25 Mbps down / 25 Mbps up)
  • Cable internet (500 Mbps down / 20 Mbps up)

Generally, I want most traffic to use the Cable connection, since I run servers on the Fiber connection and I don’t want to saturate it.

However, I was doing a big upload today via SFTP and noticed my web browsing slowing to a crawl. It looks like saturating the Cable modem’s upload was causing trouble on the download side.

Right now I’m using Least Used as the Default Outbound Policy, but I realized this is probably not what I want, since it watches the download bandwidth and apparently ignores the upload.

I want a rule which says “Prefer the cable modem connection, but if it gets slow, then use the Fiber connection instead”.

Here are the rules I can choose from:

Weighted Balance - Traffic will be proportionally distributed among available WAN connections according to the specified load distribution weight.

Persistence - Traffic coming from the same machine will be persistently routed through the same WAN connection.

Enforced - Traffic will be routed through the specified connection regardless of the connection’s health status.

Priority - Traffic will be routed through the healthy connection that has the highest priority.

Overflow - Traffic will be routed through the healthy WAN connection that has the highest priority and is not in full load. When this connection gets saturated, new sessions will be routed to the next healthy WAN connection that is not in full load.

Least Used - Traffic will be routed through the healthy WAN connection that is selected in the field Connection and has the most available downlink bandwidth.

Lowest Latency - Latency checking packets will be periodically sent to all selected healthy connections. Latency will then be determined by the response time of the second and third hops. New traffic will then be routed to a healthy connection with the lowest average latency during that detection period.

Fastest Response Time - Traffic will be duplicated and sent to all selected healthy connections. The connection with the earliest response will be used to send all further traffic from the session for the fastest possible response time. If there are any slower responses received from other connections afterwards, they will be discarded. As a result, this algorithm selects the most responsive connection on a per-session basis.
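To make sure I understand the differences, here’s a little Python sketch of how I picture Priority, Overflow and Least Used choosing a WAN for a new session. This is only my mental model with made-up numbers for my two WANs, not how the firmware actually works:

```python
# Toy model of how I *imagine* three of these policies pick a WAN for a
# new session. Not Peplink's actual code, just my mental model.

from dataclasses import dataclass

@dataclass
class Wan:
    name: str
    priority: int       # lower number = higher priority
    down_cfg: float     # configured downlink, Mbps
    down_used: float    # current downlink usage, Mbps
    healthy: bool = True

def pick_priority(wans):
    """Priority: the healthy connection with the highest priority."""
    return min((w for w in wans if w.healthy), key=lambda w: w.priority)

def pick_overflow(wans, threshold=0.95):
    """Overflow: highest-priority healthy WAN below ~95% of its defined
    download bandwidth; otherwise spill to the next one."""
    healthy = sorted((w for w in wans if w.healthy), key=lambda w: w.priority)
    for w in healthy:
        if w.down_used < threshold * w.down_cfg:
            return w
    return healthy[0]   # everything saturated: stay on top priority (a guess)

def pick_least_used(wans):
    """Least Used: healthy WAN with the most available downlink."""
    return max((w for w in wans if w.healthy),
               key=lambda w: w.down_cfg - w.down_used)

cable = Wan("Cable", priority=1, down_cfg=500, down_used=480)
fiber = Wan("Fiber", priority=2, down_cfg=25, down_used=2)

print(pick_priority([cable, fiber]).name)    # Cable (top priority, still healthy)
print(pick_overflow([cable, fiber]).name)    # Fiber (Cable is past 95% of 500)
print(pick_least_used([cable, fiber]).name)  # Fiber (23 Mbps free vs 20 on Cable)
```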

Lowest Latency might sound good, but since the Fiber almost always has better latency (under 10 ms) than the Cable (20-40 ms), it would end up using the Fiber almost all the time.

Overflow might work, but it depends on how “is not in full load” is defined: does that mean upload bandwidth or download?

Edit to add: it looks like it’s download: “When this connection gets saturated (95% of defined download bandwidth), new sessions will be routed to the next healthy WAN connection that is not in full load.” (source) So for my Cable (defined at 500 Mbps down), overflow wouldn’t trigger until roughly 475 Mbps of download, and a saturated 20 Mbps upload will never cause that.

Ideas?

I have yet to solve this issue, but I’ve now updated to Firmware 8.4, which has a new option:

Least Used:
• By Downlink
• By Uplink

I tried setting it to By Uplink, but I’m still not happy with the behavior: since the Fiber has more uplink (25 Mbps) than the Cable (20 Mbps), connections default to the Fiber when idle, which is not what I want either.

I think what I want is an Overflow setting which is triggered off Uplink rather than Downlink.
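In other words, something like this. It’s just a sketch of the rule I’m after, using my Cable uplink of 20 Mbps and a guessed 95% threshold; these names aren’t from the Peplink UI:

```python
# The rule I wish existed: Overflow keyed on *uplink* usage.
def pick_wan(cable_up_used_mbps, cable_up_cfg_mbps=20, threshold=0.95):
    """Prefer Cable; spill new sessions to Fiber once Cable's uplink
    is roughly 95% full."""
    if cable_up_used_mbps < threshold * cable_up_cfg_mbps:
        return "Cable"
    return "Fiber"

print(pick_wan(5))     # Cable  (5 of 20 Mbps used)
print(pick_wan(19.5))  # Fiber  (19.5 > 0.95 * 20 = 19)
```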

Any ideas?

If I choose Least Used / By Uplink, the problem is that my Fiber connection has more uplink (25 Mbps) than my Cable (20 Mbps), so the Fiber becomes the default, which I don’t want.

I have an idea: the uplink bandwidth values are under my control (set under Network / WAN / [wan name] / WAN Connection Settings).

What if I were to lie to the Peplink about the upload bandwidth?

If I set the uplink bandwidth as follows:

  • WAN 1 (Fiber) uplink bandwidth = 5 Mbps (actual = 25 Mbps)
  • WAN 2 (Cable) uplink bandwidth = 20 Mbps (actual = 20 Mbps)

In theory (quick sketch of the math below):

  • under light or zero loads, WAN 2 will be chosen since it has more uplink bandwidth (even though that’s technically a lie).
  • once WAN 2 uplink gets over 15 Mbps (about 75%), then WAN 1 will appear to have more uplink available, and WAN 1 will be used.
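Here’s that math as a quick sketch, assuming Least Used / By Uplink simply picks whichever WAN has the most configured-minus-used uplink (my reading of the description, not confirmed behavior):

```python
# Spoofed uplink settings from above: Fiber really has 25 Mbps but is
# entered as 5; Cable is left at its true 20 Mbps.
UPLINK_CFG = {"Fiber": 5, "Cable": 20}   # Mbps, as entered in WAN settings

def least_used_by_uplink(uplink_used):
    """Pick the WAN with the most *available* configured uplink."""
    return max(UPLINK_CFG, key=lambda w: UPLINK_CFG[w] - uplink_used[w])

# Idle: Cable has 20 Mbps free vs Fiber's 5, so Cable wins.
print(least_used_by_uplink({"Fiber": 0, "Cable": 0}))    # Cable
# Cable pushing 16 Mbps up: 20 - 16 = 4 free vs Fiber's 5, so Fiber takes over.
print(least_used_by_uplink({"Fiber": 0, "Cable": 16}))   # Fiber
```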

I think this might solve my problem, though I’m not sure if there are any issues with intentionally lowering the upload bandwidth value for one WAN.

Interesting, I just did a test of this:

  • applied the settings shown above
  • started a large FTP upload. Result: the upload goes out on WAN 2 (good)
  • started another large FTP upload. Result: the upload goes out on WAN 1 (good).

Problem:

  • the WAN 1 upload is capped at 5 Mbps (the artificially low value I set for the WAN 1 uplink bandwidth)

[screenshot: upload throughput capped at 5 Mbps]

Another test: this time I turned OFF the DSL/Cable optimization setting:

[screenshot: DSL/Cable Optimization setting disabled]

And now I’m getting closer to full bandwidth on both devices:

[screenshot: upload throughput after disabling DSL/Cable Optimization]

In my experience, the download bandwidth is not just “watched”: the router actually caps (throttles) traffic so that it does not exceed the configured download (or upload) bandwidth.

I initially assumed the bandwidth values were just used by the Least Used algorithm to decide how to route; i.e., just as the Weighted algorithm uses a slider for, say, a 10:7 weight, I assumed Least Used would use a 500:25 ratio.

I have Starlink as my WAN 1, and its download speeds are all over the place, anywhere from 20 Mbps to 500 Mbps, with an average of about 80.

So I figured that if I set the download bandwidth to 80 for Starlink and 50 for my Verizon Home Internet, I’d get an 80:50 ratio.

But in reality, the router capped my Starlink connection, and no speed test would exceed 80 Mbps.

I then lowered the download bandwidth to 10, and indeed all speed tests were capped at 10 Mbps!
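Put differently, the configured number seems to act as a hard ceiling rather than a routing hint; roughly like this (my guess at the effect, not the actual firmware logic):

```python
# My guess at what the router does with the configured bandwidth value:
# shape the link down to it, rather than only using it to weigh routing.
def effective_speed(actual_mbps, configured_mbps):
    """Observed throughput can't exceed the smaller of the two."""
    return min(actual_mbps, configured_mbps)

print(effective_speed(500, 80))  # 80 -> why every Starlink test topped out at 80
print(effective_speed(500, 10))  # 10 -> and at 10 after I lowered the setting
```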

So now I have the bandwidth for Starlink set to 1000, which is way too optimistic, and Verizon at 200 to allow for bursting, and I no longer use Least Used.

I also do not use Lowest Latency, since Starlink always has the lowest latency.

I ended up deciding on Fastest Response Time, which is not ideal; now that I think about it, I should probably use Weighted and set my own ratio.
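My understanding is that Weighted just spreads new sessions in proportion to the configured weights, roughly like this toy model (only an illustration using the 80:50 ratio I originally had in mind, not Peplink’s actual algorithm):

```python
import random
from collections import Counter

# Toy weighted balance: each new session is assigned to a WAN in
# proportion to its weight (here the 80:50 ratio I had in mind).
WEIGHTS = {"Starlink": 80, "Verizon": 50}

def pick_wan_weighted():
    return random.choices(list(WEIGHTS), weights=list(WEIGHTS.values()))[0]

print(Counter(pick_wan_weighted() for _ in range(10_000)))
# Roughly 80 Starlink sessions for every 50 Verizon sessions.
```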

Peplink should warn users that they should NOT set the bandwidth to a value lower than the real one.

But best would be if, just as latency is watched and computed, the bandwidth were watched and automagically computed in real time.