Hi all,
I’m setting up a Quality-of-Service architecture and would like to hear your feedback on the Peplink implementation.
I mainly have three different kinds of traffic:
- Mission Critical traffic: defined by LAN IP, and/or remote IP, and/or TCP/UDP ports
- Business Critical traffic: same
- Best-effort traffic: same
Mission Critical traffic must have the highest priority at all times, even starving out Business Critical traffic when necessary. Business Critical traffic is more important than Best Effort, and a maximum allowed bandwidth can be defined for it. Best Effort has the lowest priority but can't be neglected: within its class, this traffic should be treated as well as possible. It can be large file transfers (data sync) that may take longer, but that must be allowed to consume all available bandwidth except what is reserved for MC/BC.
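To make the intended policy concrete, here's a toy sketch in Python of what I'm after: strict priority MC > BC > BE, with a cap on BC only (all names and numbers below are mine for illustration, not anything from the Peplink implementation):

```python
from collections import deque

BC_CAP_BPS = 4_000_000       # assumed "maximum allowed bandwidth" for BC

queues = {"MC": deque(), "BC": deque(), "BE": deque()}
bc_bytes_this_second = 0     # naive per-second accounting for the BC cap

def dequeue_next():
    """Pick the next packet to send: MC always first (may starve the rest),
    BC next unless it has exceeded its cap, BE last. BE is free to use all
    bandwidth left over by MC/BC."""
    global bc_bytes_this_second
    if queues["MC"]:
        return queues["MC"].popleft()
    if queues["BC"] and bc_bytes_this_second * 8 < BC_CAP_BPS:
        pkt = queues["BC"].popleft()
        bc_bytes_this_second += len(pkt)
        return pkt
    if queues["BE"]:
        return queues["BE"].popleft()
    return None
```

This is only the desired behaviour, not a claim about how the Pepwave scheduler actually works.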
Mission/Business Critical traffic is sent in SpeedFusion tunnels with FEC/WAN Smoothing settings. Best-effort traffic can be either: internet traffic sent through the tunnel, or local break-out.
WAN connections are Cellular, Starlink and Wi-Fi WAN on offshore and inland vessels, so they vary depending on location and availability. Since setting the WAN connection bandwidth is essential for making QoS priorities kick in, this gets complicated: when these are set to the normal inland bandwidths, QoS doesn't kick in once the connection degrades with increasing distance, which is exactly where QoS is most needed. When setting them lower, you don't exploit the available bandwidth and restrict yourself too much. (I've noticed and tested that when setting a WAN interface to e.g. 2 Mbps, you effectively can't send more than 2 Mbps over that interface; it's a hard limit, not an expected bandwidth.)
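The dilemma from my testing, in two lines (assuming my interpretation that the configured upload bandwidth is a hard shaper is correct):

```python
def usable_bps(configured_bps: int, actual_link_bps: int) -> int:
    """If the configured WAN upload bandwidth is a hard limit, the usable
    rate is min(configured, actual). Configuring 2 Mbps on a healthy 50 Mbps
    link wastes capacity; configuring 50 Mbps on a link degraded to 2 Mbps
    means the scheduler never sees saturation, so QoS never engages."""
    return min(configured_bps, actual_link_bps)
```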
If I'm correct, the Peplink scheduler on the WAN interfaces serves SpeedFusion packets first. (Yes, I have SpeedFusion VPN Traffic Optimization enabled.) When there are no SpeedFusion packets in the send buffer, the other packets are taken and treated according to their QoS priority, and that priority only has an effect when the rate of packets entering the send buffer exceeds the available upload bandwidth of the interface. Please correct me if I'm wrong.
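In other words, my working model of the WAN egress order is this (again just my assumption to be confirmed, not documented Peplink behaviour):

```python
def pick_wan_packet(sf_queue, classed_queues):
    """sf_queue: queued SpeedFusion packets (served first with VPN Traffic
    Optimization enabled); classed_queues: non-tunnel queues ordered highest
    QoS priority first. The priority ordering only matters once the send
    buffer fills faster than the configured upload bandwidth drains it."""
    if sf_queue:                 # SpeedFusion always wins
        return sf_queue.pop(0)
    for q in classed_queues:     # then strict QoS priority order
        if q:
            return q.pop(0)
    return None
```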
Next, within a Speedfusion tunnel, the same happens.
But how is traffic in two different Speedfusion subtunnels treated?
In my setup, I have a separate SpeedFusion subtunnel for each of these three traffic classes: MC traffic has WAN Smoothing enabled, while BC and BE traffic have adaptive FEC enabled. In addition, there is BE local internet break-out traffic, which, according to the scheduler implementation, has the lowest priority.
How do I make the distinction between MC and BC traffic and ensure MC always has the highest priority, even starving out all the rest when necessary? Especially when only one healthy or saturated WAN remains, I don't see how to give priority to MC traffic.
Am I correct in my reasoning about the Peplink implementation? What could be your recommendations? Thanks!