[8.2.0] SpeedFusion Dynamic Weighted Bonding Explained

Hello from the SpeedFusion team!

As Firmware 8.2.0 is no longer in beta, we would like to share some updates with you about the new SpeedFusion bonding algorithm: Dynamic Weighted Bonding.

By default, when you create a new SpeedFusion profile, you’re still using the original Bonding algorithm, which is completely fine. However, if you feel that bonding is not utilizing your WANs to their full capacity, you can give Dynamic Weighted Bonding a try.

Starting from firmware 8.2.0, there is a new “Traffic Distribution” table, and the “Dynamic Weighted Bonding” option sits right below the default “Bonding”.

One thing to note is that the policy only controls the bonding behavior for the upload direction, so it’s best to configure it on both peers. The exception is SpeedFusion Cloud: activating the policy there automatically applies it to both the upload and download directions.

To activate Dynamic Weighted Bonding from InControl 2, add or modify a SpeedFusion profile to open the Profile Options screen. Check the “Show advanced settings” box in the bottom left corner to display the SpeedFusion Mode setting, then select Dynamic Weighted Bonding to activate it.

OK, so what is Dynamic Weighted Bonding (DWB)?

This is our latest effort to make bonding work better in environments where the default bonding algorithm doesn’t do well, particularly when bonding multiple LTE connections with significant bufferbloat issues.

When talking about bufferbloat, it’s all about latency. If a WAN’s latency increases while sending data, the DWB algorithm detects it and reduces the percentage of packets going to that link to keep it from becoming too congested.
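
To make the idea concrete, here is a minimal Python sketch of what latency-driven weight reduction could look like. The Link class, the congestion_factor, and the scaling rule are illustrative assumptions on my part, not the actual DWB implementation:

```python
# Illustrative sketch only; not Peplink's implementation.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    idle_latency_ms: float     # latency measured while the link is idle
    current_latency_ms: float  # latest latency sample
    weight: float = 1.0        # share of upload packets sent to this link

def rebalance_weights(links: list[Link], congestion_factor: float = 2.0) -> None:
    """Scale each link's weight down as its latency grows above idle."""
    for link in links:
        ratio = link.current_latency_ms / link.idle_latency_ms
        if ratio > congestion_factor:
            # Latency has grown past the congestion threshold:
            # shrink this link's share so its queue can drain.
            link.weight = max(0.05, link.weight * congestion_factor / ratio)
        else:
            # Latency is healthy: let the weight recover gradually.
            link.weight = min(1.0, link.weight * 1.05)

links = [Link("ISP A", 50, 180), Link("ISP B", 40, 45)]
rebalance_weights(links)
for link in links:
    print(f"{link.name}: weight={link.weight:.2f}")  # ISP A shrinks, ISP B stays
```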

DWB also identifies bad links. When a link’s performance or stability is too poor for it to be part of the bonded tunnel, no more user packets are sent to that WAN-to-WAN link. This prevents the overall performance from being dragged down by a WAN that is not performing well, such as a moving mobile router whose LTE connection enters an area with a weak signal.

Let’s see DWB in action. In the examples below, I’ll simulate different scenarios and explain how DWB behaves in each one.


Scenario 1) Extreme bufferbloat on ISP

ISP A - 20Mbps / 50ms idle latency (Red)
ISP B - 10Mbps / 40ms idle latency (Yellow)

Default Bonding on the left | Dynamic Weighted Bonding on the right



Default Bonding: If you test it with TCP streams, latency keeps increasing without a single packet being lost. The moment the buffer on the ISP side is exhausted, latency hits its maximum (about 2500ms in this scenario) and packets start dropping (the red triangles in the chart). Because latency is so high, TCP retransmission takes a long time to kick in, so the overall throughput is lower and fluctuates a lot.

Dynamic Weighted Bonding: The new algorithm takes care of bufferbloat by monitoring the real-time latency, comparing it to the idle latency, and dropping packets before latency climbs too high. The sender then learns about the packet loss as quickly as possible, which results in stable overall throughput for the TCP streams.
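
In its simplest form, the drop decision can be pictured as a threshold test like the Python sketch below. This is only an illustration; the 2x factor mirrors the default Congestion Latency Level described later in this post, and the real algorithm is more involved:

```python
# Hedged sketch: shed a packet once latency drifts too far above idle, so
# the TCP sender backs off before the ISP's buffer fills up.
def should_drop_early(idle_latency_ms: float,
                      current_latency_ms: float,
                      threshold_factor: float = 2.0) -> bool:
    """Return True when the link looks congested enough to shed a packet."""
    return current_latency_ms > idle_latency_ms * threshold_factor

# With 50ms idle latency, packets start dropping once latency passes 100ms,
# long before the ~2500ms bufferbloat ceiling seen in Scenario 1.
print(should_drop_early(50, 120))  # True  -> drop now, let TCP react early
print(should_drop_early(50, 80))   # False -> keep sending
```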


Scenario 2) When a single connection is dropping packets

ISP A - 20Mbps / 50ms idle latency (Red)
ISP B - 10Mbps / 40ms idle latency (Yellow)
ISP C - 10Mbps / 40ms idle latency (Green)

In this scenario we run a 180-second TCP throughput test:

1st 60s: no packet loss
2nd 60s: 2% packet loss on ISP C
3rd 60s: no packet loss

Default Bonding on the left | Dynamic Weighted Bonding on the right



Default Bonding: When ISP C starts losing packets, the overall throughput is seriously affected and keeps decreasing from 35Mbps to 10Mbps. The default bonding algorithm will minimize the usage of this poorly performing WAN, but it won’t completely shut it down, so the packet loss cannot be avoided, and it will negatively affect overall performance.

Dynamic Weighted Bonding: With DWB’s ability to identify bad links, when ISP C starts to lose packets, the weight of this WAN keeps decreasing until it crosses a threshold and no more user data is sent to it. From the sender’s perspective, there is no packet loss once DWB temporarily suspends ISP C. In addition, overall throughput drops by just 9Mbps, from 35Mbps to 26Mbps, where 26Mbps is effectively the bonded maximum of ISP A and B.

While ISP C is temporarily suspended by the DWB algorithm, it is still used to send redundant data that monitors the WAN’s status, so that when the link’s performance is good enough again (at the 120-second mark of the test), ISP C can be re-bonded to the group (green line) and overall throughput returns to the normal 35Mbps.
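
Here is a rough Python sketch of that suspend/probe/re-bond cycle. The LinkMonitor class and the 2%/0.5% thresholds are assumptions chosen to mirror this test, not the firmware’s actual values:

```python
# Illustrative bad-link state machine: suspend after sustained loss, keep
# probing with redundant data, re-bond once the link recovers.
class LinkMonitor:
    LOSS_SUSPEND = 0.02   # suspend above 2% loss (assumed threshold)
    LOSS_REJOIN = 0.005   # re-bond once loss falls below 0.5% (assumed)

    def __init__(self, name: str):
        self.name = name
        self.suspended = False

    def update(self, loss_rate: float) -> str:
        if not self.suspended and loss_rate >= self.LOSS_SUSPEND:
            # Stop sending user data; only redundant probe traffic remains,
            # so the sender never sees this link's packet loss.
            self.suspended = True
            return f"{self.name}: suspended, probing with redundant data"
        if self.suspended and loss_rate <= self.LOSS_REJOIN:
            # Probes show the link has recovered: re-bond it to the group.
            self.suspended = False
            return f"{self.name}: re-bonded, user traffic resumes"
        return f"{self.name}: {'suspended' if self.suspended else 'active'}"

monitor = LinkMonitor("ISP C")
for loss in [0.0, 0.02, 0.03, 0.0]:   # roughly mirrors the 180s test phases
    print(monitor.update(loss))
```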


Scenario 3) Bonding a fast and a slow WAN

ISP A - 50Mbps / 50ms idle latency (Red)
ISP B - 5Mbps / 40ms idle latency (Yellow)

Default Bonding on the left | Dynamic Weighted Bonding on the right



Default Bonding: The strategy for selecting which WAN sends each packet may congest a WAN that is much slower than the others. When throughput is high, a series of packets may burst onto the slower WAN. Even though the average bandwidth used is still within that WAN’s capacity, the burst can congest it for a short while, generating higher latency or packet loss and hurting overall performance.

Dynamic Weighted Bonding: The redesigned strategy ensures that no single WAN-to-WAN link receives a burst of packets. A slow WAN can be bonded with a fast WAN without exceeding its capacity, making much better use of all the available bandwidth.
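
One common way to enforce “no bursts beyond a link’s capacity” is per-WAN pacing with a token bucket, sketched below. The PacedLink class, the bucket sizes, and the dispatch order are illustrative assumptions, not DWB’s actual scheduler:

```python
# Token-bucket pacing sketch: a link only accepts a packet when its bucket
# has enough tokens, so the maximum burst is capped by the bucket size.
import time

class PacedLink:
    def __init__(self, name: str, rate_mbps: float, burst_bytes: float = 15000):
        self.name = name
        self.rate_bytes_per_s = rate_mbps * 1_000_000 / 8
        self.capacity = float(burst_bytes)   # small bucket = small max burst
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def try_send(self, packet_bytes: int) -> bool:
        """Admit the packet only if the bucket holds enough tokens."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate_bytes_per_s)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False  # sending now would burst past this WAN's capacity

fast = PacedLink("ISP A (50Mbps)", 50)
slow = PacedLink("ISP B (5Mbps)", 5, burst_bytes=6000)
for i in range(10):
    # Naive demo dispatch: offer packets to the slow link first so its burst
    # cap is visible; a real scheduler would weight links by capacity.
    if slow.try_send(1500):
        print(f"packet {i}: {slow.name}")
    elif fast.try_send(1500):
        print(f"packet {i}: {fast.name}")
    else:
        print(f"packet {i}: held back (paced)")
```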


Scenario 4) Bonding a lot of WANs

Using 8x WANs in this example.

ISP A - 15Mbps / 50ms idle latency (Red)
ISP B - 20Mbps / 50ms idle latency (Yellow)
ISP C - 25Mbps / 50ms idle latency (Green)
ISP D - 30Mbps / 50ms idle latency (Purple)
ISP E - 15Mbps / 50ms idle latency (Blue)
ISP F - 20Mbps / 50ms idle latency (Pink)
ISP G - 25Mbps / 50ms idle latency (Light Green)
ISP H - 30Mbps / 50ms idle latency (Dark Red)

Default Bonding on the left | Dynamic Weighted Bonding on the right


Default Bonding: When the number of WANs increases, we can fall into the same situation as Scenario 3, because the overall throughput is much greater than any single WAN’s capacity. This occasionally congests a WAN and introduces packet loss, so the overall throughput fluctuates a lot. In the test above, the overall throughput is 130Mbps.

Dynamic Weighted Bonding: The new algorithm is very sensitive to latency changes, so packets are carefully dispatched to each WAN without congesting any of them. You can see a very stable overall throughput in the screenshot above, at 153Mbps.


Fine-tuning Dynamic Weighted Bonding
As you can see in the Traffic Distribution table, there are five configurable options for Dynamic Weighted Bonding. Here is how they work.

  1. Congestion Latency Level

DWB treats a WAN as congested based on the latency change compared to its idle latency. For example, when a WAN’s idle latency is 30ms, DWB by default treats it as congested once the latency rises above 60ms. This behavior may not suit every ISP, so setting this option to Low lowers the latency threshold (to 45ms in this example), while setting it to High tolerates higher latency (up to 75ms).

If the dynamic latency threshold doesn’t fit your situation, you can also define a hard threshold for each WAN in the Cut-off Latency field. For example, with a Cut-off Latency of 120ms, ISP B would be treated as congested whenever its latency exceeds 120ms.
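
Putting the two together, the effective threshold could be modeled as in the sketch below. The Low/Normal/High multipliers are inferred from the 30ms example above, and the exact firmware behavior may differ:

```python
# Assumed model of the congestion threshold; not the firmware's code.
LEVEL_FACTOR = {"low": 1.5, "normal": 2.0, "high": 2.5}

def congestion_threshold_ms(idle_latency_ms: float,
                            level: str = "normal",
                            cutoff_latency_ms: float | None = None) -> float:
    if cutoff_latency_ms is not None:
        # A hard per-WAN Cut-off Latency overrides the dynamic threshold.
        return cutoff_latency_ms
    return idle_latency_ms * LEVEL_FACTOR[level]

print(congestion_threshold_ms(30, "low"))                   # 45.0
print(congestion_threshold_ms(30, "high"))                  # 75.0
print(congestion_threshold_ms(40, cutoff_latency_ms=120))   # 120 (ISP B case)
```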

  2. Ignore Packet Loss Event

If your WAN has a known packet loss problem and you don’t want DWB to treat it as congestion, you can enable this option. Any packet loss on that WAN will be treated as normal and DWB won’t reduce traffic sent to it.

  3. Disable Bufferbloat Handling

DWB monitors latency changes and avoids bufferbloat by dropping packets before latency gets worse. In some scenarios, however, you may prefer higher latency over packet loss, for example when streaming video; enabling this option disables the bufferbloat handling.

  4. Disable TCP ACK Optimization

DWB can duplicate TCP ACK packets. They are small, so duplicating them doesn’t consume much bandwidth, yet it makes packet retransmission far more reliable in a lossy tunnel. If you want to avoid even this small amount of extra bandwidth, enable this option and TCP ACK packets will not be duplicated.
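
Conceptually, the duplication looks like the sketch below; the Link class and dispatch() helper are invented for illustration and are not the firmware’s internals:

```python
# ACK duplication sketch: tiny TCP ACKs are copied onto every active link
# so the loss of any single copy is harmless.
class Link:
    def __init__(self, name: str):
        self.name = name

    def send(self, packet: bytes) -> None:
        print(f"{self.name} <- {len(packet)} bytes")

def dispatch(packet: bytes, links: list, is_tcp_ack: bool,
             ack_optimization: bool = True) -> None:
    if is_tcp_ack and ack_optimization:
        # ACKs are small, so duplicating them costs little bandwidth while
        # keeping retransmission signaling alive on a lossy path.
        for link in links:
            link.send(packet)
    else:
        links[0].send(packet)  # normal data: one link chosen by the scheduler

links = [Link("ISP A"), Link("ISP B")]
dispatch(b"\x00" * 52, links, is_tcp_ack=True)     # duplicated to both links
dispatch(b"\x00" * 1500, links, is_tcp_ack=False)  # sent once
```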

  5. Packet Jitter Buffer

This is a dynamic buffer that tries to reduce packet jitter. The default is 150ms, which means a packet may be delayed by at most 150ms (when required); setting the value to 0 disables the buffer. In field trials we found that this buffer can significantly improve bonding performance when many LTE/5G WANs are combined (more than 4). The default 150ms should be close to optimal when all WANs have similar latency, but if the latency difference is large (e.g. WAN1 at 30ms and WAN2 at 250ms), increasing the buffer may improve performance.

Note: modifying the value of “Packet Jitter Buffer” will disable “Receive Buffer”; these two features cannot be enabled at the same time.
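
For intuition, here is a minimal jitter-buffer sketch: it releases packets in sequence order but never holds one longer than the configured maximum delay. The heap-based approach is an illustrative choice, not the firmware’s data structure:

```python
# Jitter-buffer sketch: reorder out-of-order packets, capping the delay.
import heapq
import time

class JitterBuffer:
    def __init__(self, max_delay_ms: float = 150.0):
        self.max_delay = max_delay_ms / 1000.0
        self.heap: list[tuple[int, float, bytes]] = []  # (seq, arrival, data)
        self.next_seq = 0

    def push(self, seq: int, data: bytes) -> None:
        heapq.heappush(self.heap, (seq, time.monotonic(), data))

    def pop_ready(self) -> list[bytes]:
        """Release in-order packets, plus any packet held past max_delay."""
        out = []
        now = time.monotonic()
        while self.heap:
            seq, arrival, data = self.heap[0]
            in_order = seq == self.next_seq
            expired = (now - arrival) >= self.max_delay
            if not (in_order or expired):
                break  # keep waiting for the missing packet, within budget
            heapq.heappop(self.heap)
            self.next_seq = seq + 1  # give up on any gap once we expire
            out.append(data)
        return out

buf = JitterBuffer(max_delay_ms=150)
buf.push(1, b"arrived early on the fast WAN")  # seq 0 still in flight
buf.push(0, b"arrived late on the slow WAN")
print(buf.pop_ready())  # both released, back in order
```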


What’s next?

The examples above should cover the most obvious differences between Default Bonding and Dynamic Weighted Bonding. This doesn’t mean DWB is going to replace Default Bonding: there are other scenarios, not shown above, where Default Bonding gives better results, and I think that is still true for the majority of deployments. In Firmware 8.2.0, Dynamic Weighted Bonding is worth a try if Default Bonding doesn’t perform as you expect.

Please share your experience with us so that we can make DWB better. More optimizations are on the way, and when significant improvements are ready, I’ll be sure to keep you posted.
