LTE: SpeedFusion is slower than the fastest link in the bonding?

@blade, 8.1.0 RC3 released!

Please let me know the result of the new algo. For more complete information, please use the “Export” button on the SpeedFusion Chart screen to save the PNG file; saving two separate files, one from the local and one from the remote device, will make it easier for me to compare the results. Thanks!

Oh, and for the throughput test I’d suggest a longer duration (at least 30 seconds) to get a better understanding of how the network performs.


Hi @mystery, the new Traffic Distribution Policy, “Dynamic Weighted Bonding”, is available for all devices with SpeedFusion Bonding capability. However, if I remember correctly the BR1 MK2 is a Hot Failover device, in which case the new algo won’t apply. One option is SpeedFusion Cloud: if you have a SpeedFusion Cloud license we can activate bonding for the cloud connection. More details can be found here, and we have a free trial tier as well:
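For readers unfamiliar with the distinction between the two modes: hot failover keeps only the highest-priority healthy link carrying traffic, while bonding sends traffic over all healthy links at once. A toy sketch of that difference (my own illustration, not Peplink’s code):

```python
# Toy illustration of hot failover vs. bonding link selection.
# Not Peplink's implementation; just the conceptual difference.

def pick_links(mode, links):
    """links: list of (name, healthy) tuples in priority order.
    Returns the names of links that would carry traffic."""
    if mode == "hot_failover":
        # Only the highest-priority healthy link is active.
        for name, healthy in links:
            if healthy:
                return [name]
        return []
    if mode == "bonding":
        # All healthy links carry traffic simultaneously.
        return [name for name, healthy in links if healthy]

links = [("wan1", False), ("lte1", True), ("lte2", True)]
print(pick_links("hot_failover", links))  # -> ['lte1']
print(pick_links("bonding", links))       # -> ['lte1', 'lte2']
```

A DWB-style policy only makes sense in the second mode, since there is nothing to weight when a single link is active.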


Thank you, @Steve! Here are the measurements; I have now set the cutoff to 500 ms:

Dynamic Weighted Bonding:

Bonding:

I made some measurements. I think the download performance with Dynamic Weighted Bonding is not satisfactory:

The MK2 does SpeedFusion Hot Failover and Smoothing. Is there a difference between enabling Dynamic Weighted Bonding for SpeedFusion Cloud and for FusionHub Solo? It would be awesome if you could enable it for the FusionHub Solo cloud connection. Otherwise, could you please let me know why you can’t?

@blade, please leave all the parameters (e.g. cut-off latency) empty. The new DWB algo doesn’t work like the default bonding algorithm; many parameters have a different meaning, and we don’t yet have a new set of parameters designed for DWB, which is why we are still hiding it in the support.cgi page. The cut-off latency usually cannot be set too high, and the default empty value usually gives the best result. I’d suggest leaving all the parameters empty and testing again.

Also, is it possible to enable Remote Assistance (RA) so we can check directly on your device? That way we can help fine-tune and see whether we can further optimize the algorithm. After enabling RA, you can send me the Serial Number via forum message, or create a support ticket and I’ll pick it up; please don’t post it publicly on the forum. Thanks :slight_smile:


@Steve, thanks a lot! I removed the cutoff values, but it did not improve the download bandwidth yet. I have created a ticket and enabled remote assistance.


Thanks @blade! I’ll follow up with you there in the ticket.


@Steve: Would you kindly let us know what was found, if there’s a lesson for the rest of us?


@Rick-DC, for sure. Unfortunately, last Friday when I had a chance to check the devices it was likely peak hours: the total bandwidth measured by WAN Analysis (run plain on the WANs, without SpeedFusion) was only ~75 Mbps, and at that time SpeedFusion with the new Dynamic Weighted Bonding algorithm was able to achieve the same result, 75 Mbps, so it was working well and didn’t show the problem.

So the problem may only happen at off-peak hours.

According to @blade, off-peak hours are 3am to 7am CET, but the setup will not be available for the next 3 weeks, so we can’t continue the investigation before it is back. I’ll definitely update this thread when we have news.


Since it’s 3 weeks until you can test again: I’m running 4940 on FusionHub and 4942 on a MAX Transit. I set up a 3rd sub-tunnel, so I have these tunnels configured…

  1. WAN Smoothing all defaults
  2. Bonded, FEC Low, 65ms cutoff diff
  3. DWB all defaults

All use T-Mobile and AT&T as #1 priority.
All tests are 20-second downloads.

WAN Analysis tool

DWB

Bonded

If there is a test I can run or data I can provide while you wait, please let me know. RA is already active for a separate issue related to the cellular modem.

DWB is definitely getting better; I saw it burst much higher than Bonded. My take on them currently: if your goal is highest throughput, DWB wins; if the goal is balancing the traffic and loading up both pipes, then Bonded appears to be better at making sure both pipes are used.


Thanks @Legionetz! Can you enable RA for the remote FusionHub as well, so I can log in and take a look?


@Steve Done and done :slight_smile:


@Legionetz, you are using a MAX Transit, and its maximum SpeedFusion throughput is 100 Mbps, so you are hitting the hardware limit of the device. I have still tried to modify some parameters, and you can see it is now running more consistently at ~100 Mbps:

  • Cut-off Latency is set to 120 ms to allow the latency to inflate a bit more before the bufferbloat-prevention logic kicks in.
  • Enabled “QoS > DSL/Cable Optimization” to optimize TCP ACK handling.
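For intuition about the first tweak, here is a simplified sketch (my own illustration, not Peplink’s actual DWB algorithm) of how a cut-off latency threshold can steer traffic away from a link whose latency has inflated past its baseline, which is the usual symptom of bufferbloat:

```python
# Simplified illustration (not Peplink's implementation) of steering
# traffic away from a link whose latency has inflated past a cutoff.

def distribute_weights(links, cutoff_ms=120):
    """links: list of dicts with 'name', 'capacity_mbps', 'latency_ms'
    (current) and 'baseline_ms' (idle latency).
    Returns per-link send weights as fractions summing to 1."""
    usable = []
    for link in links:
        inflation = link["latency_ms"] - link["baseline_ms"]
        if inflation < cutoff_ms:   # latency still within the cutoff
            usable.append(link)
    total = sum(l["capacity_mbps"] for l in usable) or 1
    return {l["name"]: l["capacity_mbps"] / total for l in usable}

links = [
    {"name": "tmobile", "capacity_mbps": 60, "latency_ms": 45,  "baseline_ms": 40},
    {"name": "att",     "capacity_mbps": 40, "latency_ms": 200, "baseline_ms": 50},
]
print(distribute_weights(links, cutoff_ms=120))
# att's latency is 150 ms above baseline (over the 120 ms cutoff),
# so all traffic shifts to tmobile: {'tmobile': 1.0}
```

A higher cutoff lets a queuing link stay in the pool longer, trading some latency for more aggregate throughput, which matches the effect described above.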


Thank you very much. I really appreciate the efforts to give us every drop of performance from these devices. You and the rest of the Peplink team continue to go above and beyond. Thank you!


Dear All,

Some updates.
The remaining “missing” bandwidth compared to the 100 Mbit/s limit of the Balance 20X comes from the number of parallel TCP streams. The left bar shows SpeedFusion with 4 streams, the right bar with 1 stream:

I learned that with real traffic (not the testing tool), even a single stream would reach the 100 Mbit/s.
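The stream-count effect is consistent with the usual TCP bandwidth-delay-product ceiling: a single stream cannot exceed its window size divided by the round-trip time. A small sketch, with illustrative window and RTT numbers (assumptions, not values measured on this setup):

```python
# Why one TCP stream may not fill the pipe: a single stream's throughput
# is capped by window_size / RTT (the bandwidth-delay product).
# The window and RTT values below are illustrative assumptions.

def max_tcp_throughput_mbps(window_bytes, rtt_ms):
    """Upper bound for a single TCP stream, in Mbit/s."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

# A 256 KiB window over a 50 ms path caps one stream at ~41.9 Mbit/s:
single = max_tcp_throughput_mbps(256 * 1024, 50)
print(round(single, 1))  # -> 41.9

# Four parallel streams lift the aggregate until the tunnel's own
# 100 Mbps hardware limit becomes the bottleneck:
print(min(4 * single, 100.0))  # -> 100.0
```

With real traffic, window scaling and many concurrent connections usually hide this per-stream cap, which fits the observation above that a single real-world stream still reached 100 Mbit/s.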

Thanks a lot to @Steve for this final analysis, and to everyone who helped me with advice.
Overall I pretty much got what this product promises, and I’m happy with it.


Thank you very much @blade. :+1:t2: As your 100 Mbps throughput is actually hitting the hardware limit of the Balance 20X, I believe the result could be even better with a higher-end model. This makes me think we should have an extra status on the PepVPN chart for when throughput is bound by hardware resources. I’ll think about it.


I don’t seem to be able to get the RC3 (now RC4) beta firmware. The links don’t have the firmware present. I would like the additional SpeedFusion features and settings. I belong to the beta program. Please advise.

The final version was released, I thought?

Ah. I’m on 8.1.0. I must have gotten the numbers mixed in my head as I thought the beta was newer. Thanks.