Dear Peplink Friends:
I would like to share the following SpeedFusion tunnel scenarios using Peplink equipment and request your support in clarifying some performance differences observed during testing.
Scenario 1 – On-Premises SDX Hub with 10Gbps WAN
- A Peplink Balance SDX (hub) is connected to a 10Gbps WAN interface with a public IP address.
- This SDX is configured with an unencrypted SpeedFusion tunnel (according to the datasheet, it supports up to 1Gbps throughput in this mode).
- A remote SDX device is configured with 5 Starlink WANs, each set to DHCP, and bonded through the SpeedFusion tunnel toward the main SDX hub.
- When performing NPERF tests from the LAN of the remote SDX, the maximum throughput achieved through the tunnel was around 300 Mbps total using all 5 Starlink connections.
- However, when testing each individual Starlink WAN outside of the tunnel, each provides around 250 Mbps of download throughput.
- The SpeedFusion tunnel is set to unencrypted with dynamic bonding, specifically to maximize bandwidth.
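To make the Scenario 1 gap concrete, here is a small illustrative calculation using the measured figures above. The ~10% bonding overhead factor is an assumption for the sake of the sketch, not a measured or Peplink-documented value:

```python
# Illustrative arithmetic for Scenario 1, using the figures from the tests above.
# The 5 Starlink WANs each deliver ~250 Mbps individually; the overhead factor
# below is an assumption, not a measured value.

wan_count = 5
per_wan_mbps = 250          # measured per-Starlink download, outside the tunnel
bonding_overhead = 0.10     # assumed tunnel/bonding overhead

expected_mbps = wan_count * per_wan_mbps * (1 - bonding_overhead)
observed_mbps = 300         # measured through the tunnel toward the SDX hub

efficiency = observed_mbps / (wan_count * per_wan_mbps)
print(f"Expected (with assumed overhead): ~{expected_mbps:.0f} Mbps")
print(f"Observed: {observed_mbps} Mbps ({efficiency:.0%} of raw WAN capacity)")
```

Even with a generous overhead allowance, the tunnel is delivering roughly a quarter of the raw aggregate capacity, which is the gap we would like help explaining.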
Scenario 2 – FusionHub in AWS and GCP (Chile Region)
- A FusionHub instance was deployed in AWS and another in Google Cloud (GCP), both hosted in Chile.
- The same 5 remote Starlink WANs were bonded toward the FusionHub instances.
- In this case, NPERF testing on the remote SDX LAN showed 650 Mbps of download throughput, indicating successful bonding toward the cloud FusionHub.
Scenario 3 – 2 WAN Fiber Lines on Remote SDX
- A separate test was performed using two fiber WAN connections on the remote SDX.
- The bonded traffic reached 500 Mbps (250 Mbps per WAN) toward the same SDX hub with the 10Gbps WAN and public IP.
- This test confirms that the SDX hub is capable of handling high throughput when multiple WANs are used.
Scenario 4 – 4 Starlinks Routed to 4 Separate Public IPs on SDX Hub
- Four Starlink WANs were configured, each mapped to a different public IP on the SDX hub using 4 separate WAN interfaces (2x SFP+, 2x 1GE).
- A single SpeedFusion tunnel was created with the following mapping:
  WAN1 → Starlink1, WAN2 → Starlink2, and so on.
- NPERF testing from the LAN side of the remote SDX reached a peak of 540 Mbps, which shows better aggregation than Scenario 1.
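The four scenarios above can be summarized side by side. The sketch below simply computes, for each scenario, the observed bonded throughput against the raw sum of the member WANs (using the measured ~250 Mbps per-WAN figure); the scenario labels are ours:

```python
# Observed bonded throughput vs. raw sum of member WANs, per scenario.
# All figures (Mbps) come from the tests described above.
scenarios = {
    "1: 5x Starlink to SDX hub (single public IP)": (300, 5 * 250),
    "2: 5x Starlink to FusionHub (AWS/GCP)": (650, 5 * 250),
    "3: 2x fiber to SDX hub": (500, 2 * 250),
    "4: 4x Starlink to SDX hub (4 public IPs)": (540, 4 * 250),
}

for name, (observed, raw_sum) in scenarios.items():
    print(f"Scenario {name}: {observed} of {raw_sum} Mbps "
          f"({observed / raw_sum:.0%})")
```

The single-public-IP Starlink case (Scenario 1) is the clear outlier: every other topology achieves roughly half or more of the raw aggregate, while Scenario 1 stays near a quarter.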
Main Questions:
- Why does Scenario 1 (5 Starlinks bonded toward a single public IP on the SDX hub) deliver such limited performance (~300 Mbps total), even though each Starlink individually reaches ~250 Mbps and traffic routes through gateways in Chile?
- Why does this bottleneck disappear when bonding toward FusionHub in AWS or GCP, even though the SDX hub has the capacity to support this bandwidth?
Request:
I would appreciate your guidance and technical input to help us provide a clear explanation to the client and prepare the final report with proper conclusions and recommendations.
Thank you very much in advance for your support.
Best regards,
Juan Muñoz Franco
Acanto Chile
+56998861406