Has anyone tried to get, say, 50-100 Mbps upload over multiple bonded Starlinks?
Do you think 100 Mbps upload would even be physically possible with ~6 dishes or would that many in one location just saturate the frequency band/spot beam?
The answer, as always, is: it depends.
We have seen 100 Mbps upload on a single Starlink HP dish before with SpeedFusion optimisation.
I assumed it was due to the very limited saturation of dishes in the area combined with high levels of satellite transit/density. However, where I could repeatedly demo that for weeks, it now no longer works and upload is limited to 22-25 Mbps. So either Starlink’s network optimisation/traffic management has changed (again), or they have redeployed the under-utilised satellites (again), or they changed something else entirely, and that continuous change is part of the challenge.
When we bond multiple Starlinks now we get mixed results depending on location of course (and so satellite and network saturation) but also based on inherent Starlink network characteristics.
Nobody talks about all the packet loss over Starlink and there is a ton of it.
The nature of the constellation and the dish moving its beam between satellites frequently, combined with the 15s cycle of network routing updates (which can trigger the moves), means that the available bandwidth and latency vary continuously.
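You can watch this cycle yourself by logging RTT once a second and looking for latency steps at roughly 15s boundaries. A minimal sketch, assuming Linux-style ping flags and an arbitrary anycast target (adjust both for your platform):

```python
#!/usr/bin/env python3
"""Log per-second RTT so the ~15s satellite/handoff pattern shows up
as steps in latency. Illustrative sketch only; the target host and
ping flags are assumptions, not Starlink-specific values."""
import re
import subprocess
import time

HOST = "1.1.1.1"  # any stable, nearby anycast target works

while True:
    # One ICMP echo with a 1s timeout (Linux ping flags; -W differs on macOS)
    out = subprocess.run(
        ["ping", "-c", "1", "-W", "1", HOST],
        capture_output=True, text=True,
    ).stdout
    match = re.search(r"time=([\d.]+)", out)
    rtt = f"{match.group(1)} ms" if match else "loss"
    print(f"{time.strftime('%H:%M:%S')}  {rtt}", flush=True)
    time.sleep(1)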
Here is a 40s upload TCP speedtest over PepVPN where you can see three satellite changes, one every 15s, and with each change a different amount of upload bandwidth and a different latency:
If we look at the PepVPN graph, we see what’s happening in more detail:
So: when you have a single dish you will see massive variation in available upload bandwidth and latency over time. That in itself is a challenge. When you add a second dish and want to bond the two, you hit the usual issues that arise when combining links which act ‘differently’ from each other at any given moment (even if they are the same type with the same overall characteristics) into a single logical link.
One Starlink might be at high latency while the other sits at lower latency; packet loss can be highly variable too. The outcome is that SpeedFusion Dynamic Weighted Bonding will often tie itself in a knot trying to favour the lowest-latency, lowest-packet-loss paths.
This is because on other types of link, rising latency usually means congestion, so DWB backs off on that link. A link showing packet loss is one we typically don’t want to use either, so DWB treats it unfavourably too, and traffic favours the lowest-loss link as well.
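Peplink doesn’t publish DWB’s internals, but the behaviour described above can be modelled with a toy weighting function. This is a hypothetical illustration of why all-Starlink bonding ‘ties itself in a knot’, not the real algorithm:

```python
"""Toy model: latency/loss-sensitive weighting collapses traffic onto
whichever dish momentarily looks best, then flips on the next satellite
change. Hypothetical illustration only, not Peplink's DWB algorithm."""

def link_weight(latency_ms: float, loss: float) -> float:
    # Rising latency is read as congestion, loss as a bad path,
    # so both push the weight down sharply. `loss` is a fraction.
    return 1.0 / (latency_ms * (1.0 + 10.0 * loss))

def split_traffic(links: dict[str, tuple[float, float]]) -> dict[str, float]:
    weights = {name: link_weight(lat, loss) for name, (lat, loss) in links.items()}
    total = sum(weights.values())
    return {name: round(w / total, 2) for name, w in weights.items()}

# Two dishes sampled 15s apart: the preferred path flips completely.
print(split_traffic({"dish_a": (45, 0.00), "dish_b": (90, 0.02)}))
print(split_traffic({"dish_a": (95, 0.03), "dish_b": (40, 0.00)}))
```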
To mitigate these things we turn on ‘ignore packet loss’ in the tunnel and enable FEC when bonding just the Starlink connections. This gives us the best outcome: jitter and packet loss are reduced, the link is more stable, and the user experience of the connection is greatly improved. Hot failover happens neatly, and downloads and uploads complete faster because there are fewer retransmits of lost packets. Life is generally better.
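For intuition on why FEC cuts retransmits: the simplest possible scheme sends one XOR parity packet per group, so the receiver can rebuild any single lost packet locally instead of waiting for a resend. A minimal sketch (SpeedFusion’s actual FEC design is not published, so this is purely illustrative):

```python
"""Minimal XOR-parity FEC sketch: one parity packet per group lets the
receiver rebuild any single lost packet without a retransmit."""
from functools import reduce

def xor_all(packets):
    # XOR equal-length packets together, byte by byte
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received, parity):
    """received: packets in order, with None in place of a lost one."""
    missing = [i for i, p in enumerate(received) if p is None]
    assert len(missing) <= 1, "XOR parity repairs at most one loss per group"
    if missing:
        survivors = [p for p in received if p is not None]
        received[missing[0]] = xor_all(survivors + [parity])
    return received

group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = xor_all(group)
print(recover([b"pkt0", None, b"pkt2", b"pkt3"], parity))  # pkt1 rebuilt
```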
But you don’t get the ‘most bandwidth’ that way. If you are bonding Starlinks alone, a pair isn’t enough: you need more than two dishes to get stability AND bandwidth improvements. That gives you more than one alternate path and a higher chance of getting traffic through.
I think 4 dishes would be good, 6 would be better. We had 12 dishes on an SDX back in the early days of testing, but that was long before Dynamic Weighted Bonding, so bonding results were poor.
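The intuition behind ‘more dishes is better’ is just independent-failure arithmetic. A back-of-envelope sketch (the 20% figure is an assumption for illustration, and co-located dishes sharing a cell/beam are never fully independent, so treat this as an optimistic bound):

```python
# Chance that ALL dishes are in a degraded state at the same moment,
# assuming each dish is independently degraded 20% of the time.
p = 0.20
for n in (1, 2, 4, 6):
    print(f"{n} dish(es): all degraded {p**n:.4%} of the time")
```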
By far the best way to get really reliable bandwidth is actually to combine Starlink and 5G. Different technology types, operators and networks provide resilience, and 5G can do a great job of boosting upload bandwidth when paired with a Starlink for download.
If you can’t get 5G (because you are in the middle of the ocean or the desert), bonding Starlink alone still has significant value, because it improves overall connection quality and reliability, which noticeably improves the user experience.
So what I tend to recommend, as the sweet spot of performance and reliability, is real-time services and VIP traffic over bonded Starlink, with general internet access load-balanced across the links.
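Conceptually that policy is just a classifier: pin VIP/real-time flows to the bonded tunnel, round-robin everything else across the raw WANs. A toy sketch (the port list, link names and round-robin choice are illustrative assumptions, not Peplink outbound-policy syntax):

```python
"""Toy traffic-steering policy: real-time/VIP flows go via the bonded
tunnel, bulk traffic is load-balanced across the raw Starlink WANs."""
import itertools

WANS = itertools.cycle(["starlink1", "starlink2", "starlink3"])
REALTIME_PORTS = {5060, 5061, 3478}  # e.g. SIP and STUN/TURN

def pick_path(dst_port: int, dscp_ef: bool) -> str:
    if dscp_ef or dst_port in REALTIME_PORTS:
        return "speedfusion_tunnel"   # bonded: stability over raw speed
    return next(WANS)                 # load-balanced: raw aggregate speed

print(pick_path(5060, dscp_ef=False))  # -> speedfusion_tunnel
print(pick_path(443, dscp_ef=False))   # -> starlink1 (then 2, 3, ...)
```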
Would you please share what settings you used on SF to achieve this upload?
Many thanks