So I stumbled upon a fix to a problem I didn’t know I had, and I’m sharing in case someone else sees the same thing in their SpeedFusion tunnel. My two WANs have MTUs of 1500 and 1430. The SpeedFusion tunnel appears to let full-size packets inside it and fragments where needed, which was resulting in a high fragmentation rate on the WAN with the 1430 MTU. I could see this on the SpeedFusion graphing page. I changed a PC on my network to an MTU of 1350 (1430 minus 80 for SpeedFusion overhead), and that worked for that one device’s traffic, but I have lots of devices on the network. After investigating lots of different things, I found that changing the MSS on the FusionHub’s WAN port to 1310 (1350 minus 40 for the IP and TCP headers) fixed all the fragmentation. Figured I’d share in case anyone else is seeing excessive fragmentation on their SpeedFusion charts.
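For anyone wondering what the WAN-port MSS setting actually does: it rewrites the MSS option in TCP SYN packets so both endpoints negotiate smaller segments that fit the tunnel. As an analogy only (this is not how FusionHub implements it internally), the equivalent on a Linux router would be an iptables TCPMSS rule:

```shell
# Clamp the advertised TCP MSS on forwarded SYN packets to 1310 bytes
# (1350 inner MTU minus 20 bytes IP header and 20 bytes TCP header).
# Hypothetical Linux equivalent of FusionHub's WAN MSS setting.
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --set-mss 1310
```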
Just a quick update to the numbers on this: unencrypted SpeedFusion overhead appears to be 68 instead of 80, as determined through testing.
Link 1 - MTU 1500, MSS Auto(1460)
Link 2 - MTU 1430, MSS Auto(1390)
FusionHub WAN - MTU 1500, MSS 1322
SpeedFusion allows a full-size 1500-byte packet, but via the TCP headers it steers packets towards 1362 bytes using MSS clamping. I still see one packet every few minutes get fragmented, but this gives you the best of all worlds: the connection will accept full-size packets and fragment when needed, while steering TCP clients towards the right size via MSS. Here’s the math in case someone needs to adjust for their own use…
My Target: 1430 MTU
Tunnel overhead: 68 Unencrypted (change to 80 for encrypted)
Largest packet target: 1362
Largest ping: 1334 (1362 minus 28 bytes of IP + ICMP headers): ping www.grc.com -f -l 1334. You have to watch the SpeedFusion chart while running this, because the tunnel will still accept a full-size packet; you can’t see the fragmentation in the ping window, you have to use the charts.
Largest MSS: 1322 (1362 minus 40 bytes of IP + TCP headers)
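The arithmetic above can be sketched as a tiny script, in case anyone wants to plug in their own WAN MTU. The header sizes are standard (20-byte IP, 8-byte ICMP, 20-byte TCP); the tunnel overhead values are the ones measured in this thread (68 unencrypted, 80 encrypted) and may differ on other firmware:

```python
IP_HEADER = 20    # bytes
ICMP_HEADER = 8   # bytes
TCP_HEADER = 20   # bytes

def speedfusion_sizes(wan_mtu: int, tunnel_overhead: int) -> dict:
    """Largest inner packet, ping payload, and TCP MSS that avoid
    fragmentation inside the SpeedFusion tunnel."""
    largest_packet = wan_mtu - tunnel_overhead
    return {
        "largest_packet": largest_packet,
        # Windows `ping -l` sets the ICMP data size, so subtract
        # the IP and ICMP headers (28 bytes total).
        "ping_payload": largest_packet - IP_HEADER - ICMP_HEADER,
        # MSS excludes the IP and TCP headers (40 bytes total).
        "mss": largest_packet - IP_HEADER - TCP_HEADER,
    }

print(speedfusion_sizes(1430, 68))
# {'largest_packet': 1362, 'ping_payload': 1334, 'mss': 1322}
```

For an encrypted tunnel, calling it with overhead 80 gives an MSS of 1310, matching the original post.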
This is really helpful! Feels like something @Steve might be able to turn into a button push in the future? A ‘mitigate tunnel fragmentation’ button would be nice - even if all it does is test the WANs and spit out the right MSS value to manually configure at the FusionHub…
Hi @MartinLangmaid, actually this is not the expected behavior; SpeedFusion should handle MSS clamping correctly based on the MTU of both SF peers. @C_Metz got in touch with me about this several days ago and I have confirmed it is a bug. From our internal tests over the last few days it only happens on KVM (the one reported here is on Proxmox); other hypervisors like ESXi / Hyper-V are not affected. It also affects unencrypted tunnels only, so if you have encryption turned on you’re not affected either.
The fix will be included in the next firmware release (the version number is likely 8.1.1, but TBC). In the meantime, if anyone is seeing this MSS clamping issue on FusionHub, please submit a support ticket and we can help you with a special firmware build.
I’m seeing this as well. I tried turning on encryption as a test, which reduced the fragmentation, but my WANs combined are just shy of the 60 Mbit/s or so the Balance 20x is rated for with encryption, and I’m also using the USB port, which probably puts extra load on the CPU.
Sorry to hijack this thread, but a small question about the MTU setting on a VM FusionHub installation on a server in a datacenter: is there any reason not to set the MTU to the normal Ethernet value of 1500? The default on the FusionHub WAN setting is 1440.
Depends on your provider. AWS, Azure and Vultr can all be set to 1500, though some OSes at Vultr require 1492. I’d set it at 1500 and watch the graphs to see if it’s fragmenting, then adjust if needed.
I run it under KVM on a dedicated server running Debian 10. Thanks, I will test it.
Also, if it is fragmenting, it may be the issue discussed in this thread, in which case support can provide a build that fixes it.
I haven’t tested the new 8.1.1 beta yet to see if it still has the issue.
Thanks, I will look into the new 8.1.1 beta firmware.