So I stumbled upon a fix to a problem I didn’t know I had. Sharing the info in case someone else sees the same thing in their SpeedFusion tunnel.

I have MTUs of 1500 and 1430 on my two WANs. The SpeedFusion tunnel appears to let full-size packets inside it and fragments where needed, which was resulting in a high fragmentation rate on the WAN with the 1430 MTU. I could see this on the SpeedFusion graphing page. I changed a PC on my network to an MTU of 1350 (1430 minus the 80 for SF), and that worked for that one device’s traffic, but I have lots of devices on the network. After investigating lots of different things, I found that changing the MSS on the FusionHub’s WAN port to 1310 (1350 minus 40) fixed all the fragmentation. Figured I’d share in case anyone else is seeing excessive fragmentation on their SpeedFusion charts.
Just a quick update to the numbers on this: through testing, unencrypted SpeedFusion overhead appears to be 68 bytes instead of 80.
Link 1 - MTU 1500, MSS Auto(1460)
Link 2 - MTU 1430, MSS Auto(1390)
FusionHub WAN - MTU 1500, MSS 1322
SpeedFusion allows a full-size 1500-byte packet, but via the TCP headers it steers packets towards 1362 using MSS clamping. I still see one packet every few minutes get fragmented, but this gives you the best of all worlds: the tunnel will accept a full-size packet and fragment when needed, while steering TCP clients towards the right size. Here’s the math in case someone needs to adjust for their own use…
My Target: 1430 MTU
Tunnel overhead: 68 Unencrypted (change to 80 for encrypted)
Largest packet target: 1362
Largest ping: 1334 (28 bytes of IP/ICMP overhead) (ping www.grc.com -f -l 1334). You have to watch the SpeedFusion chart while running this, because the tunnel will allow a full-size packet; you can’t see the fragmentation in the ping window, you have to use the charts.
Largest MSS: 1322 (40 bytes of IP/TCP overhead)
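For anyone adapting these numbers to their own link, the arithmetic above can be sketched as a small helper. This is just a sketch: the 68-byte unencrypted and 80-byte encrypted tunnel overheads are the figures from this thread, and the 28-byte ICMP and 40-byte TCP header deductions are the standard IPv4 header sizes.

```python
def speedfusion_sizes(target_mtu: int, encrypted: bool = False) -> dict:
    """Derive fragmentation-free sizes for a SpeedFusion tunnel.

    Overheads per this thread: 68 bytes unencrypted, 80 bytes encrypted.
    """
    overhead = 80 if encrypted else 68
    largest_packet = target_mtu - overhead  # biggest IP packet that fits in the tunnel
    return {
        "largest_packet": largest_packet,
        "largest_ping": largest_packet - 28,  # minus 20B IP + 8B ICMP headers
        "mss": largest_packet - 40,           # minus 20B IP + 20B TCP headers
    }

# Example: 1430-byte target MTU, unencrypted tunnel
print(speedfusion_sizes(1430))
# {'largest_packet': 1362, 'largest_ping': 1334, 'mss': 1322}
```

With encryption turned on, the same 1430 target gives an MSS of 1310, matching the value from the first post.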
This is really helpful! Feels like something @Steve might be able to turn into a button push in the future? A ‘mitigate tunnel fragmentation’ button would be nice, even if all it does is test the WANs and spit out the right MSS value to manually configure at the FusionHub…
Hi @MartinLangmaid, actually this is not the expected behavior; SpeedFusion should handle MSS clamping correctly based on the MTU of both SF peers. @C_Metz got in touch with me about this several days ago and I have confirmed this is a bug. From our internal tests over the last few days, it only happens on KVM (the affected setup is on Proxmox); other hypervisors like ESXi / Hyper-V are not affected. It also affects unencrypted tunnels only, so if you have encryption turned on you’re not affected either.
The fix will be included in the next firmware release (version number is likely 8.1.1, but TBC). In the meantime, if anyone is seeing an MSS clamping issue on FusionHub, please submit a support ticket and we can help you with a special firmware.