I wanted to know why the load isn't more evenly distributed. (These 3 links all go to the same place and have the same bandwidth, 300/100 Mbps.) I see the download saturate at 300 Mbps with plenty left on the other links, and the upload always favors the non-downloading link. Is there some magic button that fixes this that I've not seen?
What sort of links are they?
Use the WAN Analysis tool and test the RAW simultaneous point to point throughput between the Peplink appliances and check it’s not shared.
Which Peplink hardware are you using?
And have you tried Dynamic Weighted Bonding instead of normal Bonding? My experience is that the newer DWB algorithm has better performance, especially with more links and fluctuating latencies between links.
PPPoE fiber, same operator, different strands of the same cable. It's a 710 and a FusionHub.
I have tried DWB, which returned a similar result.
I ran the WAN Analysis in the early hours of the night, but I didn't understand the result, as it seems to think one link is 700+ Mbps?
I did also try adding 2 more connections, to another operator, and I still see the favoring. I set these as download only because their upload speeds are insignificant.
The post before this one is great, because we can see that as the link passes 200 Mbps it starts to be lossy, and it's at this point that the other links are most needed but effectively idle.
Hi, @Daniel_Morgan
I may be wrong… but let me write a few things.
Some facts about Peplink with PPPoE:
- Peplink doesn't have a dedicated network CPU to handle PPPoE encapsulation, so the main CPU needs to do the job. (You can find some posts about this on the forum.)
I like this guy's explanation…
Can you look at the CPU usage of your box? Maybe you have reached maximum capacity over PPPoE, and this is affecting your SpeedFusion throughput.
What we (Peplink guys) don't know…
Looking at the datasheet of each device, we don't have any information about the CPU inside (model, frequency, etc.).
Looking at the datasheet of the BPL-710 (HW3):
Stateful Firewall Throughput: 2.5 Gbps
PepVPN Throughput: 400 Mbps
And PPPoE? Not a word about it.
Can you share the hardware specs you are using?
Also… just check on your BPL-710:
- at each WAN: Upload Bandwidth and Download Bandwidth settings
- check at Dashboard > Status > WAN Quality > Connection
Forgive my ignorance, but I'm not really understanding. I do realize SpeedFusion is CPU intensive and that the spec rates it at 400 Mbps; I am very much OK with this, as that's about the level of throughput I'm aiming for.
The QoS values on the WAN interfaces are all set to 1 Gbps, but we've tried every combination of these.
The CPU looks like this; I've not seen it above 50% when bursting.
I did read the post, but my issue is about the uneven distribution of the traffic, even at low data rates.
To provide some more detail: the PPPoE terminates on its own device, provided by the upstream; there's no PPPoE happening inside the Peplink. We NAT the Peplink on the modem, and its CPU is less than 10% (Peplink to modem is 1500 MTU). To add to that, the additional 2 links I added yesterday are DHCP provided, standard 1500 MTU. These also go to the same provider; we have a lot of these, and I'm pretty confident the bandwidth rates are what we pay for.
I do also want to add that the traffic over this is all routed (no NATing anywhere). I do understand that one host can saturate one link, but here it's many-to-many, and I'm expecting that over time it wouldn't favor one link? Turning them off and on moves which link is the favorite, which I also found interesting.
When I use only 2 WANs, it seems balanced?
Originally the plan was to have 2 x 2, with the 2nd pair as backup lines, but to test the actual capacity of the device and confirm the 400 Mbps SpeedFusion throughput, I put the 3 fibers in and then saw this imbalance, and all I can do is ask what to do next. If it stays more balanced like this, OK, I don't have any real issue; I have 2 active & 2 backup lines, which is what I'm seeking.
Yes… it is balanced… You will always have something in the middle (the internet). Things we cannot see, like how many hops from one node to another (traceroute).
So the issue is that you have packet loss when sending raw traffic over both fibers at the same time. When in a SpeedFusion tunnel, the algorithm by default favours the lowest-latency paths with the least amount of packet loss.
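To picture why that can leave the tunnel leaning heavily on one "best" link while the others sit nearly idle, here is a toy model. The weighting formula and numbers are made up purely for illustration; this is not Peplink's actual scheduler:

```python
# Toy model only: weight each path by latency and (heavily) by packet loss,
# then look at what share of packets each one receives. Illustrative, not
# Peplink's actual SpeedFusion/DWB implementation.
import random

def weight(latency_ms: float, loss: float) -> float:
    # Lower latency -> higher weight; loss is penalised hard (arbitrary exponent).
    return (1.0 / latency_ms) * (1.0 - loss) ** 50

def traffic_share(links: dict, packets: int = 100_000) -> dict:
    names = list(links)
    weights = [weight(*links[n]) for n in names]
    picks = random.choices(names, weights=weights, k=packets)
    return {n: round(picks.count(n) / packets, 2) for n in names}

# (latency_ms, loss_ratio): fiber2 and fiber3 show a little loss under load
print(traffic_share({"fiber1": (109, 0.00), "fiber2": (111, 0.02), "fiber3": (112, 0.03)}))
# -> fiber1 ends up carrying the bulk of the traffic
```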
To tune this behaviour, use Dynamic Weighted Bonding, set the congestion mechanism to latency only (not packet loss and latency), set FEC to high, and disable TCP ramp-up.
Then send loads of traffic over the tunnel. Pay very close attention to the latency of each WAN, as the next step is to fine-tune the latency cutoff per WAN to reduce the impact of bufferbloat.
FEC will help replace packets that are lost, the latency cutoff should limit jitter/bufferbloat, and I expect you'll see 420-450 Mbps of stable, very good quality bandwidth over SpeedFusion. If you want more bandwidth, lower FEC to medium, but keep an eye on the resulting packet loss in the tunnel.
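As a rough back-of-envelope for the FEC trade-off (the overhead percentages below are assumptions for illustration, not Peplink's published ratios):

```python
# Rough arithmetic only: FEC spends some of the bonded WAN capacity on
# redundancy. The overhead ratios are assumed for illustration; in this setup
# the practical ceiling is the box's SpeedFusion processing (rated 400 Mbps
# PepVPN on the BPL-710 datasheet), not the links themselves.

link_down_mbps = [300, 300, 300]   # the three 300/100 fibers, download side
raw_bonded = sum(link_down_mbps)   # 900 Mbps of raw WAN capacity

for fec_setting, assumed_overhead in [("high", 0.20), ("medium", 0.10)]:
    payload_capacity = raw_bonded * (1 - assumed_overhead)
    print(f"FEC {fec_setting}: links could still carry ~{payload_capacity:.0f} Mbps of payload")
```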
Good luck!
Apparently the magic button I was looking for was to ignore the packet loss event. Thank you.
I have set DWB mode but couldn't find a latency-only mode (?); I set the level to high, enabled FEC (low), and disabled TCP ramp-up as you suggested. This was still favoring one link; I then checked the ignore packet loss event flag and voilà, it looks like the graph (4am+). I am OK for it to be like this, but I'll have to wait for peak hour to be sure.
I have tried the latency cutoff many times without success (the FusionHub has only 1 WAN, so only 1 cutoff value). I didn't enable this again.
Glad you’re in a better place.
That is in InControl, when you configure the SpeedFusion tunnel there:
On the web UI it is a checkbox:
Don't set it on the FusionHub; latency in the datacenter should be reliable, and you only have one WAN there, so it has no effect. Set it on the fiber WANs of the remote device; you'll want to use it to stop SpeedFusion from using a WAN when its latency spikes. Consistent latency and reduced packet loss due to FEC will keep TCP window sizes high and increase throughput and reliability.
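If it helps to visualise what the cutoff is doing, here is a tiny sketch under the assumption (not a statement of Peplink's internals) that a WAN whose measured latency exceeds its cutoff is temporarily skipped until it recovers:

```python
# Minimal sketch, assuming the cutoff means "skip a WAN while its measured
# latency is above the configured threshold". Not Peplink's documented internals.
from dataclasses import dataclass

@dataclass
class Wan:
    name: str
    cutoff_ms: float     # configured latency cutoff for this WAN
    measured_ms: float   # latest latency measurement

def eligible_wans(wans: list) -> list:
    """WANs the tunnel should keep sending on right now."""
    return [w.name for w in wans if w.measured_ms <= w.cutoff_ms]

wans = [
    Wan("fiber1", cutoff_ms=150, measured_ms=112),
    Wan("fiber2", cutoff_ms=150, measured_ms=240),   # bufferbloat spike under load
    Wan("fiber3", cutoff_ms=150, measured_ms=109),
]
print(eligible_wans(wans))   # fiber2 is skipped until its latency settles
```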
Also make sure you set FEC at both ends; it only affects upload traffic, of course…
If you open your SpeedFusion graphs, send some traffic over SpeedFusion, and export and share the graphs here, we can see if there is any further optimisation that might be possible.
The latency cutoff seemed to work in only one direction.
On these links the latency fluctuates within 5 ms, and when one does, they all do. You can see in all of the screenshots with traffic that the latency is always 109~115 ms regardless of throughput.
Great, it's OK like this; actually it's more than expected, with 500 Mbps peaks.