SDX / SDX Pro BGP Throughput

Hi - does anybody have info / real-world experience regarding SDX and SDX Pro throughput with BGP? I've got a client with its own AS and IP block with two 10G transit providers. I'd love to use Peplink for this but can't find any performance stats or anecdotes. Can the SDX handle a full 10G of BGP throughput? How about the SDX Pro at a full 20G? Default routes would be alright, but I'd also be interested in default + 1 or full-table performance.

Thanks,
Ryan

The protocol by which routes are learned should not have any impact on the throughput the appliance is capable of… :slight_smile:

That said, loading a pair of full IPv4 tables into an SDX/Pro may chew up an unwanted amount of memory - maybe someone who has actually done this can comment, as I'd be curious to know how it handles it too. How long does it take to load a full IPv4 table into the RIB/FIB on an SDX?
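For a rough sense of why full tables are a memory concern, here's a back-of-envelope sketch. The figures are assumptions, not measured Peplink numbers: roughly 1,000,000 IPv4 prefixes per feed (the global table is in that neighborhood) and an assumed ~250 bytes of RIB overhead per route, which varies widely by implementation:

```python
# Hedged estimate of RAM needed to hold full IPv4 BGP feeds in the RIB.
# All constants are assumptions for illustration, not vendor figures.
PREFIXES_PER_FEED = 1_000_000  # assumed size of one full IPv4 feed
BYTES_PER_ROUTE = 250          # assumed RIB entry + path-attribute overhead
FEEDS = 2                      # two transit providers, as in the OP

rib_bytes = PREFIXES_PER_FEED * BYTES_PER_ROUTE * FEEDS
print(f"~{rib_bytes / 2**20:.0f} MiB just for the RIB")  # ~477 MiB
```

Even with generous assumptions that's hundreds of MiB before the FIB, BGP process state, and churn are accounted for, which is why the question of how an SDX copes matters.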

I’d also be concerned about how many CPU cycles would be consumed by the BGP process if a session flaps with a full table or there is a lot of churn in the routing tables.

Not 100% sure, but I don't believe there is any hardware offload for the data path in Peplink's world - everything is CPU-bound. So if the BGP process is chewing up the CPU while bringing up a session, that could be very bad.

Finally, frankly from what I’ve seen the BGP implementation on Peplink is fairly basic and lacks the advanced features / knobs that I am used to in the more typical Cisco / Juniper boxes we have in that role in our network.


Thanks for the insight, Will. My instinct was to look at Cisco / Juniper / Fortigate / Arista for the edge but was curious if anybody was successfully using Peplink with a good bit of throughput and/or full tables. Also good point regarding hardware offload.

Circling back to provide an update on this.

I advertised my IP block via BGP sessions with 2 transit providers using a pair of SDXs in HA for roughly a year. For the most part this worked as expected and was reliable. My 95th percentile traffic was consistently less than 1Gbps, so this wasn't too taxing / approaching the ceiling on the units. CPU usage varied from 3% to 30% depending on current traffic and the number of active sessions. It should be noted that I was only taking default routes, not full tables. I expect that if I had tried to take full tables I would have had higher CPU usage and more issues. There were a few times when I needed to restart a unit to fix odd connection issues downstream. The largest issue I found was that latency would spike when traffic increased. I would typically see 3 - 6 latency spikes per day, on the order of 2ms → ~50ms.
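If anyone wants to watch for the same behavior, the spike pattern above is easy to flag from periodic RTT probes. A minimal sketch of the detection logic (the 2 ms baseline and 10x factor are illustrative thresholds matching my observed 2ms → ~50ms jumps, not anything Peplink exposes):

```python
# Flag RTT samples that jump well above an assumed steady-state baseline.
def find_spikes(rtts_ms, baseline_ms=2.0, factor=10.0):
    """Return indices of samples exceeding factor * baseline_ms."""
    return [i for i, rtt in enumerate(rtts_ms) if rtt > baseline_ms * factor]

# Example: probes collected once a minute during a busy period.
samples = [2.1, 2.3, 48.7, 2.0, 51.2, 2.2]
print(find_spikes(samples))  # [2, 4]
```

Feed it RTTs from whatever prober you already run (smokeping, a cron'd ping, etc.) and you can count spikes per day the same way I was eyeballing them in the GUI graphs.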

I have since moved to an ASIC-based solution with a bunch of redundancies in place and no longer see any latency variation.

Overall I would say this worked well. I was able to utilize hardware I had on hand to get the job done. Configuration was certainly simpler than the new ASIC-based BGP solution. If you have consistently higher traffic I might look at an ASIC solution first, but if you aren't going nuts on the connection it worked pretty well. Peplink is an x86-based solution, so everything goes through the CPU. This was a good way for me to dip my toes into BGP while maintaining good metrics and visualization of what was happening. The Peplink GUI helped me see what the current status / throughput was and helped me notice problems without needing specific command-line experience. If you're considering BGP / getting your own IP block, the SDX / SDX Pro / EPX is a good option to get started with minimal BGP knowledge. Having the firewall and outbound policy routing available was additionally helpful.