Any gamers in here?

I am just throwing this against the wall to see what sticks…

I have played video games all my life. I consider myself just a tad bit higher than average at most games, but my internet connection has always been a huge disadvantage ever since online gaming became a thing.

I imagine if there are any gamers amongst us, we probably face the same challenges, and while Peplink doesn’t advertise any “gaming” features, I believe that they have given us all the tools to optimize gaming traffic. I am getting older and I need any advantage I can get.

I am currently playing Apex Legends on PC. For the most part it works alright, but this game does some goofy stuff to the PC and network. I have run several packet captures and I am still scratching my head. At the end of every match, you leave the game and return to the lobby. Whenever I am returning to the lobby, party chat on Xbox has issues. Based on the traffic graphs, I don’t think it is congestion, but it is very consistent. When using Discord for voice, it doesn’t crap out. Another issue is that matchmaking bounces me around onto sub-optimal game servers.

I have tried to use packet capture software to help diagnose the issue, but I am still struggling. I can’t track the workflow, nor the “conflict”. It is almost like the game is trying to switch to a UDP port that is already in use by the party chat. They both seem to use ephemeral ports.

For the matchmaking, it is getting the list of available game servers through some mechanism other than DNS; I haven’t been able to match a DNS response address to a game server address. It seems to do a latency check at some point after a match ends and before you join a new one. “Something” is causing latency to spike during that check, and then the game just picks a server seemingly at random. I don’t know how it is calculating this latency, so I don’t know what QoS settings will make it better.
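
In case anyone else wants to poke at the same thing, this is roughly the check I have been doing by hand, written up as a Python/scapy sketch. The capture filename is a placeholder, and it only looks at A records, so it will miss anything handed out over an encrypted channel (which is exactly what I suspect Apex is doing).

```python
# Rough pass at the matching I have been doing by hand: read a capture,
# collect every A-record answer, then list UDP destinations that never
# showed up in any DNS response. "game_session.pcapng" is a placeholder
# filename; the answer-walking helper tolerates both older scapy (chained
# DNSRR layers) and newer scapy (a plain list of records).
from scapy.all import rdpcap, DNS, DNSRR, UDP, IP

def iter_answers(dns):
    an = dns.an
    if isinstance(an, list):          # newer scapy stores answers as a list
        yield from an
        return
    while isinstance(an, DNSRR):      # older scapy chains them as payloads
        yield an
        an = an.payload

dns_ips, udp_flows = set(), set()
for pkt in rdpcap("game_session.pcapng"):
    if pkt.haslayer(DNS):
        if pkt[DNS].ancount:
            for rr in iter_answers(pkt[DNS]):
                if rr.type == 1:                      # A record
                    dns_ips.add(str(rr.rdata))
    elif pkt.haslayer(IP) and pkt.haslayer(UDP):
        udp_flows.add((pkt[IP].dst, pkt[UDP].dport))

for dst, dport in sorted(udp_flows):
    if dst not in dns_ips:
        print(f"UDP to {dst}:{dport} never appeared in a DNS answer")
```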

Any help is appreciated. I mainly just want to get a discussion going if anyone else is interested in such things.

One thing that has helped with gameplay has been changing the WAN buffer size to a smaller value. I chose 512, then lowered it to 256, then tried 128, but I started getting a bit of packet loss, so I left it at 256.

Another tweak that has helped was routing all TCP out the higher-capacity WAN and only UDP down the tiny DSL link. My DSL link is much more consistent with latency, although it has higher latency. My other WAN is a WISP and the jitter is so bad that I prefer consistent high latency over sporadic latency. Since both the voice chat and the game data use UDP, I am sending them both down the same WAN. Ideally I will find a way to identify what ports or port ranges each is using and then route them separately. Unfortunately, they both appear to be so dynamic that I can’t find any static ports.

I thought UDP was supposed to be easier to deal with than TCP, but I am finding that it is harder to diagnose and troubleshoot.

Have you considered hosting your own FusionHub in the cloud somewhere like Vultr and using WAN Smoothing to duplicate the traffic across multiple links?

I was playing The Division 2 and using the setup I described above, but now I’m playing Overwatch… so I just use the least latency algorithm and the game seems to be good at compensating for changing link dynamics.


I use the fastest response algorithm and also set the video streaming QoS to low and online gaming to high. My AT&T is around 27ms vs. my Sprint, which is 45-50ms, and AT&T usually wins. Throughput isn’t as good though, so large downloads suffer some.

I have, and I set one up on AWS for testing this exact use case. It just didn’t work out for me for some reason. However, I recently started dorking around with the WAN buffer size, and I have seen major improvements to performance; so maybe that would have made the difference. All of my trial licenses are up, so maybe when I replace my Balance One in a couple of months – I will get another evaluation license to try again. In theory, it should do exactly what I am wanting. But, one pipe is too small for full duplication, and the other has too much jitter. I guess you just can’t make one great connection out of three crap connections.

Division 2 was really fun, and then I tried to go into the PVP areas of the map. It got less fun at an astounding rate. :) I classify that type of game as the “Grinding” genre. Always grinding for loot – more grinding, more/better loot, tougher enemies/challenges, repeat.

Overwatch is a very lively looking game, but since it has been around forever, it has a very steep learning curve; that meta is so deep. I imagine I would spend the whole game going “WTF just hit me?” and “which one of y’all just kicked me?”. I do like the team-based meta, and the pace of the action seems really fun.

My problem is that “fastest response” will always pick my WISP link, but the jitter is awful. It will go from 30-40ms to 210ms and back every 20-30 seconds or so. They have an oversubscription model and heavily shape the traffic going through their transmitters. It basically halts some traffic and lets other traffic through to make the average throughput match their configured profiles.

Now, if there was ever an option for “least hops”, that would be kind of cool – maybe.

Another option that I was thinking about is a DNS query response “massager”. For example, “matchmaking.gameserver.com” should only ever return two servers to my PC. A normal response contains all 8 possible IP addresses; I want the router to measure the latency to each server in that list, pick the two fastest, and strip the other 6 out. The problem that I am running into is that Apex Legends in particular doesn’t seem to get the list of game server IPs through DNS – I think it is using some encrypted data transfer to pass that data to the client. And then it uses some seemingly random port number (not really random; they are all in the 30,000-40,000 range, usually ending in 10: 30710, 31510, etc.). The best I have been able to do to isolate that traffic is to route the entire UDP range.

The other issue is that the way the games are determining which server is “closest” is actually causing latency spikes. Sending ping packets to 100 IPs concurrently on a 1Mbps DSL uplink causes all kinds of wacky results.
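
If I ever build that DNS “massager”, I picture something like this rough Python sketch: resolve the name, probe the returned addresses a couple at a time so the little DSL uplink never gets swamped, and hand back only the two fastest. The hostname is made up, and the timed TCP connect to port 443 is just a stand-in probe; real game hosts may not answer TCP at all, so an ICMP or UDP probe might be needed instead.

```python
# Sketch of the "DNS massager": resolve a (hypothetical) matchmaking name,
# measure latency to each address with limited concurrency, keep the two
# fastest, and drop the rest. Probing two at a time is deliberate; blasting
# all of them at once is what spikes the latency on a 1Mbps uplink.
import socket
import time
from concurrent.futures import ThreadPoolExecutor

def resolve(name):
    infos = socket.getaddrinfo(name, None, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

def probe(ip, port=443, timeout=2.0):
    start = time.perf_counter()
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return ip, (time.perf_counter() - start) * 1000.0   # milliseconds
    except OSError:
        return ip, float("inf")                                  # unreachable

if __name__ == "__main__":
    candidates = resolve("matchmaking.gameserver.com")   # hypothetical name
    with ThreadPoolExecutor(max_workers=2) as pool:      # cap concurrency
        results = sorted(pool.map(probe, candidates), key=lambda r: r[1])
    keep = [ip for ip, ms in results[:2] if ms != float("inf")]
    print("would answer the client with only:", keep)
```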

I also found that Xbox Party Chat is difficult to “identify”. It seems to go based on the source port of the initiating socket, and it uses an ephemeral port. I haven’t found a way to limit it to a range or a specific port number.

I will say D2 was great until I spent 3 months with a perfected build and was grinding for no reason… then I started searching for a new game. Overwatch was a steep learning curve, but it is good for my aging brain… lol. It took no time at all to learn one of the easy tank characters and one of the easy healers… It’s taking me some time to pick up the other 30 characters… you are right about the steep learning curve.

As for FusionHub, I’ll say… I didn’t like the AWS data center options because I’m in Texas and they are on each coast. I then rolled out a FH at Azure in San Antonio… I liked the performance, but the cost of bandwidth was high. Then Martin on this forum recommended Vultr in Dallas and it was cheaper with lower latency, so that’s where my FH ended up. I find that only sending key traffic to the FH is best, keeping the noisy traffic headed directly to the internet. I’m using 3 links to talk to my FH, so I’m able to achieve an extremely stable 33ms when using WAN Smoothing. Anywho, I mention all this because if you end up back on the FH way of solving issues, you may want to try different data centers.

These are great suggestions! I am glad that I am not the only one that is using these devices to help support gaming as well as basic connectivity.

Have you guys changed your WAN buffer sizes at all? If so, what techniques did you use to determine the proper size? Windows 10 default buffer sizes are 256 on the receive side and 512 on the send side. I don’t know if Windows adjusts these sizes based on performance metrics or not; these are just the values that I can see in the network adapter driver.

One thing that is becoming clear is that UDP is tougher to analyze than TCP. For whatever reason, source and destination seem to get blurry. I get that it is connection-less, but someone sent the first packet – so they should be the “source”, right? When doing a Windows packet capture, Windows is unable to tie UDP traffic back to a process name; it always ends up under the “unknown” or “not available” category. Basically, trying to find characteristics of the underlying connections for the purposes of routing is difficult. Some game titles are consistent – Call of Duty always uses UDP ports 3074-3076, for example. But Apex Legends uses a different strategy, and trial and error seems to be the only way to reliably come up with routing rules.
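
For what it is worth, the third-party psutil package can usually answer the “which process owns this UDP socket” question for sockets that are still open, which is more than the Windows capture tooling gives me. A rough sketch; run it from an elevated prompt on Windows to see everything.

```python
# Snapshot which process owns each bound UDP socket right now. This only
# sees sockets that are currently open, so run it mid-match; the quick
# connect/disconnect stuff will still slip through the cracks.
import psutil

def udp_ports_by_process():
    owners = {}
    for conn in psutil.net_connections(kind="udp"):
        if conn.pid is None or not conn.laddr:
            continue
        try:
            name = psutil.Process(conn.pid).name()
        except psutil.Error:
            continue
        owners.setdefault(name, set()).add(conn.laddr.port)
    return owners

for name, ports in sorted(udp_ports_by_process().items()):
    print(f"{name}: {sorted(ports)}")
```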

My end goal is to limit the traffic on my DSL link to ONLY the game traffic and the audio chat traffic (Discord, Xbox game chat, in-game chat, etc.). Splitting traffic based on UDP vs. TCP works pretty well, but there is a lot of UDP traffic that isn’t part of the game data or chat data streams, so I would like to get a bit more granular with my routing rules. So I start up a packet capture, launch the game, load into a lobby, play for a minute or so, quit the game, and then start to look at the data. I am finding that the game’s traffic seems to be broken up into separate audio/video streams. They seem to have some kind of identification/specialization that my packet capture parsers are able to evaluate. Is there a way to leverage these identifiers for the purposes of traffic routing? Are these values what the DPI engine uses to determine whether traffic is part of “All supported video streaming”?
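
Here is the kind of rough post-processing I do on those captures: just bucket the UDP traffic by destination port so candidate Outbound Policy ranges fall out of the data instead of guesswork. The filename is a placeholder and it assumes scapy again.

```python
# Group a capture's UDP traffic into 1000-port-wide buckets so the port
# ranges a game actually uses stand out (e.g. "UDP 37000-37999").
from collections import Counter
from scapy.all import rdpcap, UDP, IP

packets_per_bucket = Counter()
bytes_per_bucket = Counter()
for pkt in rdpcap("lobby_and_match.pcapng"):      # placeholder filename
    if pkt.haslayer(IP) and pkt.haslayer(UDP):
        lo = (pkt[UDP].dport // 1000) * 1000
        packets_per_bucket[lo] += 1
        bytes_per_bucket[lo] += len(pkt)

for lo, count in packets_per_bucket.most_common(10):
    print(f"UDP {lo}-{lo + 999}: {count} packets, {bytes_per_bucket[lo]} bytes")
```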

Haven’t messed with buffer size at all. Touched it once… things broke… undid the change and thought to myself… well, I’m not touching that again… lol

Think of UDP as 2 one way conversations controlled by the application layer. Both people are sources and destinations and since the application layer is in control, the OS layer claims ignorance.

I use the Active Sessions feature inside the Peplink to monitor most things… much easier than a sniffer trace / Wireshark capture.

Source and destination ports in custom rules in the Peplink outbound policy will help you send only the traffic you want down each link. DPI is bad for these purposes because it has a first-packet problem… meaning the conversation starts on WAN 1… then DPI identifies it as video and attempts to move it to WAN 2… after the conversation has already started… not ideal when gaming. I like to stick with things it can see in the packet header… IP address and ports.

When parsing Division 2 traffic I found the actual game movements were sent on port 443… so while I had identified tons of other UDP ports the game was using, like 50000, etc.… in the end, IP address and 443 outbound were as granular as I could get for that particular game… which is frustrating. I actually played with, and had some success with, domain-based rules where I took all traffic going to ubi.com and sent it down a link… but ultimately I gave up on that… it did work well, though. That might be something you want to try.

Thanks for the tips and I am coming to similar conclusions.

I “think” I discovered a bit of a pattern: the port chosen seems to be the ephemeral port (source port) of some other HTTPS connection. So, in a way, my PC is actually picking the port at runtime. I say “think” because I haven’t really found a “great” packet analysis tool. I need something that helps me identify traffic patterns with some kind of layer 7 knowledge as well. The Active Sessions window is great, but it only shows you active sessions; quick connect/disconnect stuff is lost quickly.
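
To test that hunch, I have been running something like this while a match is loading: grab the local ports of established HTTPS sessions and intersect them with the ports on the UDP sockets. It uses psutil, and an overlap only hints at the pattern; it does not prove it.

```python
# Cross-check the "UDP port mirrors an HTTPS ephemeral source port" theory.
import psutil

tcp_443_local = {
    c.laddr.port
    for c in psutil.net_connections(kind="tcp")
    if c.status == psutil.CONN_ESTABLISHED and c.raddr and c.raddr.port == 443
}
udp_local = {c.laddr.port for c in psutil.net_connections(kind="udp") if c.laddr}
udp_remote = {c.raddr.port for c in psutil.net_connections(kind="udp") if c.raddr}

print("HTTPS source ports also bound by a UDP socket:",
      sorted(tcp_443_local & udp_local))
print("HTTPS source ports also seen as a UDP destination:",
      sorted(tcp_443_local & udp_remote))
```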

I originally was hoping to use some combination of Network Monitor 3.4, the Active Sessions window, the firewall logs (from both routers - I have a weird setup), and the activity monitor (a pretty netstat -aon). But I am still struggling, partly because it is more fun to play the game than it is to do packet analysis.

I need to find a way to reverse engineer the application’s network-related workflows, and then maybe set up some kind of UPnP. I think the TCP connection’s source port is going out WAN1, then WAN2 tries to send UDP data, and the game/chat server gets confused. It seems to be using an established TCP session to “unblock” firewalls and allow the UDP data through. I imagine there is some kind of property that would tie all these data streams together. Maybe a sequence ID? Maybe just source IP and port? Trying to decipher it, and then coming up with a way to make the router understand what I need it to do, seems impossible with the tools I currently have.

I was hoping to find something like: a DNS request yields a list of IPs, and one of those IP addresses should be the destination for some of the game’s UDP traffic. I was going to create some local DNS entries to only include the game servers with a latency of less than 100ms. That would have been too easy - I am not having any luck. It looks like the actual game server IP addresses are exchanged by some method other than DNS.

I will look into the DNS name routing again - maybe it has been changed since I tried it last. It only worked for names that were returned with a reverse DNS response, and that doesn’t work with many cloud providers.

For now, I just manually enable/disable rules that involve port ranges. I am starting to look for ways to restrict Windows to user-defined port ranges for applications. It would be so much easier if developers would document their games’ networking, but they seem to guard it very closely.
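
The closest built-in knob I know of is global rather than per-application: Windows will let you shrink the dynamic (ephemeral) port range with netsh, so every app ends up sourcing its ephemeral traffic from whatever window you pick, and a router rule can then match that window. A sketch wrapping the two commands; run it from an elevated prompt, and the start/num values here are only example numbers.

```python
# Show, then narrow, the global dynamic UDP port range on Windows.
# Note this affects every application on the machine, not just the game,
# and Windows enforces a minimum range size.
import subprocess

def run(cmd):
    print(">", " ".join(cmd))
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)

run(["netsh", "int", "ipv4", "show", "dynamicport", "udp"])
run(["netsh", "int", "ipv4", "set", "dynamicport", "udp",
     "start=50000", "num=1000"])   # example window: 50000-50999
```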

I use “Lowest Latency” on SpeedFusion for most UDP traffic (which games tend to be). I might change that to WAN Smoothing, though, once I swap one of the VDSLs for Starlink. At the moment the DSL has lower latency than Starlink, although apparently Starlink is aiming for 16-19ms, so it might actually end up with lower latency than my DSL.

I did actually try a game over SL by changing the priorities around so that it was preferred (put it on a sub-tunnel where the WAN priorities were set that way) and it worked fine, but SpeedFusion did have to move the traffic back onto the DSL a few times when it lost the tunnel to the FusionHub over SL (might be loss of sats). I have the heartbeat time set to faster, and all I noticed was a brief stutter (which wouldn’t have even happened if WAN Smoothing was on).

If you have the SpeedFusion license on your Peplink, it will be worth dropping a FusionHub on a cloud provider with a nearby datacenter.

Also, a “nearby” datacenter in physical terms might not be all that nearby in internet terms. People trip up on this one quite a bit: having a server physically close is actually going to hurt rather than help if the nearest POP/IXP where your ISP interconnects to that provider is still several hundred miles away, as the traffic has to go to the POP/IXP and back.

Edit:

Forgot to say: the “Lowest Latency” profile in SpeedFusion is, I believe, based on the health check packets. Unlike fastest response, it can and will switch traffic onto a different WAN within the SF tunnel if the latency situation changes and that WAN becomes a better option.

Well, I tried some “out of the box” type approaches, but they didn’t work. I tried using an exe called ForceBindIP. It is supposed to hook into Winsock API calls and force a bind to the specified IP address. My logic was: “If I can get all the game’s connectivity tied to a unique IP, routing should be easy.” Unfortunately, the way Apex launches causes ForceBindIP to be ineffective. From what I can tell, the game creates a copy of itself in memory and then launches that in-memory version using its own command line. Some of the initial game launcher connectivity DOES actually use the IP I specify, but as soon as the game is started, it goes back to the default IP of the machine.

But, since some of the servers had connections established with this forced IP, I tried to use the “Persistence by destination” rule to “sticky” those endpoints to the WAN I wanted. Same destination IP, different source IP. It didn’t work, and after reading up on the persistence algorithm, it was never going to. Persistence always uses the source IP in its decision making, even when destination is chosen as the “persistence key”.

However, I did find a way that seems to work for the games I am currently playing: Apex Legends, Fortnite, and Call of Duty. I have created an outbound policy for each game title and put them at the top of the list.

Policy 1 - Apex_UDP - Any, Any, UDP 37000-38000 - DSL
Policy 2 - Fortnite_UDP - Any, Any, UDP 9000-9100 - DSL
Policy 3 - COD_UDP - Any, Any, UDP 3074-3078 - DSL
Policy 4 - TCP_443_WISP - Any, Any, TCP 443 - WISP
Policy 5 - TCP_80_WISP - Any, Any, TCP 80 - WISP
Default Policy - Least Used (which is usually the DSL link)

Using these outbound policies, I am able to play my games with steady latency (~65ms) on my 1.4Mbps/750kbps DSL link. My audio chats go out the WISP connection, as most of the ones I use (Discord, Xbox party chat, in-game chat) seem to use UDP ports above 50,000. I really don’t care if the chat gets a bit garbled due to latency jitter, but good gracious, dying in a game because of jitter really drives me nuts. I am getting older and slower and I don’t need any technological handicaps!
