I have a house with Peplink AP One Minis, one per major room.
With this setup, I see 5 GHz signal strengths (RSSI) like this:
-44 dBm for the line-of-sight AP
-80 dBm for the AP in another room
The two APs are set to 40MHz channel width, on non-overlapping 5GHz channels.
This source (link) suggests that Wi-Fi 5 (802.11ac) can get up to about 200 Mbps on a clean 5 GHz channel at 40 MHz width, and that's about the maximum I see in practice.
It says that an 80 MHz channel might go as high as 433 Mbps.
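For what it's worth, those two figures line up with the single-spatial-stream 802.11ac PHY rates at MCS 9 with a short guard interval. A quick back-of-the-envelope check (my own arithmetic, not taken from the linked source; the function name is just for illustration):

```python
# Rough 802.11ac (VHT) peak PHY-rate check: one spatial stream, 256-QAM 5/6 (MCS 9),
# short guard interval. These are standard 802.11ac constants, not measurements.

def vht_phy_rate_mbps(data_subcarriers, bits_per_subcarrier=8, coding_rate=5/6,
                      symbol_time_us=3.6):
    """Peak PHY rate in Mbit/s for a single spatial stream."""
    bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate
    return bits_per_symbol / symbol_time_us

print(vht_phy_rate_mbps(108))  # 40 MHz channel -> 200.0 Mbps
print(vht_phy_rate_mbps(234))  # 80 MHz channel -> ~433.3 Mbps
```

Real-world TCP throughput will of course come in below these PHY rates.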
My question: would it be better to stay with my current channel setup? Or would I get better performance if I put both APs on the same 80 MHz channel (i.e., overlapping)?
In my deployments I do 40MHz non-overlapping channels for several reasons. I’d rather have a (relatively) clean RF environment and somewhat limited speeds than have faster speeds and a more complex RF environment.
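To put a number on "non-overlapping": the wider the channel, the fewer distinct channels the 5 GHz band can hold. A rough count for the usual US allocations (exact availability varies by country, radio, and whether DFS is enabled; the groupings below are illustrative, not taken from Peplink's UI):

```python
# Count of non-overlapping 5 GHz channels at 40 MHz vs 80 MHz width (typical US plan).
NON_DFS_40 = ["36+40", "44+48", "149+153", "157+161"]
DFS_40 = ["52+56", "60+64", "100+104", "108+112",
          "116+120", "124+128", "132+136", "140+144"]
NON_DFS_80 = ["36-48", "149-161"]
DFS_80 = ["52-64", "100-112", "116-128", "132-144"]

print(f"40 MHz: {len(NON_DFS_40)} without DFS, {len(NON_DFS_40) + len(DFS_40)} with DFS")
print(f"80 MHz: {len(NON_DFS_80)} without DFS, {len(NON_DFS_80) + len(DFS_80)} with DFS")
```

With only two non-DFS 80 MHz channels, two nearby APs can stay out of each other's way; add a third AP, or a neighbor's router, and something has to overlap.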
Well, that’s the "question of the year." I can’t tell you how much time we have spent trying to optimize Wi-Fi over the years, and at dozens of locations. @MarceloBarros and @ChristopherSpitler both bring significant experience, and what they say (above) has great merit; the references they cited are also good. When troubleshooting a goofy Wi-Fi situation a year or two ago I submitted a ticket to Peplink, wherein a minor AP FW issue was subsequently identified (now fixed), and I took the opportunity to ask for Peplink’s thinking about channel use. Essentially, I was advised to use the same channels on APs within hearing distance of each other unless doing so caused an issue.
A gratuitous aside, which you may or may not find helpful: as I write this, I find myself in an extremely dense Wi-Fi environment. The three Peplink APs (the excellent, older Wi-Fi 5 versions at this location) see dozens of foreign APs with signal strengths of -80 dBm or better. (These are a mix of various owner-furnished routers/APs and AT&T-supplied Humax BGW320s, with the occasional goofy Wi-Fi extender tossed in for spice.) In brief, it is a mess, RF-wise.
I’ve played with this a bit as time permits. The “quasi-Pareto-optimal” solution at this location is presently to:
Let all three APs use the same frequencies on both 2.4 and 5 GHz. Don’t set the channel to “Auto”: the Humax devices are reasonably smart and Auto is their default (few people will change this). I want as many of them as possible to see our system, always, and avoid us to the extent possible; setting ours to Auto would just make us a “moving target.”
Use 20 MHz on 2.4 GHz. Here, 2.4 GHz is primarily used for IoT/M2M devices; reliability is important and blinding speed is not. If “smarter” devices need 2.4 GHz they’ll still work, just not as fast as on 5 GHz, and they won’t be there long (see the preferred-frequency point below). IoT/M2M clients are often incredibly dumb and very often have never had a FW upgrade, and never will. The dumber they are, the more they will like 20 MHz channels, and their reliability is a key consideration.
Use 80 MHz on 5 GHz. I’ve tested both 40 and 80 and don’t see much difference in the present RF-rich environment, although I am very well aware of the arguments for 40 MHz channel width – which @ChristopherSpitler sets forth succinctly.
The “Preferred frequency” is set to 5GHz. (This helps “higher-end” clients communicate faster; see point 2, above.)
We’ve tested (here and at a few other locations) the “Assisted Roaming” feature in Peplink’s recent FW. In the present environment, “Aggressive” seems to work best; “Moderate” works OK; “Off” is not so good. (We, and others, raised issues with Peplink regarding clients “hanging on” to APs for far too long in a degraded RF environment. This relatively new parameter helps a lot.)
Minimum signal strength, and how the AP responds when a client’s signal is lower than desired, are very important. Here are the present settings:
We’ve found, through experimentation, that these work at this location. One thing to note: if a client on 5 GHz gets really weak we want to disconnect it rather than drag the bit rate on that band down, which is why the “Disconnect” box is checked. (A consolidated sketch of these settings follows below.)
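Pulling the above together, here is a minimal sketch of the per-AP profile as one structure. The field names and the numeric thresholds are placeholders of my own (the actual values are in the settings referenced above and in Peplink's UI), so treat this as a summary, not a config file:

```python
# Illustrative summary of the AP profile described above.
# Keys and the -75/-80 dBm thresholds are placeholders, not Peplink's actual
# configuration fields or the exact values used at this location.
ap_profile = {
    "2.4GHz": {
        "channel": "fixed, same on all three APs (not Auto)",
        "channel_width_mhz": 20,
        "min_signal_dbm": -80,             # threshold is set...
        "disconnect_weak_clients": False,  # ...but IoT/M2M clients are never kicked off
    },
    "5GHz": {
        "channel": "fixed, same on all three APs (not Auto)",
        "channel_width_mhz": 80,
        "min_signal_dbm": -75,             # placeholder value
        "disconnect_weak_clients": True,   # weak clients fall back to 2.4 GHz
    },
    "preferred_frequency": "5GHz",
    "assisted_roaming": "Aggressive",
}
```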
I’m sure others have their own thoughts on this but I thought I might share a solution that has worked in the present “RF-rich” environment. Every location is different – some dramatically so. (Our solution for a 200,000 sq ft warehouse is nothing like what I have set out above.)
Regarding this one: is there any reason you set a value for the 2.4GHz threshold, but did not check the Disconnect Clients checkbox?
My understanding is that if you don’t check the checkbox, nothing happens?
This would make sense to me, as we are basically saying “Allow a device with a good 5 GHz signal, but if it’s a weak 5 GHz signal, kick them off and let them use the 2.4 GHz signal, no matter how weak.”
Hi @soylentgreen. Good observation. On this network the principal use of 2.4 GHz is for IoT/M2M devices, and we’ve found it best not to kick them off. Better that 2.4 GHz run slowly (which is what happens when signal strengths drop) than to lose contact with the (usually very dumb) clients. For 5 GHz, we do want to kick them off so as not to drag those channels down into the mud; a client can always use 2.4 GHz until 5 GHz “works” for it again. So, in the present instance the decision not to dump a weak client from 2.4 GHz was intentional.
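In other words, the policy boils down to something like the sketch below (the -75 dBm threshold and the helper function are illustrative only; the real behaviour is whatever the AP firmware does with the two settings above):

```python
# Minimal sketch of the per-band weak-signal policy described above.
# The -75 dBm threshold and this helper are illustrative, not Peplink's implementation.
def handle_weak_client(band: str, rssi_dbm: int) -> str:
    if band == "5GHz" and rssi_dbm < -75:
        # Kick the client so it doesn't drag the 5 GHz band down; it can fall back to 2.4 GHz.
        return "disconnect"
    # On 2.4 GHz the IoT/M2M clients stay connected and simply run slower.
    return "keep"

print(handle_weak_client("5GHz", -82))    # -> disconnect
print(handle_weak_client("2.4GHz", -88))  # -> keep
```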
(Sorry for the late reply – I’ve spent most of the last three days in the air, it seems.)