Keeping wireless networks running smoothly can be a challenge, especially when you’re working with unlicensed frequencies and supporting hundreds or thousands of wireless connections. One tactic WISPs and other wireless network operators use is to change frequencies on noise-affected devices, and a frequency change may very well smooth out a problematic connection. However, there is always the potential for unintended negative consequences, and that is what this post is about: identifying problems that arise from frequency changes.
Now, this post covers a very specific situation, and a narrow subject overall, but the general idea behind it can be profoundly powerful for WISPs, or for anybody who works with multiple wireless access points within a small area.
Network monitoring – a must for WISPs
Without monitoring, you’re really operating your network in the dark. I’m not just referring to infrastructure monitoring, but individual client monitoring as well. The company I work for didn’t have any monitoring aside from uptime monitors on critical devices up until about 6 months ago. The vantage point we have over the network now is beyond a night and day difference: it’s like going from a single candle trying to brighten a football stadium to every stadium light shining at 400,000 lumens. It’s that significant.
We use LibreNMS for the time being, but Observium is similar, and there are many other network monitoring options out there. The kind of after-the-fact frequency-change analysis this post walks through isn’t possible without detailed monitoring of client and/or infrastructure devices.
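As a rough illustration of putting that monitoring to work, here’s how you might pull device status out of LibreNMS programmatically. This is a minimal sketch, not a complete tool: the base URL and token are placeholders, and it assumes the standard LibreNMS REST API (`/api/v0/devices`, authenticated with an `X-Auth-Token` header).

```python
import json
import urllib.request

# Placeholders -- substitute your own LibreNMS host and API token.
LIBRENMS_URL = "http://librenms.example.com/api/v0"
API_TOKEN = "YOUR_API_TOKEN"

def fetch_devices(base_url: str, token: str) -> dict:
    """Fetch the device list from a LibreNMS instance via its REST API."""
    req = urllib.request.Request(
        f"{base_url}/devices",
        headers={"X-Auth-Token": token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def down_devices(payload: dict) -> list:
    """Return hostnames of devices the API reports as down (status == 0)."""
    return [d["hostname"] for d in payload.get("devices", [])
            if d.get("status") == 0]
```

From there it’s a small step to feeding `down_devices(fetch_devices(LIBRENMS_URL, API_TOKEN))` into an alert script or a dashboard.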
Problem: slow speeds with multiple clients
We switched some clients from old hardware (Trango 900s) over to Mikrotik 900s. Once we switched them out, I ran tests between the client radios and the APs and saw sub-par speeds. In my experience, if the Mikrotik 900s stay connected, they sustain roughly half or better of the possible speed for the channel width. At 10MHz, though, several clients were testing at 1.5Mbps-4Mbps instead of the expected 7Mbps-16Mbps, and one client was dropping regularly or getting no throughput despite a -62 TX/RX signal, though its CCQ was somewhat unstable.
The day after the switchout, I changed the frequency on a nearby AP to see if it would make a difference for the speed issues and the dropping client. The change took place around 10:30am, and you can see the difference it made to this customer’s connection:
From that time until around 5pm, when some other changes happened, latency to the radio went from around 150ms to 25ms. You can also see a change in another customer radio’s stats, though it’s less significant:
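This kind of before/after comparison is easy to script once your monitor exposes historical samples. A minimal sketch, assuming you can export (timestamp, latency) pairs from your monitor; the sample data below is illustrative, loosely mirroring the numbers above:

```python
from statistics import mean

def latency_shift(samples, change_ts):
    """Split (timestamp, latency_ms) samples around a change time and
    return the average latency before and after it."""
    before = [ms for ts, ms in samples if ts < change_ts]
    after = [ms for ts, ms in samples if ts >= change_ts]
    if not before or not after:
        raise ValueError("need samples on both sides of the change")
    return mean(before), mean(after)

# Illustrative samples around a change made at t=1030:
samples = [(900, 150.0), (1000, 148.0), (1100, 26.0), (1200, 24.0)]
```

Running `latency_shift(samples, 1030)` on data like this makes the improvement (or regression) an unambiguous pair of numbers rather than an eyeballed graph.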
As for the effects on the AP where we changed the frequency, we can pull up the monitor for that AP and see which stats changed when the frequency did. Which stats are available to you depends on the equipment you’re running, the equipment’s firmware version, and what information your monitor collects. This is from a Ubiquiti 900MHz running 5.6 firmware:
You actually have a lot of different metrics to look at. Here’s a different view, and on the right-hand side you can see the big drop in the RX rate (on the 23rd) when we turned off the Trango AP and turned on the Mikrotik AP. There was, however, a loss of RSSI when we changed the frequency on this AP.
For us, this loss wasn’t a big deal because:
- None of the clients connected to the AP where we changed the frequency looked worse off, and we saw clients getting their advertised speeds
- We’re taking this AP down soon to merge all clients to a single 900 AP on the tower with an omni
You can easily extend this analysis to other towers within frequency range, and to the clients on them. In cases where the impact is significant enough, you’ll see things like increased latency to client radios and drops in signal, TX/RX rates, or CCQ.
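A quick way to spot-check those other towers right after a change is to sweep your client radios with ping and flag anything with elevated latency. A rough sketch, assuming a Linux/BSD `ping` that accepts the `-c` count flag; the 100ms threshold is arbitrary and worth tuning to your network:

```python
import re
import subprocess

def parse_ping_times(ping_output: str) -> list:
    """Pull per-packet round-trip times (ms) out of ping's stdout."""
    return [float(m) for m in re.findall(r"time=([\d.]+)\s*ms", ping_output)]

def radio_ok(host: str, threshold_ms: float = 100.0) -> bool:
    """Ping a radio a few times; True if the average RTT is under threshold."""
    out = subprocess.run(
        ["ping", "-c", "4", host],  # -c: packet count on Linux/BSD ping
        capture_output=True, text=True,
    ).stdout
    times = parse_ping_times(out)
    return bool(times) and sum(times) / len(times) < threshold_ms
```

Looping `radio_ok()` over a list of client radio IPs before and after a frequency change gives you a crude but fast canary for the cross-tower effects described above.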
Easier post-frequency-change monitoring
Personally, I don’t like changing frequencies on P2MP (point-to-multipoint) connections, or on backhauls, for that matter. It’s nothing but a cat-and-mouse game if you legitimately treat changing frequencies as a go-to ‘fix’ for problematic connections. Yes, sometimes it needs to be done, and in this case, when you’re putting up a new AP, you need to test the real-world results of the available frequencies. From my viewpoint, signal levels are almost meaningless without the context of the speeds those signal levels can actually sustain in that area. That’s a subject for another day, though.
When you change a frequency on equipment, it’s a very good idea to keep an eye on other devices or connections in your network that the change may affect. With 900MHz, for example, you may make a change on Tower A and everything looks good, but you’ve inadvertently wiped out a connection on Tower C some 10 miles away. This is a real possibility.
What is also possible is that a frequency change causes problems that are not immediately perceptible. For example, you make a change to an AP on Tower A and knock down the maximum throughput of a different AP on a different tower. Or worse, you’re running a 5GHz P2MP AP and knock down the stats on a nearby backhaul. This can and does happen, and it’s not something you’d likely see immediately, even if you take screenshots of surrounding AP stats before making frequency changes.
This is part of the reason why monitoring your network, especially with the ability to look at historical data, is so important. Monitoring gives you the ability to see the long-term effects of changes that you make within your network. Plus, it gives you a snapshot in time, at each and every monitor interval, to see the immediate effects of changes that you probably couldn’t pick out on your own.
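To make those per-interval snapshots actionable, you can diff a device’s stats from before and after a change and flag anything that moved significantly. A minimal sketch; the metric names and the 20% threshold are arbitrary choices for illustration:

```python
def changed_metrics(before: dict, after: dict, pct_threshold: float = 20.0) -> dict:
    """Compare two metric snapshots (name -> value) for the same device and
    return the percent change for metrics that moved more than the threshold."""
    flagged = {}
    for name, old in before.items():
        new = after.get(name)
        if new is None or old == 0:
            continue  # skip metrics missing from the second snapshot
        pct = (new - old) / abs(old) * 100.0
        if abs(pct) >= pct_threshold:
            flagged[name] = round(pct, 1)
    return flagged
```

For example, an RX rate falling from 16Mbps to 7Mbps is a drop of more than 50% and would be flagged, while a 3dB RSSI shift around -62dBm is under 5% and would not — which roughly matches what you’d want a human to notice first.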