
I love to spend time benchmarking and otherwise testing wireless LANs. This, as it turns out, is much more difficult than testing wired LAN equipment.

The reason for this is that signals sent over a wire tend to stay on the wire and are mostly immune to external interference, while radio signals in the real world (also known as "free space") bounce around a lot. They're subject to all kinds of interference and various forms of fading, among many other artifacts of radio communications. It's also impossible to predict whether, under any given set of circumstances, a radio signal will in fact propagate from Point A to Point B. This makes benchmarking in an absolute sense quite difficult, as I previously discussed in columns about benchmarking and about dealing with interference.

Despite the difficulties, we can do comparative studies if we can maintain very similar radio environments between benchmark runs, including close-to-identical distances between access points (APs) and clients, identical numbers of both, and so on. Such efforts can, in fact, yield very good results that can help you decide which vendor or product to select.

Scaling up the tests costs money

But such real-world testing is typically practical only with small numbers of residential-class APs and clients, which brings up today's subject: how to test meaningful configurations of enterprise-class wireless LANs. This is more than an academic question - over the next few years, thousands of corporations will decide which large-scale WLAN to deploy across their potentially large facilities and campuses. Making the wrong choice or picking an unverified product could have disastrous consequences. It's important to show that a given product's performance will scale as the WLAN becomes very large.

But testing more than a couple of enterprise-class APs and clients can get tricky. First, you need a facility large enough for a realistic scenario, and installing each AP often requires running a lot of wire. Switches and controllers must be configured, and appropriate benchmarking suites must be obtained and similarly configured. Then you take a few runs on one system and repeat the process on the others, all the while monitoring for interference and anomalous results. Most companies don't have a sufficient budget - in dollars or time - for this type of evaluation.
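To give a sense of the bookkeeping involved, here's a minimal sketch of how the run-and-repeat portion of such an evaluation might be automated. It assumes the common iperf3 traffic tool is installed and a traffic endpoint is reachable behind each AP under test; the host addresses, run counts, and durations below are illustrative placeholders, not details from any actual test plan.

import json
import statistics
import subprocess

# One traffic endpoint behind each vendor's AP under test.
# These addresses are placeholders - substitute your own endpoints.
SYSTEMS = {
    "vendor_a": "192.0.2.10",
    "vendor_b": "192.0.2.20",
}
RUNS_PER_SYSTEM = 5   # repeated runs help smooth out anomalies
RUN_SECONDS = 30      # duration of each iperf3 run

def run_iperf(server: str) -> float:
    """Run one iperf3 TCP test and return throughput in Mbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-t", str(RUN_SECONDS), "-J"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    return result["end"]["sum_received"]["bits_per_second"] / 1e6

for name, server in SYSTEMS.items():
    samples = [run_iperf(server) for _ in range(RUNS_PER_SYSTEM)]
    mean = statistics.mean(samples)
    spread = statistics.stdev(samples)
    # A large spread relative to the mean hints at interference or
    # other anomalies worth chasing down before comparing vendors.
    print(f"{name}: {mean:.1f} Mbit/s mean, {spread:.1f} stdev over {RUNS_PER_SYSTEM} runs")

Even a simple harness like this has to be rewired and reconfigured for every system under test, which is exactly the cost the virtual approach below aims to avoid.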

Virtual benchmarks could save time

These problems led me to explore another approach for large-scale testing, which I call virtual benchmarking. Imagine being able to use a piece of specialized test equipment for benchmarking and, in the bargain, creating a 100 percent repeatable environment, thus leveling the playing field in a way that isn't possible in the real world. You set it up, press a button and run tests that predict real-world performance.

To verify the viability of this approach, I recently spent a weekend testing different WLAN configurations using equipment from VeriWave. VeriWave's products are normally used by product design engineers, but we decided to see if - and how - they could be valuable to end users. We also decided to compare the results we obtained in the synthetic environment against those from the real world. For the latter, we used a Faraday cage, which provides isolation from external sources of interference. The bottom line was interesting: We found an excellent correlation between results from the virtual benchmarks and the Faraday cage. I'm encouraged enough, based on this experience, to try much larger configurations.
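For readers wondering how such a comparison can be quantified, one common approach is to compute a correlation coefficient between the two sets of results. Here's a brief sketch of the idea; the throughput figures are made-up placeholders for illustration, not the numbers from our tests.

# Hypothetical throughput results (Mbit/s) for the same set of test
# cases, measured once on the virtual-benchmark rig and once in the
# Faraday cage. These values are illustrative placeholders only.
virtual = [22.1, 19.8, 14.5, 9.7, 5.2]
cage    = [21.6, 20.3, 13.9, 10.1, 5.6]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A coefficient near 1.0 means the virtual benchmark ranks and scales
# the test cases much as the isolated real-world environment does.
print(f"correlation: {pearson(virtual, cage):.3f}")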

You can read more on this subject and the specific results in this Technical Note (download PDF). And, if I do get the opportunity to try a larger configuration, I'll let you know in this space. In the meantime, keep your eye on the virtual benchmark concept - it may be exactly what you need when making that big purchasing decision.

Craig J. Mathias is a principal at Farpoint Group, an advisory firm specializing in wireless networking and mobile computing. This article appeared in Computerworld.