IT organisations have invested millions of pounds in implementing fault management tools and processes to maximise network availability. However, while availability management is critical, infrastructure reliability has improved to the point at which 99.9% availability is not uncommon. At the same time, network traffic is growing in both volume and complexity, creating performance issues. This is why real network and application improvements require focusing on performance, not just availability.
Below are eight rules that will help your organisation take a performance-first approach to network management. This approach will not only help you understand how performance is affected by infrastructure and application changes, but also enable you to manage your network for application performance, which, after all, is the most important thing.
Rule 1: If you can’t measure it, you can’t manage it
Network and application performance issues are growing dramatically due to data centre consolidation, the rise of multimedia traffic, increasing numbers of remote users, and other trends. As a result, the responsibility for application delivery is increasingly falling on the shoulders of network professionals. Measuring infrastructure availability and utilisation alone is no longer enough to understand network health and make informed management decisions.
Today’s network professionals must shift their focus from fault management - which is largely under control - to performance-based management in order to deliver better services and make themselves more relevant to the business units they serve. It is crucial for organisations to implement application service level agreements (SLAs) with baselines to measure against, providing a quantifiable goal to work towards and a way to measure progress. If you aren’t measuring performance metrics, you are managing to availability rather than performance, and in today’s IT environment, that’s not enough.
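To make the SLA idea concrete, here is a minimal sketch of a baseline comparison. The baseline value, the 1.5x degradation threshold and the sample figures are all illustrative assumptions, not values from the article or any specific tool.

```python
from statistics import quantiles

# Hypothetical SLA: the 95th-percentile response time for an application
# must stay within 1.5x of its measured baseline (assumed figures).
BASELINE_P95_S = 2.0      # baseline 95th percentile, seconds
SLA_MULTIPLIER = 1.5      # allowed degradation before the SLA is breached

def p95(samples):
    """95th percentile of a list of response-time samples (seconds)."""
    return quantiles(samples, n=100)[94]

def sla_status(samples):
    """Compare current measurements against the baseline-derived target."""
    current = p95(samples)
    target = BASELINE_P95_S * SLA_MULTIPLIER
    return {
        "p95_s": round(current, 2),
        "target_s": target,
        "within_sla": current <= target,
    }

# Example: response times collected over one monitoring interval
today = [1.1, 1.3, 1.2, 4.8, 1.4, 1.2, 1.5, 1.3, 1.1, 1.2]
print(sla_status(today))
```

Without the baseline, a 95th-percentile figure on its own tells you nothing; with it, the same number becomes a pass/fail answer you can report to the business.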
Rule 2: Performance is relative
The best way to understand the notion that all performance is relative is to ask someone who uses a networked system or application: “Is a three-second application response time good or bad?” The answer is, it depends. If the normal response time is ten seconds, a three-second response time is very good. But if the normal response time is one second or less, three seconds is not very good at all. For the same measurement, different circumstances lead directly to different interpretations.
Perceived performance is usually based either on previous experience – 'it took 15 seconds to download this page yesterday' – or on users' changing expectations. Employees nowadays expect their SAP or customer relationship management (CRM) system to perform as fast as eBay's website.
What users care most about are large variations in performance. Therefore, what should concern you most from a performance management perspective is finding and addressing the places in your network where there are large variations in performance.
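One simple way to find those places is to rank sites or links by the relative spread of their response times. This sketch uses the coefficient of variation (standard deviation divided by mean) on hypothetical per-site samples; the site names and figures are made up for illustration.

```python
from statistics import mean, stdev

def coefficient_of_variation(samples):
    """Relative spread of response times: stdev / mean.
    A high value means users see inconsistent performance."""
    return stdev(samples) / mean(samples)

# Hypothetical per-site response-time samples (seconds)
sites = {
    "london":    [1.0, 1.1, 0.9, 1.0, 1.1],
    "edinburgh": [1.0, 3.5, 0.8, 4.2, 1.1],
}

# Rank sites by variability, worst first
ranked = sorted(sites, key=lambda s: coefficient_of_variation(sites[s]),
                reverse=True)
print(ranked[0])  # the site whose users see the most erratic performance
```

Note that both sites could have acceptable average response times; it is the erratic site that will generate the complaints.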
Rule 3: Link utilisation is insufficient
Utilisation is not an effective metric to assess performance.
The best indication of how applications are performing for the end user is to measure response times by monitoring real traffic. High utilisation is only a problem if it actually impacts application performance. Response time measurements, not utilisation, should be the foundation for effective network performance monitoring.
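As a rough illustration of what "measuring response times from real traffic" means, the sketch below pairs passively captured request and response events into per-transaction elapsed times, as a probe or flow collector might. The event format and field names are assumptions for the example, not the output of any particular product.

```python
# Minimal sketch: derive application response times from passively
# captured (timestamp, direction, transaction_id) events.

def response_times(events):
    """Match each request to its response; return elapsed seconds."""
    pending = {}   # transaction_id -> request timestamp
    elapsed = []
    for ts, direction, txn_id in sorted(events):
        if direction == "request":
            pending[txn_id] = ts
        elif direction == "response" and txn_id in pending:
            elapsed.append(round(ts - pending.pop(txn_id), 3))
    return elapsed

captured = [
    (10.00, "request", "a"),
    (10.40, "response", "a"),
    (11.00, "request", "b"),
    (14.20, "response", "b"),   # a slow transaction users would notice
]
print(response_times(captured))  # [0.4, 3.2]
```

The second transaction would be invisible to a utilisation graph, yet it is exactly the kind of event end users experience as "the application is slow".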
Rule 4: Bandwidth doesn’t solve all your problems
Increasing bandwidth is not a panacea for performance problems. Make sure you understand the cause of a problem before taking corrective action such as throwing bandwidth at it. Delay, for example, could be caused by the server, the application or even the transit path. The ability to measure the right performance metrics is key.
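The point can be sketched as a simple delay decomposition: break end-to-end delay into its components and see which one actually dominates before spending money. The component names and millisecond figures below are illustrative assumptions, not real measurements.

```python
# Minimal sketch of delay decomposition: before buying bandwidth, check
# which component of end-to-end delay actually dominates.

delay_components_ms = {
    "dns_lookup": 20,
    "tcp_connect": 35,          # network round-trip cost
    "server_processing": 1800,  # time the server took to respond
    "data_transfer": 120,       # the only part more bandwidth shrinks
}

def dominant_component(components):
    """Return the component contributing the most delay."""
    return max(components, key=components.get)

bottleneck = dominant_component(delay_components_ms)
print(bottleneck)  # server_processing
```

In this hypothetical case a bigger pipe would shave at most 120ms off a nearly two-second transaction; the fix lies with the server, not the network.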