All objects that are important to the performance of a server are represented in the
model that describes the system.
Once the model has been built, different scenarios can be evaluated by changing
transaction intensity or moving workloads between different models. The predicted
response time or throughput of the transactions tells you whether the scenario was
successful or not. It's possible to focus on the ratio of time spent queuing to time
spent working in the system, which removes the need for explicit thresholds for
every transaction type. A simple rule of thumb about that
relationship will give you a good idea about the state of the configuration scenario.
This type of analytical modeling offers a predictive, rapid and repeatable process for
optimizing mixed workload environments. With analytic modeling the quality of the
result is less dependent on the individuals executing it. It also avoids the common
mistake of assuming that performance will degrade linearly as workloads are stacked.
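To make that nonlinearity concrete, the following sketch applies the simplest analytic
queueing model, a single-resource M/M/1 queue, to a what-if scenario. The 25 ms service
time, the arrival rates, and the queue-to-service ratio threshold of 2 are illustrative
assumptions, not figures from this paper or from any particular tool.

    # Minimal sketch: predict response time with an M/M/1 queueing model
    # as transaction intensity grows. All numbers are illustrative.

    def predict(service_time_s, arrival_rate_tps):
        """Return (utilization, response time, queue/service ratio)."""
        utilization = arrival_rate_tps * service_time_s
        if utilization >= 1.0:
            raise ValueError("resource saturated; no steady state exists")
        response_time = service_time_s / (1.0 - utilization)  # M/M/1 result
        queue_time = response_time - service_time_s
        return utilization, response_time, queue_time / service_time_s

    # Evaluate a "what if load grows?" scenario for one transaction type.
    for tps in (10, 20, 30, 38):
        util, resp, ratio = predict(service_time_s=0.025, arrival_rate_tps=tps)
        # The queue/service ratio acts as the rule of thumb: no explicit
        # response-time threshold per transaction type is needed.
        state = "OK" if ratio < 2.0 else "DEGRADED"
        print(f"{tps:3d} tps: util={util:.0%} resp={resp * 1000:5.1f} ms "
              f"queue/service={ratio:4.1f} -> {state}")

Doubling the load from 10 to 20 tps adds under 20 milliseconds to the predicted
response time, but at 38 tps (95% utilization) the prediction is half a second, twenty
times the service time. This is exactly why assuming linear degradation as workloads
are stacked is a mistake.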
Synthetic Load Testing
The goal here is to produce synthetic transactions that mimic real-life scenarios as
closely as possible. To get this right you need to closely examine the operating
environment to find the right mix of transactions and their concurrency, develop
repeatable test cases based on those transactions, and run lengthy performance tests
on equipment identical to the production environment (which might force you to invest
in a parallel test environment). You also need to define success criteria in terms of
response times for each transaction type, as in the sketch below.
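As a rough illustration of what such a test harness involves, here is a minimal sketch
that drives a weighted transaction mix concurrently and checks each transaction type's
95th-percentile response time against its success criterion. The transaction names, mix
weights, thresholds, and the sleep call standing in for the real service are all
hypothetical placeholders.

    # Minimal sketch of a synthetic load test. Transaction names, mix
    # weights, and response-time criteria are hypothetical placeholders.
    import concurrent.futures
    import random
    import statistics
    import time

    MIX = {"login": 0.1, "search": 0.6, "checkout": 0.3}         # share of load
    CRITERIA_S = {"login": 0.2, "search": 0.5, "checkout": 1.0}  # max p95, seconds

    def run_transaction(name):
        """Issue one transaction and time it; sleep stands in for the real call."""
        start = time.perf_counter()
        time.sleep(random.uniform(0.01, 0.3))
        return name, time.perf_counter() - start

    def load_test(total_txns=300, concurrency=20):
        # Draw a transaction sequence that matches the observed mix.
        plan = random.choices(list(MIX), weights=list(MIX.values()), k=total_txns)
        timings = {name: [] for name in MIX}
        with concurrent.futures.ThreadPoolExecutor(concurrency) as pool:
            for name, elapsed in pool.map(run_transaction, plan):
                timings[name].append(elapsed)
        for name, samples in timings.items():
            p95 = statistics.quantiles(samples, n=20)[18]        # 95th percentile
            verdict = "PASS" if p95 <= CRITERIA_S[name] else "FAIL"
            print(f"{name:9s} p95={p95 * 1000:6.1f} ms "
                  f"limit={CRITERIA_S[name] * 1000:6.0f} ms {verdict}")

    load_test()

A real harness would replace the sleep with calls against production-identical
equipment and replay the mix for hours rather than seconds, which is where the cost
discussed next comes from.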
Load testing offers a high level of accuracy if done right, but in most cases the cost
and long test cycles can't be justified. It may be better suited to once-in-a-lifetime
quality assurance activities prior to going live with a critical service than to
recurring IT Service Optimization exercises.
So which of the above methods should you pick? Most important of all is to at least have a
strategy and not just rely on the reactive mechanisms of the framework to take care of capacity
management. Once that has been established, the optimal choice is often a mix of different
methods. You probably don't want to spend too much time analyzing less important utility
applications, just as you can't afford the risk of a quick and oversimplified analysis of a
business-critical service. It's important to have a comprehensive toolbox that lets you choose
the right method for different circumstances.
Commodity server virtualization vendors would like you to believe that the reactive performance
management technologies built into their platforms are all you need to attain optimal performance.
The truth is, reactive technologies such as dynamic resource scheduling and migration are
helpful, but they are not a complete solution and they do not necessarily make IT Service
Optimization easier. The added complexity from virtualized environments actually makes it
more difficult to be sure that you are getting everything you can from your systems.