Performance-Driven Software Development
How IT Shops Can More Efficiently Meet Performance Requirements
by Carey Schwaber, with Christopher Mines and Lindsey Hogan
Best Practices | February 28, 2006

EXECUTIVE SUMMARY
When software automates business processes, software performance is the limiting factor for business performance. A slow order processing engine necessarily means slowly processed orders. Even though software performance matters to the business, it's low on the priority list for most application development organizations. To meet business needs, and to improve the efficiency of their own operations, development shops must mature their performance practices from pure firefighting to performance-driven development.

TABLE OF CONTENTS
Software Performance Matters To The Business
Performance Verification Is Just The First Step
  Business And IT Partner To Set Performance Requirements
  Get More Bang For Your Performance Testing Buck
Performance-Driven Development Is The Ultimate Destination
  Design Maps Out A Strategy For Development
  Development Anticipates Test And Monitoring
Three Ways To Move To Performance-Driven Development
  Finding Performance A Home On The Org Chart
Performance-Driven Development Spreads As IT Shops Become Service-Oriented
Supplemental Material

NOTES & RESOURCES
Forrester interviewed 13 vendor and 11 user companies, including: AppLabs Technologies, AVIcode, Compuware, Hewlett-Packard, HyPerformix, IBM, Mercury Interactive, OPNET Technologies, QA Labs, and Segue Software.

Related Research Documents
"Software Quality Is Everybody's Business" (May 16, 2005, Best Practices)
"Applying Process Control Principles To Application Performance Management" (July 12, 2005, Trends)
"Performance Management And The Application Life Cycle" (February 11, 2005, Best Practices)

SOFTWARE PERFORMANCE MATTERS TO THE BUSINESS
Software automates mission-critical business processes, from customer service to order fulfillment to employee onboarding. The speed at which such processes are performed is directly tied to the performance of the software that automates them. This performance has three dimensions: how fast it functions (sub-second versus multisecond response time); how often it's available (99% uptime versus scheduled weekly downtime); and how these two factors change as usage increases (from dozens of concurrent users to thousands). Software performance is thus a limiting factor on business performance. A financial services company can't provide a life insurance quote to a customer any faster than its quote engine can compute the appropriate rate.
And an automotive supplier can't start building parts if the software that delivers orders from its OEM customers has been taken down by unanticipated usage levels.

But many software development organizations still treat performance like an afterthought, paying it too little attention too late in the life cycle. Our interviews confirmed what we hear in our ongoing conversations with Forrester clients: IT shops habitually pay insufficient attention to performance during development, either ignoring performance altogether or simply measuring it and conducting perfunctory tuning before deployment. The results of this neglect can be grim:

- Slow in-store apps earn a retail CIO the ire of other executives. Performance problems in live applications disrupt the business, damage productivity, and result in lost revenues, ultimately damaging the credibility of the entire IT organization. When complaints about the performance of a $50 billion retailer's in-store POS apps flooded headquarters, management laid them in the CIO's lap. The pressure to resolve the problem and speed up store operations was ferocious, and the CIO ultimately hired a systems integrator to implement a cradle-to-grave performance management program.

- A health insurance company wastes millions of dollars every year. A $20 billion US health insurance company calculated that its inattention to performance prior to deployment (notably in design, where more than half of the problems originated) was resulting in several millions of dollars in avoidable costs every year. Resolving performance problems before deployment is more cost-efficient by at least one order of magnitude, and ignoring performance in the short term only racks up long-term costs that IT organizations can ill afford.

- A global pharma company purchases more hardware than it needs. When performance problems crop up after deployment, IT shops have few options other than to pile on hardware. A global pharmaceutical company that doesn't conduct adequate performance testing is unable to predict how an app will perform in production and consistently purchases several times the amount of necessary hardware. Cost-conscious IT management is now demanding an explanation for low server utilization levels. Poor predeployment performance practices thus lead to overspending on hardware. Hardware is cheap, but not as cheap as avoiding performance problems in the first place.[1]

- A telecom provider spends millions on app support instead of new app development. At one telecom company, each 15-second timeout in its enterprise application integration (EAI) infrastructure resulted in a $4 call to an outsourced contact center; over the course of six months, this resulted in unanticipated support costs of almost $3 million, funds that would otherwise have been used for new development efforts. Enterprise IT organizations struggle to drive down maintenance costs and fund new projects; Forrester's data indicates that the average IT organization spends 75% of its software budget on ongoing operations and maintenance.[2] IT shops stuck in firefighting mode dedicate a larger portion of their budgets to maintenance than is necessary, diverting resources from efforts to deliver new value to the business.

To gain control over the performance of the software they build, leading IT organizations move first to performance verification and then to performance-driven development (see Figure 1).
PERFORMANCE VERIFICATION IS JUST THE FIRST STEP
After enough costly and embarrassing performance problems in production applications, IT organizations begin to assess software performance before deployment. To do so, they must implement strong performance requirements and performance testing practices.

[Figure 1: Performance-Driven Development Shifts Performance Activities Earlier In The Life Cycle. The figure plots the maturity of app dev performance practices (firefighting, performance verification, performance-driven development) against the stage of the software development life cycle at which performance gets attention (requirements, design, development, testing, production). Source: Forrester Research, Inc.]

Business And IT Partner To Set Performance Requirements
While many firms working to improve software performance begin with performance testing, requirements are really the place to start. Without performance requirements, testers can determine the absolute performance of an application, but they can't determine whether or not that performance is sufficient to meet business needs. To guide performance testing, and to provide the development organization with performance goals, IT organizations and their business customers should:

- Look first at high-level business needs. The first hurdle in defining performance requirements is determining business needs. For example, business stakeholders at an office supply retailer require that new functionality on their eCommerce site never degrade page load time by more than 10%. Why? So that users don't notice a sudden difference in site speed. Current usage levels should guide but not dictate performance requirements for existing apps, and projected usage must suffice for new apps. To flesh out high-level requirements, IT shops can use scenario-based analysis to determine how an app should perform in circumstances like increased usage and different usage patterns. A health insurance company might set response time, throughput, and resource utilization objectives that a claim adjudication app must be able to meet in the event that a new pharmacy chain is brought onboard.

- Set expectations by talking tradeoffs and chargebacks. Business stakeholders often have unrealistic expectations for performance, asking for piles of new features with unchanged performance or requesting sub-second response time because it sounds like a good thing. When the business understands the price tag for incremental software performance, requirements end up better reflecting real business needs. A UK life insurance company has found that building service levels into chargebacks results in more accurate performance requirements. And the Pitney Bowes IT organization is able to help business stakeholders understand necessary tradeoffs by asking questions like, "Are you willing to pay an additional $10,000 for a third failover site?"

- Remember that the target operating environment itself constitutes a set of requirements. The sooner environmental constraints make their way into the development process, the better. IT operations groups commonly complain that development might have optimized performance, but that they have not optimized it for the real-world production environment. Failure to fully communicate the nature of the production environment is one of the most common sources of performance problems.
A US grocery chain reports that nearly all of its performance problems are ultimately traced back to an inadequate understanding of the production environment, and a pharma company indicates that this is one of the biggest failings of its outsourced application development efforts.

Get More Bang For Your Performance Testing Buck
Concrete performance requirements improve the relevance of performance testing. But even with good requirements in hand, performance testing is still difficult to do well, and it is expensive to do at all. To increase the efficiency and effectiveness of their performance testing efforts, IT organizations should:

- Cut the cost of replication by centralizing the performance test lab. Without a production-like test environment, performance testing is merely an academic exercise. The cost of maintaining a replica of the production environment typically runs into the millions of dollars. One $30 billion-plus pharma company went so far as to spend $11 million on in-house environments for testing and staging $38 million worth of outsourced application development projects. These costs are prohibitive for a single-project team, but the investment is worthwhile when shared across multiple applications, particularly when virtualization is employed. CSFB took a different approach: The firm saved several millions of dollars by using HyPerformix technology to model its production environment and simulate app performance instead of investing in a physical test lab.

- Improve and share performance testing services with a test center of excellence. After centralizing the test lab, the next step is to centralize all performance testing efforts, first for mission-critical applications, but ultimately for all applications. IT shops with centralized testing get the most out of scarce resources like testing expertise and testing tools. Experienced performance testers command salaries starting at $75,000 and topping out at around $150,000, and the software necessary for testing a few thousand concurrent users costs several hundred thousand dollars.[3] Centralization also permits the implementation of common performance testing processes and raises the overall quality of test efforts. Centralized performance testing groups can guide or conduct performance testing, or evolve from the former to the latter as they grow.

- Offshore performance test script creation to keep staff focused on optimization. Because of its highly iterative nature (test, tune, and retest), performance testing isn't as amenable to offshoring as other forms of testing. But firms can still achieve cost savings by sending the creation and execution of performance test scripts offshore. Tata Consultancy Services used this model for a performance engineering engagement with a Fortune 100 retailer. Capacity planning, design for performance, and performance optimization and tuning were all performed in an on-site performance center of excellence in close cooperation with client personnel, but test scripts were built and run offshore.

- Enable data and asset sharing by using testing tools that integrate with other life-cycle tools. Shops that use integrated performance testing and performance monitoring tools can share test scripts between their testing and monitoring tools, thereby reducing the total cost of script creation (see Figure 2).
This kind of tool integration permits a bookstore chain to use the same test scripts to verify that an app will meet service-level agreements (SLAs) as it uses to monitor ongoing compliance in production. Compuware, Mercury Interactive, and Segue Software all support this type of integration. IBM and Microsoft take a different approach; their tools let developers perform diagnostics in their native development environments using results from performance testing and performance monitoring.

[Figure 2: Vendor Offerings For Performance-Driven Development. The figure maps vendors with products addressing each life-cycle stage (requirements, design, development, testing, production): AVIcode, Borland, Compuware, Empirix, HP, HyPerformix, IBM, Mercury, Microsoft, and OPNET. Starred vendors support performance-driven development activities within their design and development products (e.g., support for performance modeling, performance optimization, or instrumentation). Notes: Through its pending purchase of Segue Software, Borland gains performance testing and monitoring tools; Mercury OEMs HyPerformix technology for sale as part of Mercury Performance Center. Source: Forrester Research, Inc.]

PERFORMANCE-DRIVEN DEVELOPMENT IS THE ULTIMATE DESTINATION
By defining better performance requirements and testing against these requirements before deployment, IT shops can dramatically reduce the number of performance problems that occur in live apps. But the amount of time and money required for testing varies widely and is difficult to predict; trouble in performance testing is a common cause of missed release dates and budget overruns. Some outsourcers even avoid performance testing engagements because their indeterminate lengths lead to tense client relationships. It's just not safe to rely entirely on performance verification. It is better for development organizations to prevent performance problems from appearing in code in the first place by adopting concrete performance practices.

Design Maps Out A Strategy For Development
Forrester clients commonly report that the majority of their application performance problems are rooted in design decisions. Performance-aware development organizations architect application performance component by component and then do the math to ensure that aggregate performance levels meet business requirements. A US health insurance company turned in this direction after discovering that architects, service providers, and database administrators (DBAs) weren't coordinating their component-level performance targets to ensure that they all added up.

In a performance-driven development organization, the design team maps out a strategy for meeting performance requirements by:

- Defining performance objectives for application components. Shortly after aggregate performance requirements are identified, mature IT shops convene design sessions to document performance objectives for low-level application components. These performance objectives serve as goalposts for development to build toward and test against. The developers at one retail company use these component-level performance objectives as pass/fail criteria for their early performance tests.
When development shops document performance objectives at the component level, it's also easier for them to diagnose and resolve any problems identified later on in performance testing and monitoring.

- Securing performance contracts for components maintained by other groups. It's a rare application that runs in isolation. Better definition of the performance of other IT services, like the network, Web services, or directory servers, leads to better prediction of the aggregate performance of the application. To this end, a major grocery chain has charged a dedicated resource (an "ITIL guy" who understands SLAs, OLAs, and UCs) with helping IT better define performance contracts for internal services.[4] The firm's Web architecture lead expects that having these kinds of givens to use during design sessions will enable more efficient design and, ultimately, cost savings.

- Modeling application performance to validate design decisions. Documenting performance targets for application components like databases and Web servers is just the start. With the necessary numbers in hand, architects must proceed to do the math (a worked sketch appears before Figure 3 below). Rules of thumb and spreadsheet-based models are the most commonly used techniques, but IT shops can also look to vendors like HyPerformix and OPNET Technologies for tools that guide this process, indicating what data is necessary and crunching the numbers to produce aggregate performance metrics. Until it began using HyPerformix to guide its performance-related design decisions, a financial services firm had to pull back and rework one-fifth of its applications. The firm estimates that using performance modeling tools during design has saved it $1.5 million, far more than its $300,000 investment.

Development Anticipates Test And Monitoring
When it comes to performance, the primary responsibility of the development organization is to execute against design. But performance-driven development also requires developers to ensure that they have done this successfully and to prepare for the eventuality that they haven't. By doing so, developers protect their own interests, as well as those of the IT organization as a whole, by reducing the amount of time they spend downstream resolving performance problems during test and production. To this end, developers can:

- Check their work by testing early, often, and automatically. Performance-driven development organizations compare the performance of application components to goals specified during design. By providing developers with easy access to the performance testing lab to encourage this practice, a $15 billion retailer finds that it experiences far fewer problems during final performance testing, mostly integration issues. Some development shops even execute performance tests automatically with every build. ThoughtWorks, a systems integrator, uses this technique to keep an eye on total software performance as development proceeds and to intervene when performance degrades unexpectedly.

- Minimize the time to problem resolution by instrumenting code for test and monitoring. Instrumentation indicates to those testing and monitoring performance where problems might occur, what problems might look like, and what data can be gathered to facilitate problem resolution (a JMX sketch appears before Figure 3 below).[5] A multichannel retailer expects that adding JMX instrumentation to its corporate applications will dramatically decrease the time it takes to resolve performance problems.
Driving down the time required to instrument software will do much to encourage the practice. To this end, HP and AVIcode products facilitate instrumentation in Eclipse and Microsoft Visual Studio 2005, respectively, and IBM will roll out new Eclipse plug-ins for instrumentation early in 2006.

THREE WAYS TO MOVE TO PERFORMANCE-DRIVEN DEVELOPMENT
The natural path to performance-driven development is an incremental one, with IT shops first defining and testing against performance requirements but ultimately looking to increase the efficiency of meeting those requirements (see Figure 3). IT organizations can use any of the following three techniques to implement performance verification or performance-driven development, even mixing and matching among them:

1. Getting management onboard for a top-down implementation. Sufficient management support can be enough to kick-start and sustain the journey toward performance-driven development. The CIO of a retail company, the chief architect of a health insurance company, and the VP of software quality at a retail company all initiated such efforts when pressure from the business became overwhelming. In other organizations, budgetary pressures inspire executives to insist on improved performance practices. This was the case for the telecom CIO saddled with almost $3 million in unanticipated support costs from timeouts in his EAI systems. When the cost of poor performance practices is less evident, ROI calculations that weigh the cost of adopting performance-related techniques against the projected cost avoidance can make the business case for performance-driven development clear (see Figure 4 and the cost sketch after this list).[6]

2. Laying down the law in the form of release acceptance criteria. In many enterprises, problems with software performance come to a head when IT operations refuses to deploy any more apps that haven't been certified as meeting performance requirements. The imposition of strict release criteria can get IT organizations to adopt performance verification techniques. In fact, when it's the production department that is insisting on better performance testing, it's often the production department that ends up with the responsibility for conducting it. And adoption of performance verification often leads to adoption of performance-driven development, as development organizations realize that they can't afford to put off performance until the end of the life cycle.

3. Setting up a shared service and letting demand dictate its growth. When the funding for a formal implementation of performance verification or performance-driven development isn't available, IT shops sometimes evolve toward a shared services model, with one group building expertise and charging other teams for services like design for performance, performance testing, or performance engineering. Demand for this shared service grows in proportion to its ability to remove performance problems early in the life cycle, reducing the time that its internal customers spend on defect repair and improving the satisfaction of its customers' own business customers.
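Here is the worked sketch referenced in the design section above. It illustrates the "do the math" step that spreadsheet-based models and tools like HyPerformix automate: sum component-level response-time budgets along one transaction path and check the total against the aggregate requirement. The component names and millisecond figures below are hypothetical, invented for illustration rather than drawn from any firm interviewed for this report.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Minimal sketch of spreadsheet-style performance modeling: sum the
 * response-time budgets of the components one transaction touches and
 * compare the total against the aggregate business requirement.
 * All component names and figures are hypothetical.
 */
public class PerformanceBudget {
    public static void main(String[] args) {
        // Component-level objectives agreed on in design sessions (ms).
        Map<String, Double> budgetMs = new LinkedHashMap<>();
        budgetMs.put("Web server (render quote page)", 150.0);
        budgetMs.put("Quote engine (rate calculation)", 400.0);
        budgetMs.put("Database (policyholder lookup)", 120.0);
        budgetMs.put("Network (two round trips)", 80.0);

        double aggregateRequirementMs = 1000.0; // sub-second response time

        double total = 0.0;
        for (Map.Entry<String, Double> entry : budgetMs.entrySet()) {
            System.out.printf("%-35s %6.0f ms%n", entry.getKey(), entry.getValue());
            total += entry.getValue();
        }
        System.out.printf("%-35s %6.0f ms (requirement: %.0f ms)%n",
                "Aggregate", total, aggregateRequirementMs);

        // The design "adds up" only if the budgets leave headroom.
        if (total > aggregateRequirementMs) {
            System.out.println("Design does NOT meet the aggregate requirement; rework the budgets.");
        } else {
            System.out.printf("Headroom: %.0f ms%n", aggregateRequirementMs - total);
        }
    }
}
```

However simple, this is the coordination the US health insurance company above was missing: each group's target can look reasonable in isolation while the sum breaks the aggregate requirement.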
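And here is the JMX sketch referenced in the instrumentation bullet above (endnote 5 defines JMX). It shows one minimal form such instrumentation can take, assuming a hypothetical OrderService component with invented metric names: a standard MBean exposes performance counters that test and monitoring tools can read at runtime.

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean interface: the attributes monitoring tools can read.
interface OrderServiceMBean {
    long getRequestCount();
    long getMaxResponseTimeMillis();
}

// Hypothetical instrumented component; names and metrics are illustrative only.
public class OrderService implements OrderServiceMBean {
    private final AtomicLong requestCount = new AtomicLong();
    private final AtomicLong maxResponseTimeMillis = new AtomicLong();

    public void processOrder() {
        long start = System.currentTimeMillis();
        // ... real order-processing work would happen here ...
        long elapsed = System.currentTimeMillis() - start;
        requestCount.incrementAndGet();
        maxResponseTimeMillis.accumulateAndGet(elapsed, Math::max);
    }

    public long getRequestCount() { return requestCount.get(); }
    public long getMaxResponseTimeMillis() { return maxResponseTimeMillis.get(); }

    public static void main(String[] args) throws Exception {
        // Register the component with the platform MBean server so that
        // performance testers and production monitors (e.g., a JMX console)
        // can observe its counters while the app runs.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        OrderService service = new OrderService();
        server.registerMBean(service, new ObjectName("com.example:type=OrderService"));
        service.processOrder();
        System.out.println("Requests so far: " + service.getRequestCount());
    }
}
```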
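Finally, the cost sketch referenced in technique 1. The economics behind Figure 4 (below) reduce to simple arithmetic: multiply the share of defects resolved at each life-cycle stage by that stage's resolution-cost multiplier. This sketch reproduces the figure's sample calculation for 100 defects at a base cost of x = $100; the stage percentages are Forrester's illustrative figures from Figure 4, not measurements.

```java
/**
 * Reproduces the sample economics of Figure 4: the cost of resolving
 * 100 performance defects under each maturity level, given per-stage
 * resolution-cost multipliers and a base cost of x = $100.
 */
public class ResolutionCost {
    static final String[] STAGES = {"Requirements", "Design", "Development", "Testing", "Production"};
    static final int[] MULTIPLIERS = {1, 2, 10, 50, 100};

    static double totalCost(double[] percentResolved, int defects, double baseCost) {
        double total = 0.0;
        for (int i = 0; i < STAGES.length; i++) {
            total += (percentResolved[i] / 100.0) * defects * MULTIPLIERS[i] * baseCost;
        }
        return total;
    }

    public static void main(String[] args) {
        int defects = 100;
        double x = 100.0; // cost of resolving one defect during requirements

        // Share of defects resolved at each stage, per Figure 4.
        double[] firefighting      = {0, 0, 0, 0, 100};
        double[] verification      = {10, 0, 0, 60, 30};
        double[] performanceDriven = {10, 40, 25, 20, 5};

        System.out.printf("Firefighting:                   $%,.0f%n", totalCost(firefighting, defects, x));
        System.out.printf("Performance verification:       $%,.0f%n", totalCost(verification, defects, x));
        System.out.printf("Performance-driven development: $%,.0f%n", totalCost(performanceDriven, defects, x));
        // Expected output: $1,000,000 / $601,000 / $184,000
    }
}
```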
Figure 3 Evolving Toward Performance-Driven Development Adds Benefits, Reduces Costs

Firefighting
  Benefits: Requires little up-front investment.
  Costs: Disrupts the business; damages IT credibility; maximizes cost of problem resolution; results in overspending on hardware.

Performance verification
  Benefits: Provides acceptance criteria for making go/no-go decisions; reduces cost of problem resolution; improves accuracy of hardware requirement forecasts.
  Costs: Investment in test personnel, processes, tools, and environments; resolving performance problems is still more expensive than avoiding them; testing can become a bottleneck, causing missed release dates.

Performance-driven development
  Benefits: Ensures that acceptance criteria are met; minimizes cost of problem resolution and number of problems to resolve; reduces time-to-market by avoiding late-breaking surprises; optimizes use of hardware resources.
  Costs: Requires the foresight to attend to performance early in the life cycle and the effort of doing so.

Source: Forrester Research, Inc.

Figure 4 Sample Economics Of A Move To Performance-Driven Development
Share of 100 defects resolved at each stage, and resolution cost at x = $100:

Stage          Cost of resolution   Firefighting        Performance verification   Performance-driven development
Requirements   1x                   0% ($0)             10% ($1,000)               10% ($1,000)
Design         2x                   0% ($0)             0% ($0)                    40% ($8,000)
Development    10x                  0% ($0)             0% ($0)                    25% ($25,000)
Testing        50x                  0% ($0)             60% ($300,000)             20% ($100,000)
Production     100x                 100% ($1,000,000)   30% ($300,000)             5% ($50,000)
Total                               $1,000,000          $601,000                   $184,000

Source: Forrester Research, Inc.

Finding Performance A Home On The Org Chart
There is no easy answer to the question of who should own software performance. As companies move to performance verification and, ultimately, performance-driven development, they tend to place responsibility for performance first with either testing or operations, and later on with a subset of the architecture team or with a cross-functional, application-specific team.

- Testing and IT operations focus on the end of the life cycle. Many shops initially place responsibility for performance testing with quality assurance (QA), only to ultimately shift it to a team within IT operations. Why? IT operations staff tends to be more technical than most QA resources, and IT ops has intimate knowledge of production environments, as well as access to tools for creating and maintaining production-like test environments. Plus, IT operations is the first line of defense when performance problems arise and is invested in stopping poorly performing apps from being deployed. But both the testing organization and IT operations have difficulty influencing design and development activities. For this reason, they are often unable to get shops past pure performance verification to performance-driven development.

- Architecture and cross-functional application teams take a longer view. The architecture team is well-positioned to focus the development organization's attention on performance right out of the gate. But firms should look to architecture for help with performance-driven development only once they have nailed performance verification, because architects have relatively little influence over later life-cycle activities like testing. In contrast, a cross-functional,
application-specific team can help to ensure that performance gets the attention it needs throughout the life cycle. At T-Mobile, groups of application experts who are responsible for an application's health from inception through retirement take on this role. These teams transcend organizational boundaries, centering around applications rather than around life-cycle disciplines.

PERFORMANCE-DRIVEN DEVELOPMENT SPREADS AS IT SHOPS BECOME SERVICE-ORIENTED
Less than one-fifth of IT organizations conduct any load testing today; for the vast majority of IT organizations, deployment is an act of blind faith, and performance problems are addressed only in live applications.[7] And most of the shops that conduct load testing do so only for a very small minority of projects, and do so in an ad hoc fashion rather than as part of a mature performance verification practice. Performance-driven development organizations are very much in the minority today, but this will change during the next five years. By 2010, performance-driven development concepts will be more widespread, and one-third of the organizations that today do little more than load testing will have adopted performance-driven development practices. Here's why:

- Service-level management (SLM) puts a sticker price on poor software performance. To demonstrate their value to the business, IT organizations are increasingly relying on SLM to report on system performance and identify areas of potential improvement.[8] There is more to SLM than response time and availability, but this is nonetheless where most user organizations start. As SLM spreads, aided in no small part by rapid adoption of the IT Infrastructure Library (ITIL), concrete performance requirements will become more commonplace, as will contractual penalties for failing to meet these requirements.

- Service-oriented architecture (SOA) requires more mature performance practices. The continued decomposition of software, from the monolithic mainframe to distributed systems to SOA, has dramatically increased the difficulty of managing software performance. More apps built from discrete, integrated components mean more opportunities for performance problems within each component and at each integration point. This kind of complexity requires constant and careful management throughout the application life cycle. As enterprise adoption of SOA continues to take off (and our data indicates that it will), more enterprises will move to performance-driven development.[9]

SUPPLEMENTAL MATERIAL
Companies Interviewed For This Document
AppLabs Technologies
AVIcode
Borland Software
Compuware
Hewlett-Packard
HyPerformix
IBM
Mercury Interactive
Microsoft
OPNET Technologies
PowerTest
QA Labs
Segue Software

ENDNOTES
1. In simpler times, application issues were limited to design and code, and performance was a matter of hardware resources. Performance problems are now more likely to originate with configuration, architecture, connections to databases, or internal and external systems. Resolving performance issues in production by throwing hardware at them is the most costly and ineffective approach possible. Yet because hardware is cheap, it is often the remedy of choice. More often than not, it will have the effect of aspirin on a broken leg: temporary relief, at best.
See the February 11, 2005, Best Practices "Performance Management And The Application Life Cycle."

2. Driving down maintenance spending is a perennial problem for IT organizations. The average IT shop spends 75% of its software budget on ongoing operations and maintenance, as opposed to new investments. Source: Forrester's Business Technographics November 2005 North American And European Enterprise Software And Services Survey.

3. Salary data for performance testers was gathered from Monster and Yahoo! HotJobs in January 2005. Pricing for load testing tools varies widely from vendor to vendor. Leading load-testing tool vendors charge between $10,000 and $25,000 for a load controller and an additional $75 to $100 per virtual user. A shop testing 2,000 virtual users would likely need to invest $225,000 in load testing software license fees.

4. The IT Infrastructure Library (ITIL) speaks of operating-level agreements (OLAs) and underpinning contracts (UCs). SLAs and OLAs have similar structures, but they involve different parties and have different purposes. SLAs are documented agreements between the IT department and its customer. OLAs are documented agreements within IT that do not involve customers or external suppliers. UCs are between the IT organization and third-party suppliers; they are legal documents that typically specify penalties for failure to meet specified obligations. OLAs and UCs enable IT organizations to meet their SLAs.

5. Java Management Extensions (JMX) is a Java technology for managing and monitoring devices, applications, and service-driven networks. Windows Management Instrumentation (WMI) provides access to information about objects in a managed environment and machines running Windows. Application Response Measurement (ARM) is a C and Java technology for managing application availability, application performance, application usage, and end-to-end transaction response time. For more information about JMX, see http://java.sun.com/products/JavaManagement/. For more information about ARM, see http://www.opengroup.org/tech/management/arm/. For more information about WMI, see http://msdn.microsoft.com/library/default.asp?url=/library/en-us/wmisdk/wmi/wmi_reference.asp.

6. To arrive at estimated returns, secure records from past performance problems to calculate the cost of repair (e.g., time to resolution, number of resources involved, and hourly wages for involved resources) and the cost to the business (e.g., lost revenues and productivity hits). Then use projected improvements in defect removal efficiency for each performance-related technique to arrive at estimated cost-avoidance figures.

7. Leading load-testing tool vendors have penetration rates of less than 10% among IT organizations that have deployed SAP and Oracle applications, which are invariably mission-critical apps and thus the first that IT shops load test.

8. SLM/BSM dynamically links business-focused IT services to the underlying IT infrastructure. See the December 1, 2004, Best Practices "Best Practices For Service-Level Management," and the February 1, 2006, Market Overview "BSM Is Coming Of Age: Time To Define What It Is."

9. Forrester surveyed 717 IT decision-makers at North American and European enterprises about their use of SOA.
Sixteen percent of respondents have an enterprise-level commitment to SOA; 19% use SOA selectively without a clear strategy; and another 13% will pursue SOA within 12 months. Source: Forrester's Business Technographics November 2005 North American And European Enterprise Software And Services Survey.