How to avoid costly application failures

With IT becoming the virtual bricks and mortar of modern enterprises, is it any wonder that most CIOs believe IT failures are their greatest risks? A recent report from the Economist Intelligence Unit found that business executives ranked the threat of IT failures ahead of such headline-grabbers as terrorism, natural disasters and regulatory constraints. Historically, neither the business nor IT could measure application risk because they lacked useful measures of the underlying weaknesses in the software. Today, CIOs can quantify application risk and use this information to make management decisions.

Four converging challenges elevate the importance of measuring application risk:

  • Application malfunctions: Although many inconvenience only a few customers, some threaten the business. Inoperable corporate websites, corrupted financial data, breached personal data and miscalculated account statements are only a few of the possible fiascos.
  • Business agility: An enterprise's agility is directly linked to the internal quality of its critical applications and declines unless that quality is sustained throughout each application's useful life. As software quality erodes with age, the ability to implement new functionality rapidly declines just as demand for that functionality accelerates.
  • Supplier dependence: Critical application software is increasingly supplied by external sources such as contractors, vendors and outsourcers. The quality of externally supplied software presents a business risk that is difficult to control proactively, since service-level agreements usually focus on post-delivery performance.
  • Application ownership costs: Without constant attention to software quality, today's state-of-the-art application quickly devolves into tomorrow's legacy monstrosity. As low-quality applications age, the percentage of time devoted to understanding their construction and fixing their defects increases relative to the percentage of time spent implementing new business functionality, severely diluting their ROI.

Why is application risk harder to measure and control today?

Modern business-critical applications are no longer developed as monolithic systems written in one language, or at most two. Rather, these systems consist of millions of instructions, written in various programming languages, interacting with a complex data model that is controlled by hundreds of business rules. For example, a simple J2EE application may be composed of multiple technologies: JSP/JSF, JavaScript or HTML for the presentation layer, XML for the coordination layer, Java for the business layer and SQL for the database layer.
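To make that layering concrete, here is a minimal, hypothetical sketch of how those technologies meet inside a single business-layer class: Java code that a JSP page would call, with SQL embedded for the database layer. The class and query names (AccountService, accounts) are invented for illustration and are not drawn from any real application.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class AccountService {

        // SQL (database layer) embedded in Java (business layer):
        // two technologies meeting inside one class.
        private static final String BALANCE_QUERY =
                "SELECT balance FROM accounts WHERE account_id = ?";

        private final DataSource dataSource;

        public AccountService(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Called from the presentation layer (a JSP/JSF page or servlet),
        // which renders the returned value as HTML for the browser.
        public double getBalance(long accountId) throws SQLException {
            try (Connection conn = dataSource.getConnection();
                 PreparedStatement stmt = conn.prepareStatement(BALANCE_QUERY)) {
                stmt.setLong(1, accountId);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getDouble("balance") : 0.0;
                }
            }
        }
    }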

Controlling the risk of a business-critical application is therefore a multi-technology challenge in which many quality problems occur at the interfaces between technologies. The technical complexity of such polyglot applications exceeds the expertise of any single developer or project team because of the multiple languages, technologies and platforms involved. For this reason, the quality of an application is more than the sum of the qualities of its components. Application quality should be treated as an additional level of quality that presents unique risks to the business.
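A hypothetical sketch of such a cross-technology defect, assuming a Java business layer that receives a search term from an HTML form and passes it to the database layer (the class and table names are invented): each layer looks correct in isolation, yet the combination is a classic injection risk that only becomes visible when the interface between the two technologies is examined.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class OrderSearch {

        // RISKY: the search term arrives from the presentation layer and is
        // concatenated straight into the SQL. A Java-only review sees valid
        // Java; a SQL-only review never sees this statement at all.
        public int countOrdersUnsafely(Connection conn, String customerName)
                throws SQLException {
            String sql = "SELECT COUNT(*) FROM orders WHERE customer_name = '"
                    + customerName + "'";
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                rs.next();
                return rs.getInt(1);
            }
        }

        // Safer: a parameterised query keeps the boundary between the layers
        // explicit and closes the injection path.
        public int countOrders(Connection conn, String customerName)
                throws SQLException {
            try (PreparedStatement stmt = conn.prepareStatement(
                    "SELECT COUNT(*) FROM orders WHERE customer_name = ?")) {
                stmt.setString(1, customerName);
                try (ResultSet rs = stmt.executeQuery()) {
                    rs.next();
                    return rs.getInt(1);
                }
            }
        }
    }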

Figure 1 displays the complex web of interactions among myriad languages and technologies that must be mastered to ensure application quality and reduce the business risks of a modern application. Is it any surprise that 50 percent of the effort required to modify a business application is spent trying to figure out what is going on in the system and how it is connected?

Why is testing insufficient?

The traditional solution to application quality risks has been testing. However, testing can provide only part of a quality solution. Most tests are based on the application's requirements, so they focus primarily on whether the application functions correctly, in other words, whether developers "built the right thing." It is typically the non-functional aspects of the application, whether developers "built it right", that cause devastating outages, performance degradation and security breaches during business operations.
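A hypothetical Java fragment illustrates the gap. The method below satisfies its functional requirement, so a requirements-based test passes; but it never releases its database connection, a "built it right" defect that surfaces only under production load. The names (ReportDao, tickets) are invented for the example.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class ReportDao {

        private final DataSource dataSource;

        public ReportDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Functionally correct: returns the right count, so a test such as
        //     assertEquals(expectedOpenTickets, reportDao.countOpenTickets());
        // stays green. Non-functionally wrong: the connection is never closed,
        // so the pool is eventually exhausted and the application hangs.
        public int countOpenTickets() throws SQLException {
            Connection conn = dataSource.getConnection();   // never closed
            PreparedStatement stmt = conn.prepareStatement(
                    "SELECT COUNT(*) FROM tickets WHERE status = 'OPEN'");
            ResultSet rs = stmt.executeQuery();
            rs.next();
            return rs.getInt(1);
        }
    }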

To complicate matters further, few modern business-critical applications are developed in a single project. Rather, the multiple subsystems that provide business functionality, data management, user interfaces, web access and other capabilities are often developed in separate projects, on separate continents, by separate organisations. Most quality practices were designed for use on and by a single project and focus on evaluating an individual subsystem. Unless this distributed work is integrated from a quality perspective, defects in the critical interactions among technologies, precisely the ones that produce the biggest business headaches, can go undetected.
