Managing your mainframe in the new hyper-distributed world

In the fast-paced world of IT, a lot can change in a short space of time. Yet despite this continually evolving landscape of innovation, the trusty mainframe still holds its position at the heart of the enterprise and is likely to continue its reign for many years to come. But with great power comes great responsibility.

Our reliance on technology is deeply embedded, our expectations of what it can deliver are spiralling skyward and failure can cause irreparable damage, so the mainframe has a heavy burden to bear. Despite this, the mainframe remains a mystery to many, meaning inefficiencies slip through the cracks, costs rise and problems take longer to resolve.

I’d like it done by yesterday please
Despite its well-established role within the IT stack, even the mighty mainframe has had to adapt and transform to survive in the modern world. No longer is it hidden away, delivering internal applications to employees who accept that it ‘will take as long as it takes’ to run an application or retrieve the data they need. It now delivers externally facing applications, and since the consumerisation of IT has led many people to consider themselves IT experts, the burden of soaring end-user expectations rests heavily on its shoulders. Managing performance, therefore, is more important than ever.

Not only is there more pressure on the mainframe to deliver at ever-faster speeds, but the increasing complexity of the IT environment means managing the mainframe is a whole lot more difficult. We now have a world of distributed computing on steroids: the introduction of mobile, cloud and virtualisation, and the explosion in new applications, devices and data services, are forcing the mainframe to integrate with a range of technologies that simply didn’t exist when it was originally created.

The performance of cloud, virtualisation and mobile applications and services is now intrinsically linked to the mainframe, meaning any issue at the core of the IT infrastructure can create a ripple effect across the entire environment. Essentially, new technologies are not only increasing MIPS consumption on the mainframe through added usage; they are also driving demand for greater performance as the mainframe is forced to keep pace with them.

The more links that are added into the application delivery chain, the greater the chance that something will go wrong. The integration of a new application can therefore present a variety of risks, where even the smallest change can impact mainframe performance. Added to this, many companies still rely on averages to check for problems, meaning they are often unaware of issues until customer complaints come rolling in. This makes problem resolution extremely difficult, as it is hard to isolate where problems have occurred and why. Companies can spend days, even months, in ‘war-room’ situations with teams of people trying to figure out why a system is not functioning properly, draining the business of time, money, resources and skills, and stifling innovation.
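
To see why averages make such a poor early-warning signal, consider a minimal Python sketch with invented response times (the numbers are purely illustrative): a handful of very slow transactions barely moves the mean, but stands out immediately in a high percentile.

    import statistics

    # Hypothetical response times in milliseconds for 100 transactions:
    # 95 healthy ones and 5 that a real user would experience as a hang.
    response_times = [50] * 95 + [2000] * 5

    mean_ms = statistics.mean(response_times)
    p99_ms = sorted(response_times)[int(len(response_times) * 0.99) - 1]

    print(f"mean: {mean_ms:.1f} ms")  # 147.5 ms -- a dashboard watching this stays green
    print(f"p99:  {p99_ms} ms")       # 2000 ms -- the problem the average hides

Five per cent of users here are waiting two seconds, yet the mean barely registers it; tracking percentiles, or individual transactions, surfaces the problem before the complaints start.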

IT is integrated; why aren’t your teams?
This fusion of old and new is a positive step in many ways, as it enables agility and innovation while still being supported by a solid IT core. However, many of the people who wrote the original code for mainframe applications have moved on or are planning to retire, and the new wave of developers has honed its trade using modern SDKs, so there is a gap to bridge. Companies are still operating in silos, with one person looking after development, another the network, another the mainframe, and so on. While this can make sense from a practical skills perspective, the integrated nature of the new IT environment requires similarly integrated IT teams to manage it.

These silos are causing a variety of problems, as people are so focused on their own piece of the puzzle that it is hard to step back and see the whole picture. This limited visibility makes it very difficult to understand how the different technologies are interacting behind the scenes: for example, whether a distributed application has been developed in a way that creates multiple transactions for the mainframe, thereby raising costs, or whether a change in the server stack is having a knock-on impact that is slowing the mainframe down. Some of these problems are highly visible (an application simply stops working); others, such as increased MIPS usage, are more subtle.
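
The first case is easiest to see in a deliberately simplified Python sketch. The two fetch functions are hypothetical stand-ins for whatever gateway a distributed application uses to reach the mainframe; what matters is the call pattern, with each invocation representing one billable mainframe transaction.

    # Hypothetical gateway calls standing in for a real mainframe interface,
    # e.g. a CICS transaction invoked over a web service.

    def fetch_customer_from_mainframe(customer_id: int) -> dict:
        # One mainframe transaction per call.
        return {"id": customer_id}

    def fetch_customers_from_mainframe(customer_ids: list[int]) -> list[dict]:
        # One mainframe transaction for the whole list.
        return [{"id": cid} for cid in customer_ids]

    customer_ids = list(range(1, 501))

    # Chatty pattern: 500 separate mainframe transactions for a single page view.
    customers = [fetch_customer_from_mainframe(cid) for cid in customer_ids]

    # Batched pattern: the same data retrieved in one mainframe transaction.
    customers = fetch_customers_from_mainframe(customer_ids)

Both versions return the same data, but the first consumes MIPS on every iteration; a developer who never sees the mainframe side of the chain has no reason to notice the difference.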

Another unpleasant side-effect of the silo debacle is the issue of ownership. If teams are divided, an “if the green lights are on, then it’s not my problem” attitude can prevail: instead of collaborating to find a solution, there can be some passing of the buck, which means problems take even longer to resolve. Lack of communication and knowledge sharing is also a ticking time bomb when you consider that mainframe skills are depleting; what are companies going to do when there is no one left to run to for help?

Shining a light into the mainframe
As we can see, the world of the mainframe is certainly not as straightforward as it used to be, yet many companies have not adapted to these changes and continue to use outdated methods for performance management. While application performance management is a well-established necessity within the distributed environment, the mainframe is still seen as an impenetrable black box, forcing IT teams to fish around in the dark to solve problems with little or no visibility of how distributed applications are impacting mainframe workload. Now that the mainframe is being leveraged to support the distributed world, a new continuous, transaction-based approach is needed, one that spans mainframe and distributed systems.
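
What a transaction-based approach means in practice is easiest to show with a small sketch. The Python below is an illustration built on assumptions rather than any particular product’s API: a correlation ID is generated once at the web tier and carried through every hop, so that a slow mainframe leg can be traced back to the exact end-user request that triggered it.

    import logging
    import uuid

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("transaction-trace")

    def call_mainframe(correlation_id: str) -> None:
        # Hypothetical mainframe hop; in reality this might be a CICS or IMS
        # transaction reached through a gateway, timed and logged the same way.
        log.info("mainframe   id=%s", correlation_id)

    def call_middleware(correlation_id: str) -> None:
        # Each tier logs against the shared ID instead of its own local view.
        log.info("middleware  id=%s", correlation_id)
        call_mainframe(correlation_id)

    def handle_web_request() -> None:
        # Tag the transaction once, at the edge, with a unique correlation ID.
        correlation_id = str(uuid.uuid4())
        log.info("web tier    id=%s", correlation_id)
        call_middleware(correlation_id)

    handle_web_request()

Because every tier records the same ID, the war-room question of whose transaction is misbehaving has a single, shared answer across distributed and mainframe systems alike.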

With complete end-to-end mainframe application transaction visibility and continuous, real-time transaction monitoring, companies can bridge the gaps between the silos in their IT departments. With deep root-cause analysis, organisations can improve code efficiency, transaction response times and throughput, and locate and tune resource-hungry database calls. As a result, organisations benefit from MIPS savings through better mainframe resource use and from faster, more accurate troubleshooting, and they can take a proactive approach to mainframe application performance management, preventing problems and fixing inefficiencies before they create a visible issue.

Posted by Maurice Groeneveld, Vice President EMEA & Asia Pacific, Compuware