It's no surprise that security and application development teams often find themselves locking horns. One wants applications and new features to roll out -- and swiftly -- while the other is more concerned with keeping systems and data secure.
At organisations embracing more agile development and continuous integration/delivery methods, the tension runs even higher.
In continuous integration and deployment environments, teams merge their development work frequently. Automated tests help identify errors as work is completed, and these tests often include code analysis and functional testing -- all running on a deployment pipeline.
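To make that concrete, here is a minimal sketch of the kind of checks a pipeline stage might run on every commit: a static-analysis pass followed by a functional test, failing fast if either stage finds a problem. The `add` function, the bare-`except` rule, and the stage ordering are illustrative assumptions, not details from the article.

```python
import ast


def add(a, b):
    """The unit of work a functional test would exercise."""
    return a + b


def static_check(source: str) -> list[str]:
    """A toy static-analysis pass: flag bare 'except:' clauses,
    which swallow every exception including KeyboardInterrupt."""
    issues = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            issues.append(f"line {node.lineno}: bare except clause")
    return issues


def run_pipeline(source: str) -> bool:
    """Run the stages in order, failing fast as a pipeline would."""
    if static_check(source):       # stage 1: code analysis
        return False
    assert add(2, 3) == 5          # stage 2: functional test
    return True
```

In a real pipeline these stages would be separate jobs triggered by each commit, so developers learn about failures in minutes rather than after an overnight run.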
The problem is that these teams move rapidly, and if their processes are not well established and proven to work, they end up automating bad processes. This, in turn, creates more mistakes, racks up technical debt, and can even introduce security vulnerabilities along the way.
The challenge grows as organisations move to swifter, more agile development programs, increasing the demands on security, engineering, and quality assurance teams. "Developers expect much more self-service [when it comes to testing], and they expect to be able to operate with a much tighter feedback loop. They don't want to have to commit code and wait till the next day until a 12-hour set of tests has finished running before they find out whether it's any good or not," says Nigel Kersten, the CIO at Puppet Labs.
The rise of shadow QA
What has happened in recent years is that developers began to create their own "rogue" testing environments, what Kersten calls "shadow QA." This, in turn, has allowed software testing to move more swiftly. "In the last year or two, we're seeing a trend where quality assurance and quality engineering teams are being forced to provide faster feedback and more self-service to the development organisation," he says.
Fortunately, as enterprises gain experience with continuous integration and delivery processes, their software development and automated testing practices also improve. "People are getting smarter when it comes to launching software projects," says Chris Cera, CTO at product design and development firm Arcweb. "If you looked at how many software projects failed 15 years ago versus how many fail now, I'm positive that, as an industry, we're learning how to increase the success and minimise the risk of projects."
Kevin Behr, chief science officer at the IT consultancy Praxis Flow, agrees, saying that continuous testing done right helps teams build inherently secure software. "There are a lot of opportunities for security to plug into the existing continuous delivery frameworks right now, when it comes to software code testing," he says. "You not only can inspect the software more often, but will also find issues faster."
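One way to picture plugging security into an existing delivery framework is a small scan that runs alongside the ordinary tests. The sketch below flags calls a team has banned by policy; the banned list (`eval`, `exec`) and the function name are illustrative assumptions, not a technique described by Behr.

```python
import ast

# Illustrative policy: functions this hypothetical team has banned.
BANNED_CALLS = {"eval", "exec"}


def security_scan(source: str) -> list[str]:
    """Return one finding per banned call found in the source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in BANNED_CALLS):
            findings.append(f"line {node.lineno}: call to {node.func.id}()")
    return findings
```

Because the scan is just another automated check, it runs on every commit: the software is inspected more often, and findings surface in the same feedback loop developers already watch.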