Automated software testing, like any other technology, takes real effort to get right. It is just one more strategy that can be incorporated into the software testing life cycle, but testers still struggle to work out how to build it into a process and which tools or solutions are right for their company. Over the past few years, product testing has gone from being 100% manual to incorporating automated software to make tests faster. With agile, mobile and developers as driving forces, the payoff and ROI promised for automation years ago is finally materialising, according to Dan McFall, Vice President of Mobility Solutions at Mobile Labs.
Nowadays businesses are moving to faster releases, which means time to market has shrunk. Anand Kamat, a group program manager at Microsoft, reckons that turning a software idea into reality in a very short timeframe puts a lot of pressure on organisations. The main focus is of course on automation, Kamat says, but manual testing is still very relevant and takes many different forms in agile, while developers continue to integrate more automated tests.
The dangers of automated testing
Automation doesn’t suit all companies. Test automation metrics have to be complemented with data from manual testing, user acceptance testing, exploratory testing, and testing in real-world customer environments to form an inclusive view of the quality of the product, says Kamat. Automation does not cover 100% of use cases, and a consistent success rate of 100% can give you a sense of “false hope” that is fatal.
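The “false hope” point can be sketched with a small hypothetical example (the checkout function, prices and figures are invented for illustration): the automated checks below pass on every run, so a dashboard would report a consistent 100% success rate, while a real defect goes unnoticed because the assertions are too weak.

```python
# Hypothetical system under test; all names and numbers are invented.
PRICES = {"widget": 5}

def checkout(cart):
    """Return the order total for a cart of (item, quantity) pairs."""
    # Deliberate bug: the quantity is ignored entirely.
    return sum(PRICES[item] for item, _qty in cart)

# An "always green" automated suite: every check passes on every run...
assert checkout([]) == 0                      # empty cart handled
assert checkout([("widget", 3)]) is not None  # returns *something*

# ...yet the 100% pass rate hides a real defect:
# checkout([("widget", 3)]) returns 5, not the correct 15.
```

A consistently green suite only proves that the assertions it contains keep holding; if those assertions are trivial, the success rate says nothing about product quality.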
Code coverage is not a reliable metric for ensuring engineering quality, but it is often used as a measure to judge the usefulness of test automation. For connected applications involving multiple components, a “one-box” setup is not a real-world situation; if you are not testing in an “integration environment”, you are not testing with the right dependencies. With user experience changing regularly, the ROI on UI automation may be limited for multi-channel applications, and a services/API testing strategy combined with exploratory testing might be a better alternative.
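The coverage caveat can be illustrated with a minimal, hypothetical sketch (the function and numbers are invented): the two checks below execute every line and both branches of `apply_discount`, so coverage tooling would report 100%, yet the off-by-one at the bulk-order boundary is never caught.

```python
def apply_discount(price, quantity):
    """Apply a 10% bulk discount for orders of 100 units or more."""
    total = price * quantity
    if quantity > 100:  # bug: the docstring says "100 or more", i.e. >= 100
        total *= 0.9
    return total

# Both branches execute, so line and branch coverage both hit 100%...
assert round(apply_discount(10, 101), 2) == 909.0  # discount branch
assert apply_discount(10, 5) == 50                 # no-discount branch

# ...yet the boundary the spec cares about is never asserted:
# apply_discount(10, 100) returns 1000, not the expected 900.
```

Coverage reports which code ran, not which behaviours were actually checked, which is why automation metrics need to be paired with exploratory and real-environment testing.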
Speed over quality
With the rise of automated testing, the need for speed stays on companies’ minds. Matt Brayley-Berger, worldwide product marketing manager for HPE, said his company likes to ask the question: “would you like to have better quality or faster speed?” Organisations, he said, are finding numerous ways to eliminate many of the barriers that would previously have lowered quality, which means they can focus on speed of release without sacrificing quality. And as testers become more technical and have to work more closely with development, the developer conversation is forced to happen sooner.
“It’s not the solution, but maybe that behaviour is creating an environment to have more productive conversations with evolving testing and evolving quality,” Brayley-Berger said. Shortcuts taken during development can bring a one-month turnaround down to a one-week timeframe by running a smaller number of tests, and according to Walter Capitani, a product manager at Rogue Wave, this is exactly what companies are doing. They take shortcuts in their software testing by dropping the number of tests, or by deferring work until after the release, because they figure that if they find a quality problem they can always “patch it later”.