In recent years, artificial intelligence has moved from research into the mainstream and now touches our daily lives. In this article, we explore how AI can solve some of the challenges in software testing.

A combination of technologies like machine learning, natural language processing, and predictive analytics has reshaped the software development cycle and helped engineers build applications better and faster. As a result, AI can improve decision-making, enhance the consumer experience by raising the overall quality of applications, boost productivity, and reduce overheads. The first question that comes to a tester’s mind when it comes to AI is, “Is AI going to replace humans?” The simple answer is “No,” and that answer is likely to hold for at least another decade. The use cases currently available around AI and testing augment engineers with intelligence rather than replacing them.

At least once in your testing career, you have probably wondered why developers do not test their own code more thoroughly. The primary reason is a lack of time to design tests. Writing a single unit test takes somewhere between eight and twelve minutes, and to reach decent coverage a developer may need to write hundreds or thousands of them. In agile sprints of three to four weeks, developers barely get enough time to finish their coding. As a result, the responsibility for finding defects shifts largely to the test engineers on the team.

AI-based products like DiffBlue can analyze a project’s code and dependencies to build a map of its classes and methods. The tool then creates a unit test for each method in the application code, runs those tests to check their quality, and assembles a finalized suite based on the results. This primarily improves overall unit test coverage while requiring only a minimum of time for test design, making it an excellent example of how AI can improve productivity. It also helps improve overall application quality, because you are finding defects using white-box testing techniques, which in turn reduces your overall cost of quality.
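The generate-run-keep loop behind such tools can be sketched as follows. This is a toy illustration, not how any particular product works: the naive integer-input strategy and all function names are invented for the example.

```python
import inspect

def add(a, b):
    return a + b

def generate_candidate_tests(module_functions):
    """Hypothetical sketch: build one candidate test per discovered function."""
    tests = []
    for name, fn in module_functions:
        params = inspect.signature(fn).parameters
        args = [1] * len(params)  # naive placeholder inputs; real tools are far smarter
        tests.append((name, fn, args))
    return tests

def run_and_keep(tests):
    """Execute each candidate; keep those that run and record the observed output."""
    kept = []
    for name, fn, args in tests:
        try:
            expected = fn(*args)           # observed output becomes the assertion value
            kept.append((name, args, expected))
        except Exception:
            continue                       # discard candidates that crash
    return kept

suite = run_and_keep(generate_candidate_tests([("add", add)]))
print(suite)  # [('add', [1, 1], 2)]
```

The key idea is that the test run itself filters and finalizes the generated suite, so the human effort per kept test approaches zero.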

One of the critical opportunities for testers is to shorten the feedback loop to the development team, especially around the quality of a build. So what about a scenario where you can give the developer feedback on an application sent for testing in around 60 minutes? This is already possible using autonomous bots like Certifaya, which can test mobile applications on multiple real devices and provide insights on functional and non-functional behavior. The bots internally use AI to crawl through the app, identify workflows, and validate the functionality on each screen. These tests are not created upfront; they are built on the fly. The only input the tester has to provide is credentials such as a username and password.
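The crawling idea can be sketched as a breadth-first walk over app screens, exercising each discovered action once. The toy screen model and function names below are invented for illustration; real bots drive a live device rather than a dictionary:

```python
from collections import deque

def crawl(start_screen, get_actions, perform):
    """Breadth-first exploration of app screens, exercising every action once."""
    visited = set()
    queue = deque([start_screen])
    exercised = []
    while queue:
        screen = queue.popleft()
        if screen in visited:
            continue
        visited.add(screen)
        for action in get_actions(screen):
            exercised.append((screen, action))      # validate the screen here
            next_screen = perform(screen, action)
            if next_screen not in visited:
                queue.append(next_screen)
    return exercised

# Toy app model: screen -> {action: next screen}
app = {
    "login":   {"submit": "home"},
    "home":    {"profile": "profile", "logout": "login"},
    "profile": {"back": "home"},
}
steps = crawl("login", lambda s: app.get(s, {}), lambda s, a: app[s][a])
print(steps)  # [('login', 'submit'), ('home', 'profile'), ('home', 'logout'), ('profile', 'back')]
```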

In addition to these capabilities, the bot gives you detailed information in the form of application screenshots and videos for each testing session. The reports include information about critical issues such as crashes, memory leaks, and high battery usage. Many product companies are investing in building capabilities around autonomous bots, and we can expect to see more of them in action over the next couple of years.

How often do you review the production tickets that your application support teams are handling? Do you look at what your end customers are saying about your product on social media feeds like Twitter or Facebook? When this question is posed to testers, the standard answer is a big “No” 99% of the time. We tend to forget about applications in production once the warranty support phase is over. Yet there are several use cases where AI can be applied here, such as analyzing production feedback and tickets to improve the overall quality of the application. Testing professionals can use these inputs to revamp the test strategy created for a product. For example, classification algorithms can drive significant improvement in defect analysis by categorizing defects against the different functions of your application. You can also use algorithms to identify the best developer or support engineer to work on an issue reported in production.
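As a minimal illustration of defect classification, the sketch below routes a ticket to an application area by word overlap with previously labeled tickets. A real system would use a trained text classifier; all the tickets, labels, and areas here are invented:

```python
# Hypothetical labeled history of defect tickets per application area.
LABELED = {
    "payments": ["payment declined at checkout", "card charged twice at checkout"],
    "auth":     ["password reset email never arrives", "cannot log in after password change"],
    "search":   ["search returns no results", "search filters are ignored"],
}

def classify(ticket):
    """Assign the ticket to the area whose examples share the most words with it."""
    words = set(ticket.lower().split())
    scores = {
        area: sum(len(words & set(example.split())) for example in examples)
        for area, examples in LABELED.items()
    }
    return max(scores, key=scores.get)

print(classify("checkout fails when paying with card"))  # payments
```

The same scoring idea, with a proper model and far more data, could also match a production issue to the developer who has resolved the most similar tickets.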

You can go a step further and figure out the phase in which a defect was introduced, or carry out a high-level root cause analysis of a production issue. Similarly, classification techniques can break down end-user feedback by application functionality. Customer feedback can be captured from social media feeds like Facebook or Twitter, and if you have a mobile app, the app store is another excellent source. Another technique for analyzing customer feedback is to run sentiment analysis algorithms to gauge the happiness level of your customers.
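A sentiment pass over feedback can be sketched with a simple word-list scorer. Production systems use trained models rather than fixed lexicons; the word lists and reviews below are purely illustrative:

```python
# Illustrative lexicons; a real pipeline would use a trained sentiment model.
POSITIVE = {"love", "great", "fast", "easy", "reliable"}
NEGATIVE = {"crash", "slow", "broken", "hate", "confusing"}

def sentiment(feedback):
    """Score feedback as positive, negative, or neutral by lexicon hits."""
    words = feedback.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

reviews = ["love the new update, so fast", "app keeps crash after login", "it works"]
print([sentiment(r) for r in reviews])  # ['positive', 'negative', 'neutral']
```

Aggregating these labels over time gives a rough happiness trend that testing teams can watch release over release.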

Continuous monitoring of customer feedback and tickets is an excellent way to identify symptoms early, and it can be extended to prevent such issues from growing into something more significant later.

Have you ever wondered why you are executing thousands of tests to validate a small change? It is because we do not accurately identify the impact of the code changes the development team has made. With the increase in DevOps adoption, you can expect more than a dozen builds in each sprint, and if you execute all your test cases on every build, you waste a great deal of testing effort. One alternative is a risk-based testing approach.

Running every test may yield no defects at all, while a purely risk-based strategy risks leaking defects into production. Products like Sealights address this gap by using AI and predictive analytics to scan the application code, understand the impact of changes, and then recommend the tests you should execute. Sealights continuously maps billions of correlations between tests and methods using AI algorithms. The tests can be unit, integration, API, or regression, and manual or automated. As a result, you get a wide range of recommendations, including suggestions to create new tests.

The platform can also understand and recommend the dependent tests you should run, and identify the tests needed to cover the impact the development team’s changes have created. The overall benefit is cutting down the number of tests you run, which also reduces your overall cycle time and cost. At the same time, you are not compromising on quality: you are testing in a scientific way by leveraging the capabilities of both AI and analytics.
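Test-impact selection of this kind can be sketched with a coverage map from tests to the methods they touch. The test names, method names, and map below are hypothetical, not output from any real tool:

```python
# Hypothetical coverage map: test name -> set of methods it exercises.
COVERAGE = {
    "test_checkout_total": {"cart.total", "pricing.apply_discount"},
    "test_login":          {"auth.login", "auth.hash_password"},
    "test_discount_rules": {"pricing.apply_discount"},
}

def select_tests(changed_methods, coverage=COVERAGE):
    """Pick tests touching a changed method; flag changed methods no test covers."""
    impacted = {t for t, methods in coverage.items() if methods & changed_methods}
    untested = changed_methods - set().union(*coverage.values())
    return sorted(impacted), sorted(untested)  # untested -> suggest new tests

run, gaps = select_tests({"pricing.apply_discount", "billing.invoice"})
print(run)   # ['test_checkout_total', 'test_discount_rules']
print(gaps)  # ['billing.invoice']
```

Only two of the three tests run for this change, and the uncovered method surfaces as a recommendation to write a new test, mirroring the kind of suggestion described above.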

A lot is happening around the AI space, and it is essential to monitor the trends. We can expect several innovative platforms and solutions in the coming days that will use the capabilities of AI to improve the efficiency of software testing.
