Nothing AI test analytics can't fix

The use of AI test analytics will yield significant results in 2018

Ken Drachnik


Many people are under the impression that AI is here to replace human testers. In this article, Ken Drachnik, Director of Product Marketing at Sauce Labs, sets the record straight and offers his predictions for 2018.

Today, the use of Artificial Intelligence (AI) in testing is being explored as a way to develop, run and analyze test results. Given the current anxiety around automation, many worry that AI might replace human testers. On the contrary, AI is at its best when it automates routine tasks, such as parsing data and surfacing trends. By eliminating rudimentary work, it makes testers' lives easier in the long term. The combination of AI and human insight has the potential to deliver vast efficiencies.

Lessening the load on QA departments

AI can surface issues within tests without requiring QA staffers to pinpoint problems themselves. It extends analytics beyond reporting to include test-data analysis and the prioritization of flaky and buggy tests. This cuts down on cyclical review time, speeds up deployments and accelerates release cycles.
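To make this concrete, here is a minimal Python sketch of one signal such an analytics layer could prioritize: how often a test's outcome flips between consecutive runs. The data shape is made up for illustration and is not any particular tool's API.

# Minimal sketch (hypothetical data shape): score tests by how often their
# outcome flips between consecutive runs, so the flakiest ones surface first.
from collections import defaultdict

def flakiness_scores(runs):
    """runs: list of (test_name, passed) tuples in chronological order."""
    history = defaultdict(list)
    for name, passed in runs:
        history[name].append(passed)

    scores = {}
    for name, outcomes in history.items():
        flips = sum(1 for a, b in zip(outcomes, outcomes[1:]) if a != b)
        scores[name] = flips / max(len(outcomes) - 1, 1)
    # Highest flip rate first: these are the tests most worth reviewing.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

runs = [("login_test", True), ("login_test", False), ("login_test", True),
        ("checkout_test", False), ("checkout_test", False)]
print(flakiness_scores(runs))  # login_test ranks highest: its result flips every run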

Test result enlightenment

Along those same lines, as software development has moved to continuous integration, continuous delivery and DevOps, releases have sped up, but at the expense of visibility. While using best-of-breed tools offers flexibility, it often sacrifices the tester's integrated view of results. Analytics brings that clarity back by surfacing issues in an organization's release process, allowing companies to make informed decisions that optimize development costs and improve business outcomes.


Teaching dashboards to be smarter

Reviewing tests in dashboards has become a commonplace way to recognize trends. As currently constructed, though, dashboards tend to be passive: they simply report data without any intelligence behind them. AI, even applied in a basic sense, can examine test runs over a period of time, identify trends by browser/OS and pinpoint the combinations that cause a test to vary. Rather than forcing testers to parse data manually, it can automatically call out the patterns of failure that indicate a flaky test in a familiar location.
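As a rough illustration of that first step (the record format and field names here are hypothetical), a basic analysis could simply group results by browser/OS pair and report the failure rate of each combination:

# Illustrative sketch (hypothetical record format): group results by
# browser/OS pair and compute the failure rate for each combination.
from collections import Counter

def failure_rates_by_platform(results):
    """results: list of dicts like {"browser": "chrome", "os": "win10", "passed": True}."""
    totals, failures = Counter(), Counter()
    for r in results:
        key = (r["browser"], r["os"])
        totals[key] += 1
        if not r["passed"]:
            failures[key] += 1
    return {key: failures[key] / totals[key] for key in totals}

results = [
    {"browser": "chrome", "os": "win10", "passed": True},
    {"browser": "safari", "os": "macos", "passed": False},
    {"browser": "safari", "os": "macos", "passed": False},
    {"browser": "chrome", "os": "win10", "passed": True},
]
print(failure_rates_by_platform(results))  # the safari/macos pair fails every run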

Denoting beyond pass/fail

AI can also be used to grade tests before they are run, so that more of them pass. A common example in Selenium functional testing is the improper use of implicit and explicit waits. An implicit wait tells the driver to wait a fixed amount of time for an element to appear, and tests often fail because a call times out during that prescribed period. An explicit wait, by contrast, is an intelligent command that waits for a specific condition to be met rather than for a fixed duration.

Comparing the two, an explicit wait is much more likely to let a test pass reliably. Here, AI can review tests before they run and recommend replacing an implicit wait with an explicit one, resulting in a more reliable test.
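For readers unfamiliar with the two styles, here is a short Python sketch using Selenium's standard WebDriverWait API; the URL and element locator are placeholders, and in practice a test would use one style rather than mixing both.

# Requires the selenium package and a local Chrome/chromedriver setup.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# Implicit wait: every element lookup polls the DOM for up to 10 seconds,
# then times out, regardless of what the page is actually doing.
driver.implicitly_wait(10)

# Explicit wait: proceed as soon as the named condition is met, failing only
# if the element never becomes visible within the timeout.
element = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "result"))  # placeholder locator
)

driver.quit()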

At Sauce Labs, we foresee the use of AI test analytics as a reality that will yield significant results in 2018. Starting with the simple steps of correlating test failures to common errors and grading tests, the power of AI will eventually help testers make faster decisions and run more successful tests.

Author

Ken Drachnik

Ken Drachnik is the Director of Product Marketing at Sauce Labs.

