Use data to avoid releasing buggy software

Bringing structure to the software release process

Jan Wolter

To release or not to release? That is the question. Too often, however, the decision comes down to a gut call rather than concrete data. Access to data, combined with a structured deployment process, can help teams avoid releasing buggy software.

Let’s be honest: What criteria does your company use when deciding whether to release a software update? If you don’t have a solid answer, you’re not alone. Too many teams lack clear criteria that define when an update is ready for release. But in the era of DevOps and CI/CD, developers and project managers should define clear decision-making processes and agree on uniform release standards. Without a clear process for deciding when a build is ready, companies risk shipping critical bugs to production.


Processes differ based on the company type

How companies handle software releases varies greatly. Companies with large user bases and nearly unlimited resources and infrastructure can release quickly and roll back releases if necessary. Updates are made available to a small portion of the user base at a high frequency, and the systems automatically detect when something has gone wrong so that traffic can be shifted back to the previous version. Such companies can afford to take more risks, but this approach requires substantial infrastructure, mature DevOps practices, and a very large customer base. More traditional companies, by contrast, tend to be risk-averse and follow a structured, small-scale quality assurance process. They favor a few comprehensive, well-tested releases built using classic development methods. However, the agility required to keep up in competitive markets suffers as a result.
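The automated detection described above can be sketched in a few lines. The following is a minimal illustration, not taken from any specific tool: a canary build serves a small slice of users, and its error rate is compared against the current production baseline. The threshold and metric names are assumptions for the example.

```python
# Minimal sketch of an automated canary check, assuming a new release is
# first served to a small slice of users. The 50% relative-increase
# threshold is an illustrative choice, not a recommendation.

def should_roll_back(baseline_error_rate: float,
                     canary_error_rate: float,
                     max_relative_increase: float = 0.5) -> bool:
    """Roll back if the canary's error rate exceeds the baseline
    by more than the allowed relative increase."""
    if baseline_error_rate == 0:
        # Any errors on a previously error-free baseline trigger rollback.
        return canary_error_rate > 0
    increase = (canary_error_rate - baseline_error_rate) / baseline_error_rate
    return increase > max_relative_increase

# A canary erroring three times as often as the baseline is rolled back;
# a small fluctuation within the tolerance is not.
print(should_roll_back(0.01, 0.03))   # True
print(should_roll_back(0.01, 0.012))  # False
```

In practice this comparison would run continuously against live monitoring data rather than two point-in-time numbers, but the decision rule is the same.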

Neither the risk-averse nor the risk-taking approach, then, suits every firm. Development and engineering teams must therefore establish structured deployment processes across the board and optimize the release process.

The test scope varies

With regard to this optimization, the focus is primarily on test scope. Here, it is necessary to weigh the pros and cons: What prerequisites must be met to ensure that the product or service is as error-free as possible? Especially critical functions of the software must continue to work smoothly and must not be negatively affected by an update. At the same time, the time and money required for testing must be weighed. Unit tests, for example, usually check only a few lines of code in isolation, such as whether the specification for a text input is implemented correctly. One level above, integration tests verify that multiple components work together correctly; these are important for minimizing the risk of side effects. Testing at the system level is more complex and time-consuming still.
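The text-input example above can be made concrete. Below is a sketch of a unit test at the lowest level of the test pyramid: a single, hypothetical validator function checked in isolation, with no other components involved. The validation rules are invented for illustration.

```python
# A hypothetical text-input validator and its unit tests. The spec
# (3-20 alphanumeric characters) is an assumption for the example.

def validate_username(name: str) -> bool:
    """Accept usernames of 3 to 20 alphanumeric characters."""
    return 3 <= len(name) <= 20 and name.isalnum()

# pytest-style unit tests: each checks one aspect of the spec,
# separately from every other component of the system.
def test_accepts_valid_name():
    assert validate_username("alice42")

def test_rejects_too_short():
    assert not validate_username("ab")

def test_rejects_special_characters():
    assert not validate_username("alice!")
```

An integration test, by contrast, would exercise this validator together with, say, the registration form and the database layer, to catch side effects that no single unit test can see.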

In addition to these standard tests, however, it may also be useful to include other test types. Exploratory tests, for example, do not follow a script; instead, a tester actively tries to “break” the experience.

It is generally recommended to test as extensively as possible early in the process: the earlier a test sits in the pipeline, the shorter its runtime and the more frequently it can be run. However, in order to test not only from the developers’ point of view but from different perspectives, exploratory tests should also be carried out by testers who represent actual users.

Tips for structuring deployment processes

This mixture of automated and manual testing makes it more difficult for product managers to determine the right time for release. Structuring the process can help to save time and money. The following measures support this:

Quality gate processes (QGPs): Projects are divided into phases, with a quality gate at the end of each phase. Unlike milestones, quality gates are applied uniformly to ensure that the same formal quality assurance steps are maintained throughout the project. The aspects to be checked are recorded in a clearly defined checklist to ensure consistency. Based on the evaluation, one of three decisions is made: “Go” – transition to the next phase; “Hold” – improvements within the phase are required because not all criteria are met; or “Stop” – it must be clarified whether the project should be aborted or whether extensive improvements can salvage it.

Timeboxing for non-automated tests: Exploratory tests are crucial, because if a bug is discovered by customers after the release, it is too late. To keep the extent of such testing within limits, however, companies can borrow from project management methods. Timeboxing, for example, defines a fixed timeframe within which exploratory tests are carried out. The goal is to test as much as possible in the given time and then stop strictly – no matter how far you have come. If important aspects remain open, they are moved to the timebox of the next phase.
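The carry-over mechanic can be sketched as a simple budgeted queue. The test charters and their duration estimates below are invented for the example.

```python
# Sketch of timeboxed exploratory testing: work through a queue of test
# charters until the time budget is spent, then carry the rest over to
# the next phase's timebox. Charters and estimates are illustrative.

def run_timebox(charters, budget_minutes):
    """charters: list of (name, estimated_minutes).
    Returns (executed, carried_over)."""
    executed, carried_over, used = [], [], 0
    for name, minutes in charters:
        if used + minutes <= budget_minutes:
            executed.append(name)
            used += minutes
        else:
            carried_over.append(name)  # moved to the next timebox
    return executed, carried_over

done, later = run_timebox(
    [("checkout flow", 45), ("search edge cases", 30), ("profile upload", 40)],
    budget_minutes=90)
print(done)   # ['checkout flow', 'search edge cases']
print(later)  # ['profile upload']
```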

Bug triage: No matter how well prepared you are, errors are inevitable to some extent. Teams should therefore define a process for assessing bugs. Triage is all about deciding which bugs are most critical and fixing them immediately – i.e. as a hotfix. Bugs of lower urgency should be weighed according to how much they affect the overall experience. However, teams should always be careful to keep the backlog to a minimum to avoid regression errors later on.
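A triage pass can be sketched as ranking open bugs by severity and splitting off those that warrant an immediate hotfix. The severity scale and the hotfix threshold are assumptions for illustration, not a standard.

```python
# Hedged sketch of bug triage: rank bugs by an assumed three-level
# severity scale, then split them into "hotfix now" and "backlog".

SEVERITY = {"critical": 3, "major": 2, "minor": 1}

def triage(bugs, hotfix_at="critical"):
    """bugs: list of (title, severity). Returns (hotfix_now, backlog),
    each sorted most severe first."""
    ranked = sorted(bugs, key=lambda b: SEVERITY[b[1]], reverse=True)
    threshold = SEVERITY[hotfix_at]
    hotfix = [b for b in ranked if SEVERITY[b[1]] >= threshold]
    backlog = [b for b in ranked if SEVERITY[b[1]] < threshold]
    return hotfix, backlog

hotfix, backlog = triage([
    ("typo on help page", "minor"),
    ("payments fail in Safari", "critical"),
    ("slow dashboard load", "major"),
])
print(hotfix)   # [('payments fail in Safari', 'critical')]
```

Keeping the backlog list short, as the text advises, is what prevents the "minor" pile from quietly accumulating into regression risk.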


Data-based decision support

Development teams are under enormous pressure, as management and external stakeholders push for faster, more agile software development at high quality. To make the project manager’s “go” decision more defensible, quality scores such as the Applause Quality Score (AQS) can help. Based on historical data and current metrics, they make software quality measurable. Such metrics include, for example, test scope and coverage, error frequency, and bug severity. Uniform scores also provide a less subjective decision-making aid and allow benchmarking, making go/no-go decisions considerably easier.
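To show the shape of such a score, here is a toy weighted combination of the metrics mentioned above. The weights, the normalization, and the 0–100 range are all assumptions made for this sketch; this is not the actual AQS formula, which is not published here.

```python
# Illustrative quality score: a weighted sum of normalized metrics.
# Weights (0.5 / 0.3 / 0.2) and scales are assumptions for the example.

def quality_score(coverage, bug_rate, avg_severity, weights=(0.5, 0.3, 0.2)):
    """coverage in [0, 1], higher is better;
    bug_rate and avg_severity in [0, 1], lower is better.
    Returns a score in [0, 100]."""
    w_cov, w_bug, w_sev = weights
    score = (w_cov * coverage
             + w_bug * (1 - bug_rate)
             + w_sev * (1 - avg_severity))
    return round(100 * score, 1)

# A build with 85% coverage, a 10% bug rate, and mild average severity:
print(quality_score(coverage=0.85, bug_rate=0.10, avg_severity=0.20))  # 85.5
```

Whatever the exact formula, the value of such a score lies in its consistency: tracked release over release, it turns “does this build feel ready?” into a number that can be compared against a threshold and against past releases.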

Jan Wolter
Jan Wolter is General Manager EU at Applause.
