The opposite of fragile

Beyond Agile: Antifragility for Software Development

Gerrit Beine

Unpredictable, even damaging events occur frequently in agile software projects. Nassim Nicholas Taleb calls this type of event a Black Swan, and we can make use of such events by harnessing Antifragility.

Within the last 15 years, all software projects I contributed to had one thing in common: after some time, “The Requirement” appeared. “The Requirement” meant that every decision made during the project’s lifetime, regarding architecture, data models, interfaces or other relevant parts of the software, had to be questioned. Sometimes “The Requirement” could be discussed away; sometimes the stakeholders cancelled it. Often enough, it had to be implemented. The resulting efforts were enormous, and nobody dared to ask about economic efficiency.

As there are only very few truly impossible things in software development, “The Requirement” was eventually realised, albeit with some compromises. In retrospect, it was interesting to see what happened within the projects: after “The Requirement” had been implemented successfully, everybody knew who had made the mistake. Developers demonstrated that it stemmed from insufficient specification documents created by the analysts. The analysts proved that the architects had worked with a narrow field of vision, while the architects, in turn, explained how project management was responsible for incorrect plans.

Interestingly, nobody had the idea of treating the arrival of “The Requirement” as the normal case. I thought the same way until I started to read about thinking models and unpredictable, improbable events. I realised that, in hindsight, we could explain exactly why the appearance of “The Requirement” became critical for the project. But we were not able to foresee it, because “The Requirement” was a Black Swan.

The Black Swan and the Five Orders of Ignorance

The characterisation “Black Swan” goes back to a systematic bias inherent to human beings: we usually conclude that our empirical experience amounts to unconditional truth. But a single discrepancy is enough to falsify such empirical knowledge. In the case of the empirical “fact” that all swans are white, the falsification occurred during the exploration of Australia: down under, black swans exist.

This human bias, which often outweighs our empirical knowledge, frequently means that unforeseeable events happen and we become victims of Black Swans.

Following Taleb’s definition, a Black Swan is an unforeseeable event with drastic consequences. These consequences can be positive as well as negative, but usually we recognise only the negative Black Swans. This bias rests on a thought pattern that forms the foundation of our self-confidence: if something develops positively, we attribute it to our own competence. Otherwise, we look for an external cause of the negative development and believe we are the victims of bad luck.


Both reactions are typical for human beings, but in complex systems both are equally wrong. When developing software, we process knowledge and build complex systems from it; these systems are meant to support us in intellectual processes. One special aspect of developing software is worth considering: many professional activities, e.g. making a chair, depend only on our skill in that concrete activity.

In contrast, developing software depends not only on our abilities in programming, but also on our understanding of the knowledge domain for which we develop the software. These knowledge domains vary widely from project to project, so we can benefit from our previous experience only to a very limited extent. The result is a learning process we face in every new software project, which Phillip G. Armour has described as the Five Orders of Ignorance:

  • 0th Order of Ignorance (0OI) – Lack of Ignorance: This order describes confirmed and proven knowledge. A typical example is our date of birth: most people know this date exactly.
  • 1st Order of Ignorance (1OI) – Lack of Knowledge: On this level, we are missing some information, but we know for sure where to acquire it and how to use it. An example is the telephone number of a colleague: if no one knows it, the company’s address list does.
  • 2nd Order of Ignorance (2OI) – Lack of Awareness: This order describes not knowing which information we don’t know. One simply does not know what one does not know. It is obviously quite hard to find a matching example for this level; instead, a quote from an unknown source: if we knew what we don’t know, we would already know it.
  • 3rd Order of Ignorance (3OI) – Lack of Process: On this level, we have no process for finding out whether there is anything we don’t know that we don’t know.
  • 4th Order of Ignorance (4OI) – Meta Ignorance: The fifth and final level is not knowing about the existence of the Five Orders of Ignorance.

The difficulty of this learning process is that we have to provide some kind of plan at the beginning of a new project, yet planning itself can only deal with 0OI and 1OI. A new knowledge domain is initially located on the second order of ignorance (2OI) or, especially at the beginning of a completely new project, on the third (3OI).

As plans cannot be created for anything beyond 0OI and 1OI, they are typically implementations of 4OI. The usual way of dealing with the ‘unknown unknowns’ is to build buffers into the plan, sized as a function of the effort of the tasks on 0OI and 1OI. But since there is no systematic relation between the requirements on 0OI and 1OI and the ‘unknown unknowns’, Black Swans are effectively built into the plan: they are unavoidable.
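As a sketch of that buffer arithmetic (task names and numbers are invented for illustration, not taken from any real project): the buffer is derived purely from the known efforts, so by construction it bears no relation to the cost of a 2OI event.

```python
# Hypothetical plan: the buffer is computed from the known (0OI/1OI)
# task efforts only, so it cannot systematically cover 2OI events.

known_tasks = {            # efforts in person-days, invented numbers
    "login form": 5,
    "data model": 8,
    "reporting": 13,
}
BUFFER_RATE = 0.25         # a typical flat percentage for 'unknowns'

known_effort = sum(known_tasks.values())   # 26 person-days
buffer = known_effort * BUFFER_RATE        # 6.5 person-days
plan_total = known_effort + buffer         # 32.5 person-days

# "The Requirement" arrives: its cost is independent of the known
# effort, so whether the buffer absorbs it is pure chance.
the_requirement = 40                       # invented 2OI cost
print(buffer >= the_requirement)           # prints: False
```

Whatever percentage we pick, the buffer scales with the effort we already know about, while the Black Swan's cost does not.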

The dilemma of specification and anticipating programming

Of course, we instinctively sense a problem when we see such a plan, but we are not able to describe it, since the problem lies on the 2OI. Our usual reaction to such uncertainty is to collect all the knowledge we can: we specify the goal of our software project using the knowledge we have (0OI) and obtain answers to all the open questions we know of (1OI). This approach works quite well and produces valid deliverables in most projects.

However, it is not sufficient to eliminate all uncertainty. We know that, and this is the reason for recurring non-functional requirements such as “the software must be able to deal with future requirements” or “the software design must guarantee that components are reusable”. Even if it is not written into any bill of quantities, there is at least an architectural instruction to that effect. The uncertainty that the 2OI brings into the project is thus delegated to the technical level. I refer to this as the dilemma of specification.

One effect of this dilemma is that, under great uncertainty, software developers feel the urge to create generic solutions with a high level of abstraction. Unfortunately, this behaviour also has a strong influence on the didactics of informatics. The usual argument in its favour is that future efforts can be reduced by enough forward-thinking work, or by already taking into account future requirements that may materialise one day. This anticipation of future requirements is unfortunately one of the biggest traps for software developers. The attempt to escape the dilemma of specification in this way shows how hard it is to deal with the 2OI.
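Purely as an illustration of this trap (the CSV example and all names are invented, not taken from the article): the only actual requirement here is a CSV export, yet the design anticipates formats nobody has asked for.

```python
from abc import ABC, abstractmethod

# Actual requirement: export rows as CSV. The abstractions below
# exist only for formats that *might* appear one day.

class Exporter(ABC):
    @abstractmethod
    def export(self, rows: list[dict]) -> str: ...

class CsvExporter(Exporter):
    def export(self, rows: list[dict]) -> str:
        if not rows:
            return ""
        header = ",".join(rows[0])  # joins the dict's keys
        lines = [",".join(str(v) for v in row.values()) for row in rows]
        return "\n".join([header] + lines)

class ExporterRegistry:
    """Plugin registry for formats that may never materialise."""
    def __init__(self) -> None:
        self._exporters: dict[str, Exporter] = {}

    def register(self, fmt: str, exporter: Exporter) -> None:
        self._exporters[fmt] = exporter

    def get(self, fmt: str) -> Exporter:
        return self._exporters[fmt]

registry = ExporterRegistry()
registry.register("csv", CsvExporter())
print(registry.get("csv").export([{"id": 1, "name": "Ada"}]))
```

Every class except `CsvExporter` is a fact created today on behalf of knowledge that still sits on the 2OI.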


The only possibility software developers see is to take the bull by the horns and create facts in the way described. This limited perspective stems from the fact that, in such situations, developers are used to thinking on the 4OI: they are not aware of the Five Orders of Ignorance. And this is not without risk. For technical topics, the attempt to think ahead can lay the foundation of a later Black Swan, or intensify one.

With a little distance it becomes obvious that there is a second way to deal with the 2OI: do exactly the opposite of what is described above. Create as few facts as possible, and do not try to predict future knowledge (the current 2OI) from knowledge on the 0OI or 1OI. This is easier said than done, as long as forward-thinking, elaborate structures are considered a hallmark of quality amongst software developers. But the line of code that was never written can be the best of all, as it provides us with something very special for the future: options.
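A minimal sketch of this option-preserving approach (again with an invented CSV example): when one export format is the only requirement actually known today, a single plain function suffices. Abstract interfaces and format registries can still be added on the day a second format really appears.

```python
def export_csv(rows: list[dict]) -> str:
    """Export rows as CSV: the only format actually required today."""
    if not rows:
        return ""
    header = ",".join(rows[0])  # joins the dict's keys
    lines = [",".join(str(v) for v in row.values()) for row in rows]
    return "\n".join([header] + lines)

print(export_csv([{"id": 1, "name": "Ada"}]))
```

Every interface, registry and plugin hook not written here is a fact not yet created, and therefore an option kept open.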

Read the second part here.

Author
Gerrit Beine
Gerrit Beine is Managing Consultant at adesso AG with an emphasis on agility and software architecture. He is a regular speaker at conferences and keeps himself busy with exceptional issues in software development.
