4 reasons why Jenkins Pipelines are brittle

Don’t use Jenkins Pipelines for continuous delivery

Steve Burton

Jenkins may seem all-purpose, but it’s really not. Just because you think you can use Jenkins Pipelines for continuous delivery doesn’t mean you should. Steve Burton explains why Jenkins Pipelines can be surprisingly brittle and how developers can get around this hurdle.

It’s amazing how many companies have tried to execute continuous delivery using nothing but Jenkins Pipelines, and as a result are in a world of pain. Just last week I was on a call with one of our customers about what life was like trying to use Jenkins Pipeline for a 20+ microservices application.

The customer said, “It was painful–actually, it was a lot of pain.”

I’ve heard another customer describe Jenkins as “DevOps duct tape.”

Jenkins documentation defines Jenkins Pipeline as a suite of plugins that support implementing and integrating continuous delivery pipelines into Jenkins. And this leads us nicely into the first reason why Jenkins pipelines are brittle.
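For context, that "suite of plugins" is usually driven by a Jenkinsfile checked into the repo. A minimal declarative pipeline might look like this sketch (the image name, stage names, and `deploy.sh` script are placeholders, not anything from a real project):

```groovy
// Minimal declarative Jenkinsfile sketch -- myorg/myapp and deploy.sh are
// hypothetical placeholders; real pipelines are rarely this simple
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // builds a Docker image tagged with the Jenkins build number
                sh 'docker build -t myorg/myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Deploy') {
            steps {
                // the actual deployment logic lives in a shell script
                sh './deploy.sh staging'
            }
        }
    }
}
```

Even in this toy example, notice that the real work happens inside `sh` steps — a theme we'll return to.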

1. Jenkins pipeline relies on (lots of) plugins

Now, before you slap me with a wet fish, allow me to explain–because I love a good SDK just like the next developer. Extensibility is a good thing in software. BUT it can also be a bad thing when extensibility isn’t managed or structured properly.

For example, let’s imagine I have a simple Docker application and want to build a simple deployment pipeline using Jenkins. I go to the Jenkins plugin site, search for “Docker” plugins, and get:


Figure 1: Docker plugins on the Jenkins website

There are 26 different plugins related to Docker, sorted by relevance. In fact, someone was even kind enough to name their plugin “Yet Another Docker,” which is about as helpful as a poke in the eye with a stick.

Maybe I’m being overly dramatic here, so let’s just click on the first plugin called “Docker” and see what’s what:


Figure 2: The Docker plugin

Unfortunately, at second glance we hit the second reason why Jenkins pipelines are so brittle: I spot eight plugin dependencies, with another seven optional dependencies. You can see where I’m going with this.

In a world where apps and services change more frequently than Tesla’s stock price, it seems logical that plugin dependencies could be problematic as technology stacks and plugins naturally evolve. The last thing you want is a broken deployment pipeline because the pipeline itself is broken vs. the actual software artifact or build that’s being tested.

Simply put, which Jenkins pipeline plugin should one use? And what happens when one needs to upgrade plugins and maintain version dependencies?

Do I need a plugin for DockerHub, Docker, Kubernetes, and every other technology stack or tool my app interacts with? How many plugin dependencies do you think this list of plugins would have? Over 100 isn’t unrealistic.
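To make this concrete: the “Docker Pipeline” plugin exposes a `docker` global variable in scripted pipelines. A sketch of typical usage follows — the image name, registry URL, and credentials ID are all hypothetical — and it only works if that plugin and its entire dependency chain are installed and version-compatible on your Jenkins master:

```groovy
// Scripted pipeline using the Docker Pipeline plugin's `docker` global variable.
// If the plugin -- or any plugin it depends on -- is missing or incompatible,
// this pipeline fails before your code is even tested.
node {
    checkout scm
    // build an image tagged with the Jenkins build ID (image name is a placeholder)
    def image = docker.build("myorg/myapp:${env.BUILD_ID}")
    // registry URL and credentials ID below are illustrative, not real values
    docker.withRegistry('https://registry.example.com', 'registry-creds') {
        image.push()
    }
}
```

Every plugin upgrade is a chance for `docker.build` or `withRegistry` to change behavior underneath a pipeline like this one.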


2. Jenkins pipeline relies on jobs/scripts

Let’s put aside plugins for a second, and focus on the core concepts or entities of a deployment pipeline:

  • Build or Artifact (e.g. containers, AMIs, functions)
  • Application/Service
  • Environment
  • Variables & Secrets
  • Deployment Workflow
  • Stages
  • Steps
  • Trigger
  • Approval
  • Verification or Health Check
  • Rollback
  • Release Strategy (Blue/Green, Canary, etc.)
  • Tests & Tools (test, security, monitoring, etc.)
  • User Groups & Users (RBAC)

How many of the above actually exist today as first-class citizens in Jenkins Pipeline? More importantly, how many of them require you to write Jenkins Jobs or Shell Scripts to manage or perform the above tasks?

Answer: nearly all of them.
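In practice, “not a first-class citizen” looks something like the sketch below: environments, health checks, and rollback all hand-rolled as shell inside a stage. The `kubectl` context and the two helper scripts are hypothetical stand-ins for what teams actually write:

```groovy
// What scripting the "core concepts" tends to look like in practice:
// the environment, health check, and rollback are all hardcoded shell,
// not pipeline primitives. Script names and context are hypothetical.
stage('Deploy to staging') {
    steps {
        sh '''
            export TARGET_ENV=staging                # environment hardcoded here
            kubectl --context staging apply -f k8s/  # deployment as raw CLI calls
            ./scripts/health-check.sh || ./scripts/rollback.sh  # DIY verification/rollback
        '''
    }
}
```

Multiply this by every environment and every microservice, and the maintenance burden described above becomes obvious.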


A Jenkins Pipeline is another way of saying: “I hardcoded my deployment pipeline with scripts.” Hardcoding things like environments, variables, secrets, dependencies, and release strategies would have worked back in 2008, when all you had was an Apache web server, a few instances of Tomcat/WebLogic/WebSphere, and a chunky Oracle database. However, it doesn’t work in 2018 with public cloud, containers, and microservices. You’re going to spend more time maintaining your deployment pipelines than actually deploying new versions of your app.

In fact, it’s not uncommon for customers to have a team of DevOps engineers solely focused on maintaining deployment pipelines as applications, technology stacks, and tools change every week. Instead of DevOps teams adding innovation to deployment pipelines, they spend their time fixing or dealing with pipelines that break. This is not good.

The more you script the core components of your deployment pipeline, the more time you spend maintaining those scripts as applications and technologies change.

You want your deployment pipelines to be dynamic, in the sense that they can automatically adapt or change based on the metadata that exists in your cloud provider or DevOps tool ecosystem.

In addition, the majority of Jenkins pipelines I’ve seen still prompt the user for manual input, like “Please provide the Major and Minor build version you would like to deploy.” Why can’t this information be pulled dynamically from the build or artifact repository?
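That manual prompt is typically implemented with the pipeline `input` step, roughly like this sketch (the parameter name, default, and deploy script are illustrative):

```groovy
// A typical manual-input step: a human has to type the version in,
// instead of the pipeline resolving it from the artifact repository.
// Parameter name, default value, and deploy.sh are hypothetical.
def version = input(
    message: 'Which build would you like to deploy?',
    parameters: [
        string(name: 'VERSION', defaultValue: '1.0.0',
               description: 'Major.minor build version to deploy')
    ]
)
sh "./deploy.sh ${version}"
```

Every one of these prompts is a place where the pipeline blocks on a human instead of on data it could fetch itself.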

Shell scripts by nature will make your deployment pipelines brittle. The more complex your deployment pipeline, the more brittle your pipeline will become.


3. Debugging a Jenkins pipeline is a PITA

How about we ignore those pesky plugins and sexy scripts for the time being?

Let’s imagine we’ve hard-coded the best deployment pipeline using Jenkins Pipeline for our application that has 20 microservices, with each microservice having a dev, QA, staging, and production environment.

Quick question: Do we need one Jenkins pipeline or 20 pipelines for our application? And can those pipelines be executed in parallel?

Anyway, let’s imagine we push our big red “Build Now” button to kick off our deployment pipeline and get this visual:


Figure 3: A Jenkins pipeline example

The above screenshot doesn’t look too bad at first glance. We can see our Jenkins pipeline has 10 steps, and they’re all green. But notice the long list of “Shell Script” executions, and specifically the lack of context those windows have. What exactly are those shell scripts doing and telling the user who is watching the deployment?

What if those scripts fail? Are we able to understand in detail what happened? Or are we simply seeing a console confirming that a shell script executed?

Again, not a massive problem if our application is relatively simple with a few microservices/environments. But this could be a world of pain if we have to observe 20 different pipelines, each with hundreds of shell script outputs.

Understanding the real status and health of a Jenkins pipeline (and your application) isn’t an easy task. One customer told me, “Jenkins Pipeline really lacks any sort of global context, dependency model, or high-level view of a deployment pipeline in a microservices world.”


4. Deployment verification and rollback

Simply put, continuous delivery is not just about deployment. Everyone does continuous delivery for a reason, normally to increase the velocity of their innovation and business so customers spend more wonga.

Not knowing the business impact of a deployment in production is a big deal. Not having a rollback strategy is also a big deal.

I’m not talking about running a Jenkins Job to run a simple load test or unit test. I’m talking about observing how customers are impacted by a production deployment and managing that risk in real time. At my company, we call this Continuous Verification and Smart Rollback.

To achieve this in Jenkins Pipeline, you have to write a Job or shell script. Why can’t Jenkins Pipeline just integrate with your monitoring ecosystem and automatically tell you whether your deployment was successful? It’s possible. Just remember that the success of a deployment isn’t a deployment step or job completing. Rather, success is a deployment having a positive impact on your business.
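Here’s what that hand-rolled verification typically looks like in a Jenkinsfile — a sketch only, where the monitoring endpoint, service name, and error-rate threshold are all hypothetical:

```groovy
// DIY deployment verification: poll a monitoring API after deploying and
// fail the pipeline if the error rate regressed. The endpoint URL, service
// name, and 5% threshold are hypothetical, not from any real system.
stage('Verify') {
    steps {
        script {
            def errorRate = sh(
                script: 'curl -s "https://monitoring.example.com/api/error-rate?service=myapp"',
                returnStdout: true
            ).trim()
            if (errorRate.toFloat() > 0.05) {
                // error() aborts the pipeline; rollback would be yet another script
                error "Error rate ${errorRate} exceeds threshold after deploy"
            }
        }
    }
}
```

Note that even here, deciding what “success” means — and wiring up the rollback — is entirely on you.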

Conversely, failing to detect a deployment that has a negative impact on your business is exactly what makes your deployment pipelines brittle. It lulls you into a false sense of security that everything with your deployment or application is OK–when actually it’s not.


CI != CD

Look, I admit it: Jenkins is unquestionably top dog for continuous integration (CI) use cases. But continuous delivery is way more than the building and testing of code into artifacts. Continuous delivery is about taking those artifacts into production where they will delight the customer.

Remember, Jenkins Pipeline is just a suite of plugins that allow you to build your own CD platform. This means that you’re actually the one coding and maintaining your CD process as your applications and services evolve.

Today, many real continuous delivery solutions exist that integrate with and complement Jenkins for CI. Harness is one, but if you search you’ll find others. Try them! Don’t be that customer or team that thinks building your own CD platform is doable with DevOps duct tape.


Steve Burton

Steve Burton is a DevOps Evangelist at Harness. Steve has held VP Marketing positions at Moogsoft and Glassdoor, and also ran Product Marketing at AppDynamics where he helped disrupt and transform the application performance management (APM) market. Stephen has also held senior product management and pre-sales positions at Symantec and VERITAS software. Stephen’s career started at Sapient where he was a Java developer working on large scale enterprise J2EE implementations. During his educational years he also had several internships at Kewill Systems and Oracle Corporation.


Stephen holds a Bachelor’s degree in Computer Science from Lancaster University. He enjoys playing golf in his spare time. Follow him on Twitter @BurtonSays.
