The CI/CD pipeline, or deployment pipeline, is the link between a developer’s laptop and code running in production. Every piece of production code must travel through this pipeline, and so it acts as a gateway and checkpoint for anything that’s meant to reach the customer. Despite this fact, these pipelines are often built haphazardly and left neglected.

What may start as a single CLI command acting on a git checkout is invariably built up in layers: linters, static builds, test suites, and multi-phase dependencies on other build or deployment jobs. Left unchecked, every pipeline eventually becomes slow, unwieldy, and difficult to alter or extend. Unfortunately, the common discipline of refactoring a code base is often overlooked in the context of a deployment pipeline. This leaves an integral part of the value delivery process unreliable and difficult to operate, which can have a subtle but insidious impact on the development process.
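
To make that pattern concrete, here is a minimal sketch of such a script, written in Python. The tool names, paths, and stage order are illustrative assumptions, not a prescription for any particular stack:

```python
#!/usr/bin/env python3
"""deploy.py -- a pipeline script that started as one command and grew."""
import subprocess
import sys

# Each stage was bolted on over time; "deploy" was the original single command.
STAGES = [
    ("lint",        ["flake8", "src/"]),              # added after a style dispute
    ("unit tests",  ["pytest", "tests/unit"]),        # added after a regression
    ("build",       ["docker", "build", "-t", "app:latest", "."]),
    ("integration", ["pytest", "tests/integration"]), # depends on the build above
    ("deploy",      ["./scripts/push_release.sh"]),   # the original pipeline
]

def run_pipeline() -> None:
    for name, cmd in STAGES:
        print(f"==> {name}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"stage '{name}' failed; aborting pipeline")

if __name__ == "__main__":
    run_pipeline()
```

Each of these stages was justified on its own, but together they form a program that deserves the same engineering discipline as the application it ships.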

Consider an ideal pipeline: it’s fast, reliable, easy to understand, and amenable to extension. With a pipeline like this, deployments are quick and can be run frequently with a high degree of confidence. Jez Humble and David Farley describe this ideal in their foundational book “Continuous Delivery”. A team working with a pipeline like this could develop a feature, have it approved by QA, and release it to customers all in the same working day. This allows the business to iterate faster on customer feedback and move on to the next killer feature more quickly.

More often, however, we see non-ideal pipelines. Without direct care and attention, a pipeline gradually grows in complexity and becomes difficult to use or modify. Rather than the “rapid, reliable, low-risk delivery process” described by Humble and Farley, we’re left with a process that’s “painful, risky, and time-consuming.”

Now consider a painful and risky pipeline. Perhaps it fails deployments intermittently for no good reason, is particularly slow, or completes a deployment while ignoring signs that the current release is broken. In that case we can no longer deploy without fear; instead, we have to orchestrate planned releases. Because these releases are time-consuming, we tend to batch changes together to avoid duplicated effort. Because they are risky, we’d rather not run them during business or peak hours. Often this results in a “release weekend” policy, where large bundles of changes are deployed together during off-peak hours.

Now, instead of confidently and quickly deploying small sets of changes daily (during business hours), we’re holding review meetings and coming into the office on Saturday to collectively cross our fingers while a slow and risky deployment executes. In the best case this wastes a few hours of several engineers’ and managers’ time. In the worst case something breaks, and we’re left scrambling to roll back the deployment and undo any damage it may have caused.

Once that’s cleared up, we have to make a decision: identify and revert a subset of changes so the deployment can run again, or abandon it entirely until the next release. In this way a small, unimportant change that breaks the deployment can cause a valuable, functional change to be postponed. With careful attention and design, any deployment process can be made fast and reliable, allowing us to deploy small changes more frequently and with less risk.

Still, even a fast and reliable pipeline can have non-obvious problems that slow the pace of development. Consider a process that appears to work well for existing projects but is difficult to extend with new build steps or code bases. A new application or service can sit waiting for days or weeks while an engineer deciphers a legacy deployment process to build a pipeline for it. Again, this delays realizing the value of the time invested in development and lengthens the feedback loop for iterating on the new project with real-world data and customer feedback.

Similarly, a pipeline that is difficult to modify may prevent developers from adopting new development tools, such as static analysis or automated testing tools. These tools add little value if they’re not run automatically when changes are made, and that automation becomes difficult when the pipeline carries too much technical debt.
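
As a rough sketch of what “easy to modify” can look like, consider a pipeline whose stages are declared as plain data, continuing the hypothetical script above. Adopting a new static-analysis tool (mypy here, purely as an example) becomes a one-line change rather than an archaeology project:

```python
# With stages declared as data, adopting a new tool is one entry, not a rewrite.
STAGES = [
    ("lint",         ["flake8", "src/"]),
    ("static check", ["mypy", "src/"]),     # the newly adopted analysis tool
    ("unit tests",   ["pytest", "tests/unit"]),
]
# The runner loop from the earlier sketch stays untouched.
```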

Ultimately, a modern CI/CD pipeline is itself a software project. It needs design, revision, and refactoring like any other application. It should be performant, reliable, extensible, and easy to modify. While a good pipeline will see only marginal gains from improvement, a bad pipeline can have a severe impact on developers, customers, and the business. A development team operating with a bad pipeline has to follow a qualitatively different process than one with a smooth, well-built pipeline.

With modern tools and thoughtful architecture, any application can be deployed with a pipeline that’s fast, reliable, and low-risk. A well-built pipeline will stay out of a developer’s way and let them focus on delivering value through software development.


Need help with modernizing your deployment pipeline? The CI/CD experts at StackOverdrive.io DevOps can review your current process, then propose and implement the changes needed to optimize it for developer productivity (and happiness).
