Teaching your old CI new tricks with CD pipelines

There is an old saying that you can’t teach an old dog new tricks (the original, from Nathan Bailey in 1721, is “an old dog will learn no tricks”), and while that saying isn’t actually true, it isn’t meant to be taken literally anyway. A related saying is “old habits die hard” – and we see this playing out in the rapidly evolving software development world every day as we meet team after team who have taken their CI tool (most frequently it’s Jenkins) and attempted to “teach” it how to do CD; I’ve seen it referred to as a “duct tape” approach. But there is a better way to do this, one that was actually designed to help organizations reach the ultimate goal of continuous dev/test/deploy/repeat. And for the record, we love Jenkins here at the Continuous Delivery Director team inside Broadcom. We use Jenkins, but we don’t try to make it do a job it was not built for (oh, and we definitely embed it into that “other” continuous delivery job).

I believe the core of the entanglement we see in the market stems from the use of the term “pipeline” in both CI and CD. CI and CD pipelines are two very different types of pipeline, built for two different purposes. They are connected and interdependent, but they shouldn’t be confused as doing the same thing. Let’s dive a little deeper.

We’ve written in the past about the differences between what Continuous Integration (CI) does and what Continuous Delivery (CD) does, which you can summarize like this:

  • CI – Take code from developers, test it, build it, output software artifacts.
  • CD – Deploy artifacts across dev/test/QA/production, then test, verify, and roll back as needed at every stage.

What we want to focus on today are the shortcomings of attempting to build a CD pipeline with Jenkins and the benefits of constructing a more intelligent pipeline for CD (with CI as the first step in the process).

A Jenkins Pipeline is defined in the Jenkins documentation (here) like this:

“Jenkins Pipeline (or simply “Pipeline” with a capital “P”) is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins.”
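To make that concrete, here is a minimal sketch of what such a Pipeline typically looks like as a declarative Jenkinsfile. The stage names and shell commands are illustrative (it assumes a Maven-based Java project); the point is simply the shape of a CI flow.

    // Jenkinsfile (declarative Pipeline): a minimal CI flow
    // that takes code, tests it, builds it, and outputs an artifact.
    pipeline {
        agent any
        stages {
            stage('Checkout') {
                steps { checkout scm }
            }
            stage('Test') {
                steps { sh 'mvn test' }                // assumes a Maven project
            }
            stage('Build') {
                steps { sh 'mvn package -DskipTests' } // package the already-tested code
            }
            stage('Archive') {
                steps { archiveArtifacts artifacts: 'target/*.jar', fingerprint: true }
            }
        }
    }

Note that the flow stops at “produce and archive an artifact”: nothing here knows about environments, approvals, verification, or rollback, which is exactly the CI scope described above.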

Now plugins in and of themselves are not necessarily a bad thing, of course, but there are shortcomings.

  1. Plugins are open source, and multiple plugins are often available for the same “headline” technology (e.g. Docker, Kubernetes, etc.).
  2. Each plugin can have any number of other plugin dependencies.

Maintaining them can be a daunting task, especially when everything changes so fast these days: changes in plugins and plugin dependencies can render your pipeline inoperative at any given moment.

An even bigger missing piece is that Jenkins was designed to do CI, not CD, so you have to “teach” it many concepts before it can even attempt CD (I am talking about “application”, “environment”, “approval”, “verification”, “rollback” … the list goes on), and you would have to somehow enable it to be flexible across stages and environments (e.g. adapt user credentials and secrets, understand the difference between a container and other kinds of artifacts, etc.). The way people do this in CI is by scripting these things, which is extremely complex and eventually leads to an almost hardcoded “pipeline” that requires constant maintenance. Many find they are spending more time maintaining the pipeline than actually building and shipping code. Finally, even if you’ve accepted this reality with a Jenkins pipeline, you are still going to hit the wall! Application and testing environments are quickly growing so large, with so many moving parts (artifacts, containers, environments, test tools and test types, deployment strategies, rollbacks and roll-forwards), that many teams are now simply unable to “catch up”.
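To illustrate that “duct tape” approach, here is a hedged sketch of what a Jenkinsfile starts to look like once environments, approvals, and rollback are scripted in by hand. The parameter names, credential IDs, and shell scripts below are hypothetical; only the declarative Pipeline directives themselves are standard Jenkins.

    // Hypothetical Jenkinsfile: CD concepts bolted onto a CI tool.
    // "Environment", "approval", and "rollback" all have to be hand-scripted.
    pipeline {
        agent any
        parameters {
            // An "environment" is just a string parameter; Jenkins has no native model for it.
            choice(name: 'TARGET_ENV', choices: ['dev', 'qa', 'prod'], description: 'Where to deploy')
        }
        environment {
            // Credentials are wired up per environment by hand (these credential IDs are hypothetical).
            DEPLOY_CREDS = credentials("deploy-creds-${params.TARGET_ENV}")
        }
        stages {
            stage('Approval') {
                when { expression { params.TARGET_ENV == 'prod' } }
                steps {
                    // An "approval" is a paused build waiting for a human to click a button.
                    input message: "Deploy build ${env.BUILD_NUMBER} to prod?"
                }
            }
            stage('Deploy and verify') {
                steps {
                    // Deployment strategy and verification live in shell scripts you maintain yourself.
                    sh "./scripts/deploy.sh ${params.TARGET_ENV}"
                    sh "./scripts/smoke-test.sh ${params.TARGET_ENV}"
                }
            }
        }
        post {
            failure {
                // "Rollback" is yet another hand-rolled script.
                sh "./scripts/rollback.sh ${params.TARGET_ENV}"
            }
        }
    }

Multiply that by every application, environment, test type, and deployment strategy, and the maintenance burden becomes obvious.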

And that final point is this: we are all at the edge (some have already crossed it) of not being able to maintain a CD environment without the aid of algorithmic AI helping us manage the growing stream of features, functions, and apps. Which brings me to the conclusion:

  “To scale agile, you need an intelligent pipeline.”

The intelligent pipeline is a north-star concept and aspiration that we use at Broadcom to align our teams and products, and with the latest release of Continuous Delivery Director it has also become a tangible product. We are constantly working on embedding more and more intelligence into every piece of our CD solution and our other offerings.

The idea of an intelligent CD pipeline is that it:

  1. Is specifically built for CD.
  2. Has built-in AI capabilities that allow it to scale across any size of organization and any level of complexity.

A couple of additional points to consider:

  • Built for CD means it includes all the concepts needed to model a CD pipeline natively (as opposed to having to programmatically “teach” Jenkins or other CI tools). This covers things such as “Environment”, “Integration User”, “manual step”, “test suite”, and “plan versus actual”; the list really goes on and on, but you get the picture. And,
  • Has built-in AI capabilities such as predicting failures and proactively taking action at any stage of the Continuous Delivery chain (not CI), letting the pipeline choose which tests to run at each stage based on its ability to identify (amongst other things) which tests are relevant and which are not, and self-monitoring every stage in the pipeline, including in production with an integrated AIOps capability.

In summary:

There is nothing inherently wrong with using Jenkins for CI; however, bending and stretching it to attempt enterprise-scale CD may well prove a futile attempt to “teach an old dog new tricks”. The best way forward is to lay out your strategy and plans for scale, with carefully designated CI and CD solutions working in unison.

Want to learn and experience more?