The software world is filled with conflicting and confusing terms and terminologies, which can make it very hard to tell the differences and similarities between separate technologies and products. The continuous delivery realm is no different (for example, here is a blog we wrote on the topic of continuous integration vs. continuous delivery: https://www.ca.com/en/blog-automation/standing-up-for-continuous-delivery.html).
In the example above, the difference between continuous integration and continuous delivery is relatively easy to explain to a technically savvy crowd, but it can still be difficult for a non-developer to fully grasp. Things get even hairier when the use case is the same and only the user (or “persona”) or the technical stack differs. Spinnaker (https://www.spinnaker.io/) is a “Continuous Delivery for Enterprise” product (that is what it says on the tin/homepage), and our own Automic Continuous Delivery Director (https://cddirector.io/) is also a continuous delivery for enterprise solution. Surely we are direct competitors, right? Well… not really. In fact, the two are a perfect complement; it’s only the high-level messaging that is similar, because the use case is the same (and believe me, messaging is an even harder nut to crack; I will leave that discussion for a different blog). I’m going to use this short blog to clear up the relationship between the two products and explain why we don’t consider Spinnaker competition, and why we actually love it.
As I hinted before, the key to understanding this lies in the different technical stacks and different personas that these two solutions target; from there it becomes clear how the combination can be a real win for an enterprise.

Let’s start with the technical stack. Spinnaker was built with containers and cloud architecture in mind, and that is evident in its functionality and terminology. Spinnaker natively supports Azure, GCP, AWS, and OpenStack, and its application management features can be used to view and manage your cloud resources. Modern tech organizations operate collections of services, sometimes referred to as “applications” or “microservices,” and a Spinnaker application models this concept. Applications, clusters, and server groups are the key concepts Spinnaker uses to describe application services, while load balancers and firewalls describe how those services are exposed to users. For application deployment, Spinnaker has built-in concepts of “pipelines,” “stages,” and “deployment strategies,” all of which are cloud-native focused (for example “red/black,” “blue/green,” “rolling red/black,” etc.). This is all great for cloud-native, container-based applications, but try to force this approach on a legacy application’s tech stack (databases, application servers, web servers, etc.) and you are dead in the water in a split second. In much the same way, Application Release Automation / Orchestration tools that were built for the legacy stack quickly become unpleasant to use when squeezed to automate a cloud-native microservice application. Water finds its way!
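To make the pipeline/stage/strategy terminology a bit more concrete, here is a minimal sketch, in Python, of the kind of declarative pipeline definition Spinnaker works with. Real Spinnaker pipelines are JSON documents with many more fields; the stage names, field names, and values below are simplified and partly invented for illustration.

```python
# Illustrative sketch only: a simplified, hypothetical pipeline definition
# in the declarative style Spinnaker uses. Real pipelines are richer JSON
# documents; names and fields here are simplified for this example.
pipeline = {
    "application": "storefront",  # the Spinnaker "application" concept
    "stages": [
        {"type": "bake", "name": "Bake image"},
        {"type": "deploy", "name": "Deploy to staging",
         "clusters": [{"account": "staging", "strategy": "highlander"}]},
        {"type": "manualJudgment", "name": "Promote to prod?"},
        {"type": "deploy", "name": "Deploy to prod",
         "clusters": [{"account": "prod", "strategy": "redblack"}]},
    ],
}

# A "red/black" (a.k.a. blue/green) strategy stands up the new server group
# alongside the old one, shifts traffic, then disables the old group.
deploy_stages = [s for s in pipeline["stages"] if s["type"] == "deploy"]
print([c["strategy"] for s in deploy_stages for c in s["clusters"]])
```

Note how the deployment strategy is just a per-cluster attribute of a stage; it is this cloud-native assumption (immutable server groups that can be swapped wholesale) that does not map onto a legacy database or application server.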
The other big difference between modern/cloud-native applications and legacy ones lies in the dependencies between components and the personas involved. A cloud-native application has far fewer dependencies between its technical components (each is a “service” in a container) and far less reliance on operations people for deployment: no databases need to be recycled, no application servers restarted, no database administrators and network people to coordinate with for a deployment, and so on. In these (“DevOps”) environments it’s the developer persona that actually handles stage promotions, because the tech stack produces an image and the runtime handles the rest, and because each service or function can usually be replaced or updated without depending on everything else (of course there are still some dependencies, but they are far less rigid). Here is a high level view of what a typical release cycle looks like in a microservices / cloud-native world:
In contrast, here is an example workflow for a legacy application deployment:
As you can see, the workflow (whether manual or automated) is more complex and branches off more (and I assure you this example is a very simple workflow; real-life examples can be orders of magnitude more complex). The reason for this is the technology stack and the architecture requirements of legacy/monolith applications.
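One way to see why the legacy workflow branches so much is to model the two worlds as dependency graphs and let a topological sort produce a safe deployment order. This is a sketch only: the component names and dependencies below are invented for illustration, using Python’s standard-library `graphlib`.

```python
from graphlib import TopologicalSorter

# Hypothetical dependency maps (invented for illustration). In a legacy
# deployment, components must come up in a strict, coordinated order;
# in a microservice deployment, most services roll out independently.
legacy_deps = {
    "database_schema": set(),
    "app_server":      {"database_schema"},
    "message_bus":     {"database_schema"},
    "web_tier":        {"app_server", "message_bus"},
    "smoke_tests":     {"web_tier"},
}
microservice_deps = {
    "cart_service":   set(),
    "search_service": set(),
    "ui_service":     set(),
}

# static_order() yields nodes so that every dependency comes first.
legacy_order = list(TopologicalSorter(legacy_deps).static_order())
micro_order = list(TopologicalSorter(microservice_deps).static_order())

print(legacy_order)  # a strict sequence: each step waits on earlier ones
print(micro_order)   # no edges, so any order is safe; no coordination needed
```

The legacy graph forces ordering (and therefore coordination between the teams owning each node), while the microservice graph has no edges at all, which is exactly why the developer can promote a single service on their own.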
Hopefully I’ve given you enough of an understanding of how the technical stack changes everything, even though the use case for both cloud-native and legacy applications in our discussion is the same: continuous delivery and deployment.
This brings me to our next differentiator: the persona.
- In the cloud-native world a shift has occurred: since the individual components are a lot more robust and a lot less dependent on the rest of the application, developers actually handle the entire release process across dev and test; production sometimes still requires approval, but technically it is just another stage. This is where the term “DevOps” was born, and the new world brings with it a new persona who is a combination of development and operations. For the record, operations people are not redundant in this case; their responsibilities have shifted (beyond the scope of this blog, but think scaling, monitoring, security, and more) and no longer include a great stake in deployment. Note that this is largely due to the loosely coupled nature of the new stack: the fact that many services can evolve without dependencies on other teams is what makes it possible (and for the record, even in a serverless environment this is never 100% true, but it is closer to it).
- In the legacy world, application components are developed by different teams on different infrastructures (databases, application servers, message buses, and packaged applications). Release and deployment involve a lot more dependency management and careful coordination to release safely, not to mention that the cost of failure is potentially higher, because the faster rollback/roll-forward strategies for remediation are a lot more complicated.
So where is the catch?
The “catch” is that for the enterprise there is no “either-or” reality. Every enterprise in the world is hybrid when it comes to its overall technology stack: every enterprise has some cloud-native applications and some legacy applications, and in many cases there is a dependency between the cloud-native applications and the legacy backend. So when you consider the original use case for continuous delivery, your enterprise-level strategy cannot be complete without considering both of your stacks. It is still the same use case, it’s all continuous delivery, but with a requirement that is wider than either developer-managed cloud-native applications or legacy applications with a clearer development/operations separation. In the enterprise world, a business requirement often “drags” behind it some work in cloud-native applications and some work in legacy ones, and a strong capability to manage your entire landscape with visibility and efficiency is exactly what our Continuous Delivery Director is focused on delivering. The solution was built specifically for the enterprise reality of needing to scale agility across multiple technology stacks and personas. Its underlying premise is building, monitoring, and continuously testing and optimizing hybrid pipelines; we boosted it with machine learning capabilities that underpin the entire process, but kept the underlying deployment-engine architecture as open as we could. And this is where Spinnaker actually comes in handy for us across many accounts. We love it because it provides a clean way for us to support the hybrid enterprise, because it acts as a gateway to the multicloud world for the enterprise, and because it plays nicely inside the boundaries of scaling agile and continuous delivery practices in an enterprise environment.