A few months ago, we started looking at CD (Continuous Delivery) tools to replace a homemade CD tool. After going through a few of them, we ended up choosing Spinnaker because it is feature-rich and did what we wanted and much more.
Before we started this journey, we needed to know what we were looking for. We had a few requirements that the system needed to fulfill for us to achieve the CD pipeline we wanted:
- Render Helm charts
- Deploy to multiple Kubernetes clusters
- The pipeline can take multiple sources:
  - Helm chart
  - Docker image
  - Helm value files from a different repo
- Integration with other systems:
  - Docker registry
  - GitHub hooks
- Control access to Spinnaker using OAuth 2.0/OIDC
- Open source
Although some of the requirements were nice-to-have, Spinnaker fulfilled all of them.
Spinnaker was initially developed by Netflix but was picked up early and extended by Google. Today, Spinnaker is maintained and improved by multiple companies, the three biggest being Netflix, Google and Microsoft. It supports multiple cloud providers like AWS, GCP, Kubernetes, Azure and Oracle. Spinnaker is composed of eleven independent microservices, and you can find information about the Spinnaker architecture here.
After reading about Spinnaker, the only downside I could see is that it can be complicated to set up. The good news is that there is now a package in the Helm stable repository that simplifies the setup considerably.
Spinnaker has a configuration service called Halyard that manages the lifecycle of the Spinnaker microservices. Because we wanted to run the CD tool in Kubernetes, we tried installing Spinnaker both with the Halyard CLI and with the package from the Helm stable repository.
We ended up using Helm because we are familiar with that package manager and the configuration is done via a value file that we store in GitHub. By using Helm and value files, we automatically get configuration history, which in our opinion is critical.
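As a rough sketch, the value file for the stable/spinnaker chart lets you pin the Spinnaker version, point Spinnaker at a Docker registry, and configure the storage backend. The key names below follow the chart's default values at the time of writing, but the registry, bucket and region are placeholders, not our real configuration:

```yaml
# Fragment of a value file for the stable/spinnaker Helm chart (sketch).
# All concrete values here are placeholders.
halyard:
  spinnakerVersion: 1.19.4        # the Spinnaker release Halyard deploys

dockerRegistries:
  - name: our-registry            # registry Spinnaker monitors for new tags
    address: https://index.docker.io
    repositories:
      - myteam/myapp

s3:
  enabled: true
  bucket: my-spinnaker-config     # versioned bucket used as the backend store
  region: eu-west-1
```

Because this file lives in GitHub, every configuration change is a reviewable commit.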
We use an S3 bucket with versioning enabled as our backend; it stores pipelines, pipeline templates, applications and other information. The secrets needed for integration with other services are kept in AWS Systems Manager Parameter Store and transformed into Kubernetes secrets using kubernetes-external-secrets, created by GoDaddy.
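The GoDaddy controller watches ExternalSecret custom resources and materializes them as regular Kubernetes secrets. A minimal sketch, assuming a hypothetical SSM parameter path and secret name:

```yaml
# Sketch of an ExternalSecret for kubernetes-external-secrets (GoDaddy).
# The parameter path and names below are made-up examples.
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: spinnaker-oauth
  namespace: spinnaker
spec:
  backendType: systemManager              # AWS Systems Manager Parameter Store
  data:
    - key: /spinnaker/oauth-client-secret # parameter in SSM
      name: client-secret                 # key in the resulting Kubernetes Secret
```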
Halyard has backup and restore functionality for configuration, but it is not automatic. A Halyard backup creates a tarball with the configuration, including secrets, so we would have to keep this tarball very safe. As I mentioned earlier, we keep the Halyard config in a Helm value file in GitHub and secrets in AWS SSM, so I have not found any reason to take a backup of the Halyard configuration.
When deploying, we have three resources coming together: a Helm chart, a value file and a Docker image. Because value files usually differ between environments, we have three value files, one for each environment, and we store them in a separate git repository. Before deploying to each environment, we need to bake the Helm chart, image tag and value file together. The result of this bake is a base64-encoded artifact containing the Kubernetes manifests for the application.
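In pipeline terms, this rendering is a Bake (Manifest) stage. The fragment below is a sketch of how such a stage looks in the pipeline JSON; the artifact ids, namespace and override key are assumptions, and the exact fields can vary between Spinnaker versions:

```json
{
  "type": "bakeManifest",
  "name": "Bake (staging)",
  "templateRenderer": "HELM2",
  "inputArtifacts": [
    { "account": "github-artifacts", "id": "helm-chart" },
    { "account": "github-artifacts", "id": "staging-values" }
  ],
  "overrides": {
    "image.tag": "${trigger['tag']}"
  },
  "outputName": "myapp",
  "namespace": "staging"
}
```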
Spinnaker can monitor your Docker registry for a new tag of a particular image. We use that functionality to trigger a pipeline, and we have a naming convention for tags. After configuring all the required artifacts, we have two stages that decide, based on the Docker tag, whether this is a release. If it is not a release, it is either a standard deployment, usually triggered by changes to the main branch, or a feature deployment; the Helm chart can deploy a completely separate version of the application as a feature.
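Sketched in pipeline JSON, the registry trigger and the release check could look roughly like this; the account, repository and `release-` tag convention are assumptions:

```json
{
  "triggers": [
    {
      "type": "docker",
      "enabled": true,
      "account": "our-registry",
      "organization": "myteam",
      "repository": "myteam/myapp",
      "tag": ".*"
    }
  ],
  "stages": [
    {
      "type": "checkPreconditions",
      "name": "Is this a release?",
      "preconditions": [
        {
          "type": "expression",
          "context": {
            "expression": "${trigger['tag'].startsWith('release-')}"
          }
        }
      ]
    }
  ]
}
```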
We create a release using GitHub releases, and we have a GitHub webhook and a Drone build listening for tag events. The release build is relatively simple: we take the container tagged with the current git SHA of the code and re-tag it with the name of the release, following the naming convention. We deploy this new release to a staging environment, and a manual approval decides whether it should go to production; if so, we bake for production and deploy the application. The same container is moved through all environments: a build-once, deploy-many approach.
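The re-tag step in the Drone build can be sketched like this; the registry path, image name and environment variables are assumptions, and the docker commands are only echoed to keep the sketch side-effect free:

```shell
# Sketch of the release re-tag step (all names are placeholders).
GIT_SHA="${DRONE_COMMIT_SHA:-abc1234}"      # sha the container was built from
RELEASE="${DRONE_TAG:-v1.4.0}"              # GitHub release tag
REGISTRY="registry.example.com/myteam"      # hypothetical registry path

SRC="${REGISTRY}/myapp:${GIT_SHA}"
DST="${REGISTRY}/myapp:release-${RELEASE}"  # tag convention Spinnaker matches on

# Echoed rather than executed in this sketch:
echo "docker pull ${SRC}"
echo "docker tag ${SRC} ${DST}"
echo "docker push ${DST}"
```

Pushing the `release-` tag is what makes the Docker trigger recognize the pipeline run as a release.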
Image of Spinnaker pipeline
Spinnaker also has pipeline templates that you can create and manage using the spin CLI. Templates can take configuration variables, for example which Helm chart should be used, the Docker image, and the Slack channel that receives notifications. From a pipeline template you can create other pipelines, but you cannot modify pipelines created from a template. That limitation has its pros and cons. Teams might want to use the template but add some customization to their pipeline, which can be a downside for some. On the other hand, if you make a change to the template, that change affects all pipelines created from it, and you have a single source of truth for all of them.
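A sketch of what a v2 pipeline template with such variables can look like; the id, names and variable set are made up, and the exact schema may differ between Spinnaker versions. A template like this would be saved with `spin pipeline-templates save --file template.json`:

```json
{
  "schema": "v2",
  "id": "myteam-standard-deploy",
  "metadata": {
    "name": "Standard deploy",
    "description": "Bake and deploy through staging and production",
    "scopes": ["global"]
  },
  "variables": [
    { "name": "helmChart", "type": "string", "description": "Helm chart to bake" },
    { "name": "dockerImage", "type": "string", "description": "Image to deploy" },
    { "name": "slackChannel", "type": "string", "description": "Channel for notifications" }
  ],
  "pipeline": {}
}
```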
Because Spinnaker integrates easily with Slack, we configure the pipeline to send notifications about build status and when a new release is ready for production. We also integrate Spinnaker with DataDog to get an overview of build status and failure rates. This can be very beneficial: if you start seeing errors, you can also see that a new version was deployed.
Spinnaker is a very powerful CD tool with many integrations and deployment options, and I would recommend trying it out if you are looking for a new CD tool. Just be ready to spend some time configuring and setting it up. As I mentioned earlier, it can be complicated, but it is worth it in the end.