
How to Rename a Helm Release

Savithru Lokanath

Options

There were two theoretically possible solutions that would allow us to rename an existing release without causing any service disruption.

Let’s take a closer look at the approaches:

  1. Trick the datastore

The first option was to modify the datastore (i.e., ConfigMaps in Helm v2 and Secrets in Helm v3) that stores the release record by replacing the existing (incorrect) release name string with the desired (correct) value. Helm v3 stores this record as a gzipped, double base64-encoded Secret in the release's namespace.

## GET RELEASE INFO
$ kubectl get secret -n <NAMESPACE> sh.helm.release.v1.<RELEASE-NAME>.v1 -o json | jq -r ".data.release" | base64 -D | base64 -D | gzip -d > release.json
## REPLACE RELEASE NAME WITH DESIRED NAME & ENCODE
$ sed -i "s/<RELEASE-NAME>/<NEW-RELEASE-NAME>/g" release.json
$ DATA=`cat release.json | gzip -c | base64 | base64`
## PATCH THE RELEASE
$ kubectl patch secret -n <NAMESPACE> sh.helm.release.v1.<RELEASE-NAME>.v1 --type='json' -p="[{\"op\":\"replace\",\"path\":\"/data/release\",\"value\":\"$DATA\"}]"
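Before re-encoding, it helps to sanity-check that the decoded payload is valid JSON and carries the release name you expect. A minimal check, assuming jq is available and the standard field layout of a Helm v3 release object:

## VERIFY DECODED RELEASE PAYLOAD
$ jq -r ".name" release.json
<RELEASE-NAME>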

With this approach, on multiple attempts, we found that the decoding/re-encoding was thrown off by escape characters, binary data, and the like: in one case we couldn't upgrade the release after changing the name, and in another we lost a release's information and had to restore from backup. The unpredictable results did not inspire confidence, so we dropped this approach, keeping it only as a last resort.

  2. Orphan & Adopt

The second approach we experimented with was simpler and more deterministic; it didn't require the complex process of modifying the datastore. Instead, we disconnect the Kubernetes resources from the incorrectly named Helm release (orphan) and later have a new Helm release with the correct name start managing those resources (adopt). Sounds simple, right? Voila!

Let’s walk through the steps with an example. Assume that we have an incorrectly named release called “world-hello.” We’ll have to rename this to something more meaningful, such as “hello-world.”

  • First things first: we use the Helm release name in the labelSelector that determines which backend pods the Kubernetes service (kube-proxy) directs traffic to. Since we are renaming the release, the correctly named new release will be installed, and the Kubernetes service will immediately start proxying traffic to the new ReplicaSet's pods while they are still booting.
    The service would be unavailable to our customers during that window. The application pods typically take about 20–30s to boot, and we can't afford a disruption that long. To prevent this, we decided to remove the release name from the labelSelector field in the service spec.
Fig1. Remove the release label from the service’s selector field
## REMOVE RELEASE LABEL
$ git diff templates/service.yaml
   app: {{ .Values.app.name }}
-  release: {{ .Release.Name }}
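After this change is rolled out, you can confirm that the Service still selects the running pods by matching on the app label alone. A quick check, using resource and label names that follow this example and may differ in your chart:

## VERIFY SERVICE STILL HAS BACKENDS
$ kubectl get endpoints -n dev hello-world
$ kubectl get pods -n dev -l app=hello-world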
  • Next, let us follow the official steps to migrate the release from Helm v2 to Helm v3 without correcting the name. Once done, issue an upgrade using the new client to validate that the resources are now managed by Helm v3.
    The upgrade step will also add the label app.kubernetes.io/managed-by=Helm to the resources managed by the release. Without this label on the resources, the release renaming will fail.
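Before running the conversion for real, the 2to3 plugin also supports a dry run, which is a low-risk way to preview what will happen (a sketch; the flags mirror the command below):

## OPTIONAL: PREVIEW THE MIGRATION
$ helm3 2to3 convert world-hello --release-versions-max 1 --dry-run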
## MIGRATE RELEASE FROM HELM v2 TO HELM v3
$ helm3 2to3 convert world-hello --release-versions-max 1 -n dev
2020/11/12 19:06:44 Release “world-hello” will be converted from Helm v2 to Helm v3.
2020/11/12 19:06:44 [Helm 3] Release “world-hello” will be created.
2020/11/12 19:06:46 [Helm 3] ReleaseVersion “world-hello.v1” will be created.
2020/11/12 19:06:47 [Helm 3] ReleaseVersion “world-hello.v1” created.
2020/11/12 19:06:47 [Helm 3] Release “world-hello” created.
2020/11/12 19:06:47 Release “world-hello” was converted successfully from Helm v2 to Helm v3.
2020/11/12 19:06:47 Note: The v2 release information still remains and should be removed to avoid conflicts with the migrated v3 release.
2020/11/12 19:06:47 v2 release information should only be removed using `helm 2to3` cleanup and when all releases have been migrated over
## LIST HELM v3 RELEASE
$ helm3 ls -n dev
NAME          NAMESPACE   REVISION
world-hello   dev         1
## UPGRADE HELM v3 RELEASE
$ helm3 upgrade --install world-hello <CHART> -n dev
Release “world-hello” has been upgraded. Happy Helming!
NAME: world-hello
LAST DEPLOYED: Thu Nov 12 20:06:02 2020
NAMESPACE: dev
STATUS: deployed
REVISION: 2
TEST SUITE: None
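To confirm that the upgrade stamped the resources with the required label, spot-check one of them. A quick check (the Deployment name follows this example; the command should print Helm if the label is present):

## VERIFY THE managed-by LABEL
$ kubectl get deploy -n dev hello-world -o jsonpath='{.metadata.labels.app\.kubernetes\.io/managed-by}'
Helm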
  • Now that we’ve validated that the resources can be managed by Helm v3, let’s begin adopting the existing resources. We need to add two annotations and a label to every resource that should be adopted by the new (correctly named) Helm v3 release. This metadata tells Helm v3 that the new release should now manage these resources.

NOTE: Up to this point, the Kubernetes resources have been managed by the incorrectly named Helm release that we migrated from v2 to v3.

## LABELS TO BE ADDED
app.kubernetes.io/managed-by=Helm
## ANNOTATIONS TO BE ADDED
meta.helm.sh/release-name=<NEW-RELEASE-NAME>
meta.helm.sh/release-namespace=<NAMESPACE>
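The upgrade in the previous step should already have added the app.kubernetes.io/managed-by=Helm label; if it is missing on any resource, it can be added with a loop analogous to the annotation loops below (a sketch using this example's resource names):

## ADD managed-by LABEL (only if missing)
$ for i in deploy cm sa svc role rolebinding; do kubectl label -n dev $i hello-world app.kubernetes.io/managed-by=Helm --overwrite; done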
## ADD RELEASE NAME ANNOTATION
$ for i in deploy cm sa svc role rolebinding; do kubectl annotate -n dev $i hello-world meta.helm.sh/release-name=hello-world --overwrite; done
deployment.extensions/hello-world annotated
configmap/hello-world annotated
serviceaccount/hello-world annotated
service/hello-world annotated
role.rbac.authorization.k8s.io/hello-world annotated
rolebinding.rbac.authorization.k8s.io/hello-world annotated
## ADD RELEASE NAMESPACE ANNOTATION
$ for i in deploy cm sa svc role rolebinding; do kubectl annotate -n dev $i hello-world meta.helm.sh/release-namespace=dev --overwrite; done
deployment.extensions/hello-world annotated
configmap/hello-world annotated
serviceaccount/hello-world annotated
service/hello-world annotated
role.rbac.authorization.k8s.io/hello-world annotated
rolebinding.rbac.authorization.k8s.io/hello-world annotated
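Before handing the resources over, it's worth spot-checking that the adoption metadata landed where expected. A quick check on the Deployment (names follow this example):

## VERIFY ADOPTION ANNOTATIONS
$ kubectl get deploy -n dev hello-world -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-name}{"\n"}{.metadata.annotations.meta\.helm\.sh/release-namespace}{"\n"}'
hello-world
dev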
  • Once the annotations and labels are added to the Kubernetes resources, install the release with the correct name to sign off on the adoption process. Once the install completes, all the resources are actively managed by the correctly named release “hello-world.”
  • Because we have rolling deployments, the ReplicaSet managed by the incorrectly named release will be orphaned and will need to be cleaned up manually.
## INSTALL HELM v3 RELEASE WITH CORRECT NAME
$ helm3 upgrade --install hello-world <CHART> -n dev
Release “hello-world” does not exist. Installing it now.
NAME: hello-world
LAST DEPLOYED: Thu Nov 12 20:06:02 2020
NAMESPACE: dev
STATUS: deployed
REVISION: 1
TEST SUITE: None
## LIST HELM v3 RELEASE
$ helm3 ls -n dev
NAME          NAMESPACE   REVISION
world-hello   dev         2
hello-world   dev         1
## LIST REPLICASET MANAGED BY INCORRECTLY NAMED RELEASE
$ kubectl get rs -n dev -l release=world-hello
NAME                    DESIRED   CURRENT   READY   AGE
hello-world-8c5959d67   2         2         2       30m
## LIST REPLICASET MANAGED BY CORRECTLY NAMED RELEASE
$ kubectl get rs -n dev -l release=hello-world
NAME                     DESIRED   CURRENT   READY   AGE
hello-world-7f88445494   2         2         2       2m
  • Since we also removed the release label from the service’s labelSelector, traffic is proxied to the ReplicaSets (pods) managed by both the correctly and incorrectly named releases, i.e. “hello-world” and “world-hello.”
  • Now we can start cleaning up orphaned resources and the datastore containing the incorrectly named release.
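A hedged sketch of that cleanup, using this example's names: delete the orphaned ReplicaSet by its release label, and remove the old release's Helm v3 Secrets directly rather than running helm uninstall, since uninstalling “world-hello” could delete the very resources that “hello-world” has just adopted.

## CLEAN UP THE ORPHANED REPLICASET
$ kubectl delete rs -n dev -l release=world-hello
## REMOVE DATASTORE ENTRIES FOR THE INCORRECTLY NAMED RELEASE
$ kubectl delete secret -n dev -l owner=helm,name=world-hello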