Simplify Kubernetes deployments with Helm (Part 3) – Creating configmaps and secrets

21 Sep 2017

In Part 1 of this blog series we introduced you to Helm. Then, in Part 2, we saw how to specify parameters for a particular Helm release and showed examples that could be applied to other Kubernetes components like StatefulSets and DaemonSets. Earlier we created the files and templates for deployments and services, which are crucial building blocks for deploying any application on Kubernetes.

But more often than not your application will also consist of configuration files and you will need to take care of passwords or other secrets in order to deploy a complete application. Kubernetes provides configmaps and secrets for these purposes, and the beauty of Helm Charts — as we have seen in the earlier parts of this series — is that it wraps all these components into one single package.

Configmaps

Before we go on and talk about how to create configmaps and secrets with Helm, I want to draw your attention to how Helm organizes its information. So just to recap, Helm has two parts: a client called Helm and a service named Tiller running on the Kubernetes cluster. Whenever we run the Helm install command, Tiller needs to store the state of the release of the application. Tiller uses configmaps to store information for each deployment. To find out what it actually stores, we can run the following command:

kubectl get configmaps joyous-pike.v1 -n kube-system -o yaml

apiVersion: v1
data:
  release: H4sIAAAAAAAC/+xazW………………………………………fzsAAA==
kind: ConfigMap
metadata:
  creationTimestamp: 2017-07-09T06:09:43Z
  labels:
    CREATED_AT: "1499580583"
    NAME: joyous-pike
    OWNER: TILLER
    STATUS: DEPLOYED
    VERSION: "1"
  name: joyous-pike.v1
  namespace: kube-system
  resourceVersion: "203267"
  selfLink: /api/v1/namespaces/kube-system/configmaps/joyous-pike.v1
  uid: 335e6703-646d-11e7-821b-08002744cb40

We redacted most of the information in the release field under data, which contains a base64-encoded, gzipped archive of the entire release record.
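You can reverse this encoding locally to see what Tiller stores. Here is a minimal sketch that round-trips a made-up stand-in payload through the same two steps (the real record is a serialized release object, not plain JSON):

```shell
# Tiller stores the release as a gzipped, then base64-encoded blob.
# Simulate the encoding with an illustrative stand-in payload:
release='{"name":"joyous-pike","version":1,"status":"DEPLOYED"}'
encoded=$(printf '%s' "$release" | gzip -c | base64 | tr -d '\n')

# Reversing the two steps recovers the original payload:
printf '%s' "$encoded" | base64 --decode | gunzip -c
```

The same decode pipeline, fed from `kubectl get configmaps joyous-pike.v1 -n kube-system -o jsonpath='{.data.release}'`, unpacks the real release record.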

Configmaps themselves are stored in etcd, so it is advisable to remove Helm deployments when they are no longer needed. However, when you just run the command helm delete joyous-pike, only the status is changed from DEPLOYED to DELETED. To remove the Helm release completely, use the --purge option: helm delete joyous-pike --purge.

After this short excursion into the internals of Tiller, let’s go back to work on configmaps for our applications.

Usually, when we build our container images, we don't want to bake the configuration files into them; instead, the configuration should be adjustable at deployment time. Kubernetes added configmaps for this purpose. You can create a volume from a configmap and mount it into a pod. Configuration files can become complex rather fast, and they come in many shapes and forms: json, xml, yaml and even proprietary formats. Examples of complex config files include nginx configs, Logstash filters and Java log4j files.
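As a sketch of that mechanism, the following pod spec mounts a configmap as a volume (all names here are illustrative and not part of the chart we build below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: some-app
spec:
  containers:
    - name: some-app
      image: some-app:1.0            # illustrative image name
      volumeMounts:
        - name: config-volume
          mountPath: /etc/some-app   # config files appear here
  volumes:
    - name: config-volume
      configMap:
        name: some-app-config        # each data key becomes a file
```

Each key in the configmap's data section shows up as a file under the mount path, so the application can read its configuration the same way it would outside Kubernetes.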

In Helm, configmap templates are treated just like the templates we learned about in Part 2 of our blog series and, as such, we can use all the functions we used there for configmaps, as well. Helm relies heavily on the go template language and makes use of functions provided by the Sprig template library. So there is plenty to choose from and we discuss some of the options here.

There are a few ways to create configmaps. One option is to have the templates rendered by another tool, such as Ansible with Jinja2, and then have Helm wrap the result in a configmap. In that case, you can use code similar to the following example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "fullname" . }}
  labels:
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    app: {{ template "name" . }}
data:
  {{ (.Files.Glob "files/*").AsConfig | indent 2 }}

Here we grab all the files from the directory called files and store them, unchanged, in the configmap. This approach also works well if your config files are static.
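For this to work, the chart needs a files directory next to templates; the glob path is relative to the chart root. A layout along these lines (file names are illustrative):

```
mychart/
├── Chart.yaml
├── values.yaml
├── files/
│   ├── some-app.properties
│   └── some-log4j2.xml
└── templates/
    └── configmap.yaml
```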

But Helm provides other ways to create config files. Initially, the only option available was to create a single template and add all the necessary config files for your application pod into this one file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "fullname" . }}
  labels:
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    app: {{ template "name" . }}
data:
  some-log4j2.xml: |-
    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration>
     ...
    </Configuration>

  some-app.properties: |-
    ...
    ## note: keys with hyphens such as "some-app" cannot be accessed
    ## with dot notation in Go templates, hence the index function
    tcp_delay_sec={{ index .Values "some-app" "tcpDelaySec" }}

    num_of_servers={{ index .Values "some-app" "num_server" }}

    server1_id={{ index .Values "some-app" "server1" "id" }}
    server1_ip={{ index .Values "some-app" "server1" "ip" }}
    server1_tcp_port={{ index .Values "some-app" "server1" "tcpPort" }}
    server1_udp_port={{ index .Values "some-app" "server1" "udpPort" }}
    ...
    ## CASSANDRA STUFF BELOW THIS LINE
    ...
    cassandra_nodes={{ join "," .Values.cassandraProperties.nodes  }}
    cassandra_ssl_enabled={{ .Values.cassandraProperties.sslEnabled | title }}
    ...

Here we combined the content of two templates, an xml template and a text properties template, into one Helm configmap file. We show only an abbreviated version of an actual Helm template we use in one of our projects. Notice that we use two sprig functions at the end of the example. The join function combines the values of the yaml list cassandraProperties.nodes into a single string, separating the values with commas. The title function capitalizes the value, which ensures that a boolean renders as True or False rather than true or false.
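For reference, a values.yaml matching the template above could look like this (the concrete values are illustrative):

```yaml
# Illustrative values.yaml for the properties template above
some-app:
  tcpDelaySec: 5
  num_server: 2
  server1:
    id: 1
    ip: 10.0.0.11
    tcpPort: 7000
    udpPort: 7001
cassandraProperties:
  nodes:                      # join renders this list as one string
    - cassandra-0.cassandra
    - cassandra-1.cassandra
  sslEnabled: true            # title renders this as True
```

With these values, join produces cassandra-0.cassandra,cassandra-1.cassandra. Note that a hyphenated key such as some-app cannot be dereferenced with plain dot notation in Go templates; it needs the index function or a hyphen-free name.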

The above example works nicely if you have one or two smaller configuration files that you want to include in a configmap, but it can quickly become complex and difficult to maintain and debug. Thankfully, the Helm committers recently added the tpl template function, which is available as of Helm version 2.5. With it, you can rewrite the example above into something like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "fullname" . }}-config
  labels:
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}

data:
{{ (tpl (.Files.Glob "configs/*").AsConfig . ) | indent 2 }}

All your configuration templates will now reside in the configs directory, each in a separate file, which makes the maintenance of complex templates much easier. And if you want to debug a single template, just replace the wildcard "*" with the file name of the template and run helm template as described in Part 2 of our blog series.

Secrets

Kubernetes secrets are very similar to configmaps as far as Helm is concerned, except that they are templates of kind Secret and the data has to be base64-encoded. We won't go into too much detail on secrets management, but we may revisit this topic in a separate article. For now, we show two examples of secrets you may encounter when working with Kubernetes. In the first example we handle some certificates that our sample app requires for secure communication:

apiVersion: v1
kind: Secret
metadata:
  name: {{ template "fullname" . }}-certificate
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
type: Opaque
data:
  "certificate.pem": |-
    {{ .Files.Get "certs/certificate.pem" | b64enc }}
  "truststore.jks": |-
    {{ .Files.Get "certs/truststore.jks" | b64enc }}

Here we simply get the contents of two files, a pem certificate and a Java truststore from the certs directory, and encode them on the fly with the b64enc sprig function. Alternatively, you could store the data already encoded in the template, as in the next example, where we create a Docker login secret of type kubernetes.io/dockerconfigjson. This type of secret is required when you pull container images from a private registry:

apiVersion: v1
kind: Secret
metadata:
  name: {{ template "config" . }}-dockerlogin
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ewogICAgICJhdXRocyI6IHsKICAgICAgICAgICJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOiB7CiAgICAgICAgICAgICAgICJhdXRoIjogIlZlcnlWZXJ5U2VjcmV0IiwgCiAgICAgICAgICAgICAgICJlbWFpbCI6ICJzZWNyZXRAZXhhbXBsZS5jb20iCiAgICAgICAgICB9CiAgICAgfQp9Cg==
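These payloads are plain base64, the same transformation b64enc performs in the certificate example; you can inspect them with the base64 command line tool. Decoding the .dockerconfigjson value above reveals the Docker login config it carries:

```shell
# Decode the pre-encoded payload from the secret above to see
# the docker login config inside it:
echo 'ewogICAgICJhdXRocyI6IHsKICAgICAgICAgICJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOiB7CiAgICAgICAgICAgICAgICJhdXRoIjogIlZlcnlWZXJ5U2VjcmV0IiwgCiAgICAgICAgICAgICAgICJlbWFpbCI6ICJzZWNyZXRAZXhhbXBsZS5jb20iCiAgICAgICAgICB9CiAgICAgfQp9Cg==' | base64 --decode
```

Running base64 without --decode over such a JSON file produces the encoded form to paste into the template.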

In this part we have seen how Helm can help us create the configuration files for our Dockerized application. The configmaps and secrets are then mounted into the pods as part of a Kubernetes deployment, for example. We also showed some of the very powerful functions that Helm provides, partly via the sprig library, to manipulate data and to customize the Kubernetes templates that deploy our microservices.

Hopefully you learned a bit from this blog series. I encourage you to go back and review the examples from the earlier blogs, Part 1 and Part 2. We’ll be writing more about containerizing applications in future blogs. Comment below on topics you’d like to read more about.
