Kubernetes ConfigMaps: Nuances to Know

Note: this is not a full-fledged guide, but rather a reminder/hint for those who already use ConfigMaps in Kubernetes or are just preparing their application to work with them.



Background: From rsync to ... Kubernetes


What happened before? In the era of "classical administration", in the simplest setup the config file lived right next to the application (or in the repository, if you prefer). It was simple: we set up elementary delivery (CD) for our code together with its config. Even a makeshift rsync-based scheme could be called a rudimentary form of CD.

When the infrastructure grew, different configs were required for different environments (dev/stage/production). The application was taught to understand which config to use, receiving it via startup arguments or environment variables. CD became even more involved with the arrival of such useful tools as Chef/Puppet/Ansible. Servers acquired roles, and environments stopped being described in scattered places: we arrived at IaC (Infrastructure as Code).

What followed? If a team saw critical advantages in Kubernetes and even came to terms with the need to adapt applications to this environment, migration happened. Along the way there were plenty of nuances and differences in architecture to handle, but once the bulk of the work was done, the long-awaited application was running in K8s.

Once there, we can still use configs prepared in the repository next to the application, or pass ENV variables to the container. However, in addition to these methods, ConfigMaps are also available. This K8s primitive, together with a template engine (such as Helm's Go templates), lets you render configs much like HTML pages and have the application pick up a changed config without a restart. With ConfigMaps, there is no longer any need to keep 3+ configs for different environments and track the relevance of each one.

A general introduction to ConfigMaps is easy to find elsewhere; in this article I will focus on certain specifics of working with them.

Simple ConfigMaps


What do configs look like in Kubernetes, and what do they gain from Go templates? For example, here is an ordinary ConfigMap for an application deployed from a Helm chart:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app
data:
  config.json: |
    {
      "welcome": {{ pluck .Values.global.env .Values.welcome | quote }},
      "name": {{ pluck .Values.global.env .Values.name | quote }}
    }

Here the values substituted into .Values.welcome and .Values.name are taken from the values.yaml file. Why exactly from values.yaml, and how does the Go template engine work here? We have covered these details in more depth before.

The pluck call helps select the needed string from a map:

$ cat .helm/values.yaml 
welcome:
  production: "Hello"
  test: "Hey"
name:
  production: "Bob"
  test: "Mike"

Moreover, you can pluck not only individual strings but entire fragments of the config.

For example, ConfigMap might be like this:

data:
  config.json: |
    {{ pluck .Values.global.env .Values.data | first | toJson | indent 4 }}

... and in values.yaml, the following contents:

data:
  production:
    welcome: "Hello"
    name: "Bob"

The global.env used here holds the name of the environment. By substituting this value during deployment, you can render ConfigMaps with different contents. first is needed because pluck returns a list whose first element contains the desired value.
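
With global.env equal to production, the file contents come out as compact JSON (toJson orders keys alphabetically), approximately:

{"name":"Bob","welcome":"Hello"}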

When there are a lot of configs


One ConfigMap can contain several config files:

data:
  config.json: |
    {
      "welcome": {{ pluck .Values.global.env .Values.welcome | first | quote }},
      "name": {{ pluck .Values.global.env .Values.name | first | quote }}
    }
  database.yml: |
    host: 127.0.0.1
    db: app
    user: app
    password: app

You can even mount each config separately:

        volumeMounts:
        - name: app-conf
          mountPath: /app/configfiles/config.json
          subPath: config.json
        - name: app-conf
          mountPath: /app/configfiles/database.yml
          subPath: database.yml

... or pick up all the configs at once with the directory:

        volumeMounts:
        - name: app-conf
          mountPath: /app/configfiles

If you change the description of the Deployment resource during a deploy, Kubernetes will create a new ReplicaSet, scaling the old one down to 0 and the new one up to the specified number of replicas. (This is true for the RollingUpdate deployment strategy.)

Such a change leads to the pods being re-created from the new description. For example: the image was image: my-registry.example.com:v1 and became image: my-registry.example.com:v2. It does not matter what exactly we changed in our Deployment description: what matters is that the change caused the ReplicaSet (and, as a result, the pods) to be re-created. In that case, the new version of the config file is mounted into the new version of the application automatically, and there is no problem.
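
For illustration, even a one-line change in the pod template, such as this hypothetical image bump, is enough to trigger such a rollout:

       containers:
       - name: app
-        image: my-registry.example.com:v1
+        image: my-registry.example.com:v2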

ConfigMap Change Response


When a ConfigMap changes, four scenarios are possible. Let's consider them:

  1. Action: the ConfigMap is changed, and it is mounted into the pod with subPath.
    Result: nothing happens until the pod is restarted.
  2. Action: the ConfigMap is changed, and the pods are deleted manually.
    Result: the newly created pods pick up the new version of the ConfigMap.
  3. Action: the ConfigMap is changed, and its hash is written into an annotation in the Deployment's pod template.
    Result: the ConfigMap change also changes the Deployment, so the pods are re-created automatically with the new config.
  4. Action: the ConfigMap is changed, and it is mounted as a directory (without subPath).
    Result: the file inside the pod is updated without re-creating the pod, and the application can re-read the config on the fly.

Let's analyze each in more detail.

Scenario 1


We only corrected the ConfigMap? The application will not restart: in the case of mounting with subPath, there will be no changes until the pod is manually restarted.

Everything is simple: Kubernetes mounts our ConfigMap into the pod at a specific resource version. Since it is mounted with subPath, no further "influence" on the config takes place.

Scenario 2


Can't update the file without re-creating the pod? Okay: we have 6 replicas in the Deployment, so we can manually delete the pods one by one. The newly created pods will pick up the new version of the ConfigMap.
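
For example (the namespace and label selector here are hypothetical; substitute your own):

$ kubectl -n production delete pod -l app=go-conf-example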

Scenario 3


Tired of performing such operations manually? A solution to this problem is described in Helm tips and tricks:

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]

Thus, the hash of the rendered config is simply written as an annotation into the pod template (spec.template).

Annotations are arbitrary key-value fields in which you can store your own values. If you put one into the template of the future pod (spec.template), the field ends up in the ReplicaSet and the pod itself. Kubernetes will notice that the pod template has changed (since the sha256 of the config has changed) and will start a RollingUpdate in which nothing changes except this annotation.

As a result, we keep the same application version and Deployment description, and essentially just trigger pod re-creation automatically, similar to doing it by hand with kubectl delete pod, but "correctly": with a RollingUpdate.
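
You can observe that rollout as usual (the Deployment name is hypothetical):

$ kubectl -n production rollout status deployment/go-conf-example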

Scenario 4


Perhaps the application already knows how to watch for config changes and reload itself automatically? Here lies an important feature of ConfigMaps...

In Kubernetes, if the config is mounted with subPath, it will not be updated until the pod restarts (see the first three scenarios above). But if you mount the ConfigMap as a directory, without subPath, the container gets a directory whose config stays up to date without any pod restart.

There are other features that are useful to remember:

  • The config file inside the container is updated with some delay: it is not really the file itself that is mounted, but a Kubernetes object.
  • When mounted as a directory, the files inside are symlinks. For comparison, first an example with subPath (regular files):

    $ kubectl -n production exec go-conf-example-6b4cb86569-22vqv -- ls -lha /app/configfiles 
    total 20K    
    drwxr-xr-x    1 root     root        4.0K Mar  3 19:34 .
    drwxr-xr-x    1 app      app         4.0K Mar  3 19:34 ..
    -rw-r--r--    1 root     root          42 Mar  3 19:34 config.json
    -rw-r--r--    1 root     root          47 Mar  3 19:34 database.yml

    And what happens without subPath, when the whole directory is mounted?

    $ kubectl -n production exec go-conf-example-67c768c6fc-ccpwl -- ls -lha /app/configfiles 
    total 12K    
    drwxrwxrwx    3 root     root        4.0K Mar  3 19:40 .
    drwxr-xr-x    1 app      app         4.0K Mar  3 19:34 ..
    drwxr-xr-x    2 root     root        4.0K Mar  3 19:40 ..2020_03_03_16_40_36.675612011
    lrwxrwxrwx    1 root     root          31 Mar  3 19:40 ..data -> ..2020_03_03_16_40_36.675612011
    lrwxrwxrwx    1 root     root          18 Mar  3 19:40 config.json -> ..data/config.json
    lrwxrwxrwx    1 root     root          19 Mar  3 19:40 database.yml -> ..data/database.yml

    Update the config (via a deploy or kubectl edit), wait about two minutes (caches need time to propagate), and voila:

    $ kubectl -n production exec go-conf-example-67c768c6fc-ccpwl -- ls -lha --color /app/configfiles 
    total 12K    
    drwxrwxrwx    3 root     root        4.0K Mar  3 19:44 .
    drwxr-xr-x    1 app      app         4.0K Mar  3 19:34 ..
    drwxr-xr-x    2 root     root        4.0K Mar  3 19:44 ..2020_03_03_16_44_38.763148336
    lrwxrwxrwx    1 root     root          31 Mar  3 19:44 ..data -> ..2020_03_03_16_44_38.763148336
    lrwxrwxrwx    1 root     root          18 Mar  3 19:40 config.json -> ..data/config.json
    lrwxrwxrwx    1 root     root          19 Mar  3 19:40 database.yml -> ..data/database.yml

    Note the changed timestamp in the directory created by Kubernetes.
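
    To check which snapshot directory the symlinks currently resolve to, something like this will do (using the pod from the listing above; assuming readlink is present in the image):

    $ kubectl -n production exec go-conf-example-67c768c6fc-ccpwl -- readlink -f /app/configfiles/config.json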

Change tracking


And finally, here is a simple example of how you can monitor changes in the config.

We will use the following Go application:
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/fsnotify/fsnotify"
)

// Config for our application
type Config struct {
	Welcome string `json:"welcome"`
	Name    string `json:"name"`
}

var (
	globalConfig *Config
)

// LoadConfig - load our config!
func LoadConfig(path string) (*Config, error) {
	configFile, err := os.Open(path)

	if err != nil {
		return nil, fmt.Errorf("Unable to read configuration file %s", path)
	}

	config := new(Config)

	decoder := json.NewDecoder(configFile)
	err = decoder.Decode(config)
	if err != nil {
		return nil, fmt.Errorf("Unable to parse configuration file %s", path)
	}

	return config, nil
}

// ConfigWatcher - watches config.json for changes
func ConfigWatcher() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	done := make(chan bool)
	go func() {
		for {
			select {
			case event, ok := <-watcher.Events:
				if !ok {
					return
				}
				log.Println("event:", event)
				if event.Op&fsnotify.Write == fsnotify.Write {
					log.Println("modified file:", event.Name)
				}
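				// Note: we reload on any event, not only Write. Kubernetes updates
				// a ConfigMap mounted as a directory via a symlink swap, which
				// typically surfaces here as a CHMOD event rather than a Write.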
				globalConfig, _ = LoadConfig("./configfiles/config.json")
				log.Println("config:", globalConfig)
			case err, ok := <-watcher.Errors:
				if !ok {
					return
				}
				log.Println("error:", err)
			}
		}
	}()

	err = watcher.Add("./configfiles/config.json")
	if err != nil {
		log.Fatal(err)
	}
	<-done
}

func main() {
	log.Println("Start")
	globalConfig, _ = LoadConfig("./configfiles/config.json")
	go ConfigWatcher()
	for {
		log.Println("config:", globalConfig)
		time.Sleep(30 * time.Second)
	}
}

... and feed it the following config:

$ cat configfiles/config.json 
{
  "welcome": "Hello",
  "name": "Alice"
}

When run, the log looks like this:

2020/03/03 22:18:22 config: &{Hello Alice}
2020/03/03 22:18:52 config: &{Hello Alice}

Now let's deploy this application to Kubernetes, mounting the config from a ConfigMap into the pod instead of the file from the image. An example Helm chart is available on GitHub:

helm install -n habr-configmap --namespace habr-configmap ./habr-configmap --set 'name.production=Alice' --set 'global.env=production'

And now let's change only the ConfigMap:

-  production: "Alice"
+  production: "Bob"

Update the Helm chart in the cluster, for example, like this:

helm upgrade habr-configmap ./habr-configmap --set 'name.production=Bob' --set 'global.env=production'

What will happen? (The example chart deploys the application in four variants, v1 through v4, matching the four scenarios above.)

  • Applications v1 and v2 did not restart, because from their point of view nothing changed in the Deployment: they still greet Alice.
  • The v3 application restarted, re-read the config, and now greets Bob.
  • The v4 application did not restart. Since its ConfigMap is mounted as a directory (without subPath), the change in the config was noticed and applied on the fly, without restarting the pod. The application did register the change in our simple example; see the event message from fsnotify:

    2020/03/03 22:19:15 event: "configfiles/config.json": CHMOD
    2020/03/03 22:19:15 config: &{Hello Bob}
    2020/03/03 22:19:22 config: &{Hello Bob}
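
A caveat for this approach: because Kubernetes swaps the mounted files through symlinks, fsnotify can lose a watch that was added on a single file after the first update. A sketch of a more robust variant (with the same watcher as above) subscribes to the whole directory instead:

	// Watch the directory rather than an individual file: the ..data
	// symlink swap produces events on the directory, and the watch
	// is not dropped when the old file disappears.
	err = watcher.Add("./configfiles")
	if err != nil {
		log.Fatal(err)
	}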

It is also worth looking at how a similar task, tracking ConfigMap changes, is solved in more mature (simply "real") projects.

Important! It is also worth remembering that everything said above applies equally to Secrets in Kubernetes (kind: Secret): it is no coincidence that they are so similar to ConfigMaps...

Bonus! Third-Party Solutions


If you are interested in tracking changes in configs, there are ready-made utilities for that:

  • jimmidyson / configmap-reload sends an HTTP request when a file has changed. The developer also planned to teach it to send SIGHUP, but the absence of commits since October 2019 leaves those plans in question;
  • stakater / Reloader watches ConfigMaps/Secrets and performs a "rolling upgrade" (as its author calls it) on the resources associated with them.

It is convenient to run such utilities as sidecar containers alongside existing applications. However, if you know these Kubernetes/ConfigMap specifics and edit configs not "live" (via kubectl edit) but only as part of a deployment, then the capabilities of such utilities may seem superfluous, since they duplicate basic functions.
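
For instance, a minimal sketch of a configmap-reload sidecar, reusing the app-conf volume from the examples above (the image tag and webhook URL are assumptions to adapt; check the project's README for the exact flags):

      containers:
      - name: configmap-reload
        image: jimmidyson/configmap-reload:v0.3.0
        args:
        - --volume-dir=/app/configfiles
        - --webhook-url=http://localhost:8080/-/reload
        volumeMounts:
        - name: app-conf
          mountPath: /app/configfiles
          readOnly: true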

Conclusion


With the advent of ConfigMaps in Kubernetes, configs moved to the next stage of their evolution: the use of a template engine brought them flexibility comparable to rendering HTML pages. Fortunately, these complications did not replace existing solutions but complemented them. So for administrators (or rather, even developers) who consider the new features redundant, the good old files are still available.

For those who already use ConfigMaps or are just looking into them, this article has given a brief overview of their essence and the nuances of using them. If you have your own tips & tricks on the topic, I will be glad to see them in the comments.
