Here, we’ll take a look at Kubernetes CronJobs and how to dynamically provision many of them using Helm.
The Kubernetes documentation is a good starting point for configuring a single cron job; the more common scenario, however, is that you have many cron jobs that you want to manage, configure and provision all at once, without having to create a separate chart for each.
Let’s take a look at how we can do that using Helm. For reference, here is the official Kubernetes documentation: https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
We can use the template they provide as our starting point:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox:1.28
              imagePullPolicy: IfNotPresent
              command:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
The end result for us will be similar, however we want our chart to dynamically loop through an array of cron jobs that we will define, and fill in the values for each corresponding job. To do this, let’s create a values.yaml to store our cron job data.
cronJobs:
  - name: "cronjob-1"
    schedule: "0 * * * *"
    command: "echo 'This is cronjob 1'"
    suspend: false
  - name: "cronjob-2"
    schedule: "*/5 * * * *"
    command: "echo 'This is cronjob 2'"
    suspend: false
  - name: "cronjob-3"
    schedule: "*/15 * * * *"
    command: "echo 'This is cronjob 3'"
    suspend: false
The setup is very simple. We define a name, schedule and command for each individual cron job, and we can set the suspend value for each, which allows us to quickly toggle them on or off.
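For example, to pause cronjob-2 you could set its suspend value to true in values.yaml and roll the change out with helm upgrade (the release name my-cronjobs below is just a placeholder), or patch the live object with kubectl for a quick one-off toggle:

# After setting suspend: true for cronjob-2 in values.yaml:
helm upgrade my-cronjobs .

# Or flip it directly on the cluster (note: Helm won't track this change):
kubectl patch cronjob cronjob-2 -p '{"spec":{"suspend":true}}'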
Next, we can move on to our main chart template, cronjobs.yaml. We need to make some adjustments to the official example to implement our loop and to dynamically fetch the values we need. Let’s go through it step by step…
We first begin the loop:
{{- range .Values.cronJobs }}
---
Next, we leave a couple of the main fields as they were:
apiVersion: batch/v1
kind: CronJob
Then we define the metadata.name. Here, when we reference .name, we’re taking the name of the current cron job from our values.yaml array and rendering it with toYaml to avoid any quoting or formatting issues.
metadata:
  name: {{ toYaml .name }}
Now we can move on to the top-level spec. The same principle applies here, but you’ll also notice I’ve added suspend as a new field:
spec:
  schedule: {{ toYaml .schedule }}
  suspend: {{ toYaml .suspend }}
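For reference, rendering the second entry of our array with helm template should produce something like the snippet below. Note how toYaml quotes the schedule: a value beginning with * would otherwise be read as a YAML alias.

metadata:
  name: cronjob-2
spec:
  schedule: '*/5 * * * *'
  suspend: false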
Now we move on to the spec.jobTemplate. It’s worth noting at this point that the spec above belongs to the CronJob itself. When a CronJob is created in K8s, it waits until its scheduled time, at which point it automatically spins up a K8s Job. The Job in turn creates a pod which executes the command, and the pod and Job terminate once complete.
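Once the chart is installed, you can watch that chain happen with a few standard kubectl commands:

# The CronJob objects themselves:
kubectl get cronjobs
# Jobs spawned at each scheduled tick:
kubectl get jobs --watch
# Pods created (and completed) by those Jobs:
kubectl get pods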
jobTemplate:
  spec:
    template:
      spec:
        containers:
          - name: {{ toYaml .name }}
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command: ["/bin/sh", "-c", "{{ toYaml .command }}"]
        restartPolicy: OnFailure
In the above, we simply set the container’s name to that of the corresponding cron job, and its command to the corresponding cron job’s command.
We then finish the loop with:
{{- end }}
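Before installing anything, it’s worth rendering the chart locally to confirm the loop produces one manifest per array entry. This assumes the template is saved as templates/cronjobs.yaml inside your chart:

helm template . --show-only templates/cronjobs.yaml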
You should also be aware that, if you need to access variables defined outside of the cronJobs array in your values.yaml file, you can do so by using $, which allows Helm to find the value outside of the loop.
For example, if you wanted to grab the image tag from outside of your .Values.cronJobs array, you could do the following:
containers:
  - name: {{ toYaml .name }}
    image: busybox:{{ $.Values.image.tag }}
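For that reference to resolve, image.tag needs to live at the top level of values.yaml, alongside (not inside) the cronJobs array. A minimal sketch:

image:
  tag: "1.28"

# cronJobs array as defined earlier...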
Check out the full cronjobs.yaml:
{{- range .Values.cronJobs }}
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ toYaml .name }}
spec:
  schedule: {{ toYaml .schedule }}
  suspend: {{ toYaml .suspend }}
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: {{ toYaml .name }}
              image: busybox:{{ $.Values.image.tag }}
              imagePullPolicy: IfNotPresent
              command: ["/bin/sh", "-c", "{{ toYaml .command }}"]
          restartPolicy: OnFailure
{{- end }}
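Installing the chart should then create all three CronJobs in one go (again, the release name here is just an example):

helm install my-cronjobs .
kubectl get cronjobs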
Oh, and one last heads up. If you’re running HashiCorp Vault, you might find that the pods for your CronJobs aren’t able to terminate. This is common if you’re running a vault-agent sidecar container. You can set the following annotation within your chart to disable the sidecar. In this case, only the vault-agent-init container will run, and the pod will then be able to gracefully terminate.
vault.hashicorp.com/agent-pre-populate-only: "true"
vault.hashicorp.com/agent-pre-populate-only – configures whether an init container is the only injected container. If true, no sidecar container will be injected at runtime of the pod. Enabling this option is recommended for workloads of type CronJob or Job to ensure a clean pod termination.
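Since this is a pod annotation, in our chart it would slot into the pod template’s metadata inside the jobTemplate. A minimal sketch:

jobTemplate:
  spec:
    template:
      metadata:
        annotations:
          vault.hashicorp.com/agent-pre-populate-only: "true"
      spec:
        # containers, restartPolicy etc. as before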
Thanks for reading!