Kubernetes — CronJob

A Kubernetes CronJob that runs a report-generation script every day at 9 AM Eastern time, with a timezone-aware schedule, overlap prevention, bounded job history, and resource-constrained container.


Overview

A Kubernetes CronJob creates Job objects on a repeating schedule defined by a standard cron expression. Each Job, in turn, creates one or more Pods to run the actual workload. CronJobs are ideal for periodic tasks that are not part of the main application lifecycle: database backups, report generation, cache warming, data pipeline ingestion, and cleanup jobs.

Unlike a Deployment, a CronJob's Pods are meant to run to completion and then terminate. The container's restartPolicy must be OnFailure or Never; Always (the default for Deployments) is not permitted inside a Job template, because it would cause the Pod to restart indefinitely even after a successful exit. OnFailure retries the container within the same Pod on failure; Never creates a new Pod on each failure attempt (up to spec.backoffLimit).

The timeZone field requires Kubernetes 1.27 or later. On older clusters, omit this field and note that the schedule runs in UTC. Use crontab.guru to build and verify cron expressions interactively.

Full YAML (copy-paste ready)

cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: daily-report
  namespace: default
spec:
  schedule: "0 9 * * *"
  timeZone: "America/New_York"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report-generator
              image: ghcr.io/myorg/reporter:latest
              command: ["python", "generate_report.py"]
              env:
                - name: DATABASE_URL
                  valueFrom:
                    secretKeyRef:
                      name: my-app-secrets
                      key: database-url
              resources:
                requests:
                  cpu: 50m
                  memory: 64Mi
                limits:
                  cpu: 200m
                  memory: 256Mi

Key sections explained

Cron schedule syntax

The schedule field uses standard five-field cron syntax: minute hour day-of-month month day-of-week. The value "0 9 * * *" means "at minute 0 of hour 9, every day of every month, regardless of day of week" — i.e., daily at 9:00. The asterisk * is a wildcard meaning "every". Common examples: "*/5 * * * *" (every 5 minutes), "0 0 * * 0" (midnight every Sunday), "0 2 1 * *" (2 AM on the first of every month). The value should be quoted in YAML: an unquoted schedule that begins with an asterisk, such as */5 * * * *, would be interpreted as a YAML alias and fail to parse.

The timeZone field (Kubernetes 1.27+)

Before Kubernetes 1.27, CronJob schedules always ran in UTC regardless of what the cluster operator or workload developer intended. The timeZone field — added as stable in Kubernetes 1.27 — accepts any IANA timezone identifier (the same strings used in the tz database), such as America/New_York, Europe/London, or Asia/Tokyo. Kubernetes automatically handles daylight saving time transitions, so you do not need to manually adjust the schedule twice a year. If you omit timeZone, the schedule runs in UTC.
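On a pre-1.27 cluster the only option is to encode the UTC offset by hand. A sketch of what that looks like — the UTC hour shown assumes Eastern time, which is UTC-4 in summer (EDT) and UTC-5 in winter (EST):

```yaml
# Pre-1.27 clusters: omit timeZone and express the schedule in UTC.
# 9:00 AM Eastern is 13:00 UTC during daylight saving time and
# 14:00 UTC during standard time, so any fixed UTC schedule drifts
# by one hour twice a year. Pick one offset and accept the drift,
# or upgrade the cluster and use timeZone instead.
spec:
  schedule: "0 13 * * *"   # 9 AM EDT in summer; runs at 8 AM EST in winter
  # timeZone omitted: the schedule is interpreted as UTC
```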

concurrencyPolicy: Forbid to prevent overlapping runs

The concurrencyPolicy field controls what happens when a new scheduled run is triggered but the previous Job has not yet finished. Forbid skips the new run entirely, ensuring that at most one instance of the job runs at a time — critical for tasks that are not safe to parallelize, such as database migrations or reports that write to the same output location. Allow (the default) permits multiple concurrent Jobs. Replace cancels the currently running Job and starts a new one. For most batch jobs, Forbid is the safest choice — you would rather miss a run than have two instances clobber each other's output.
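As a quick reference, here are the three policies side by side, shown as comments on a minimal spec fragment:

```yaml
spec:
  # Forbid: skip the new run entirely if the previous Job is still active.
  concurrencyPolicy: Forbid
  # Alternatives:
  #   Allow   - start the new Job alongside the running one (the default)
  #   Replace - delete the running Job and start a fresh one; suits jobs
  #             where only the latest result matters (e.g. cache warming)
```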

Job history limits and pod cleanup

After a Job finishes (successfully or not), Kubernetes keeps the Job object and its associated Pods around for inspection — so you can run kubectl logs on a completed Pod to debug issues. Over time these objects accumulate. successfulJobsHistoryLimit: 3 retains the last 3 successful Job records (and their Pods), while failedJobsHistoryLimit: 1 retains only the most recent failure; older records are automatically garbage-collected. Setting these limits matters in long-running clusters: without them, a daily CronJob accumulates hundreds of Pod objects over the course of a year, wasting API server memory and making kubectl get pods noisy.
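If you prefer time-based cleanup over count-based history, the Job API also supports ttlSecondsAfterFinished (stable since Kubernetes 1.23), set inside the jobTemplate spec. A minimal sketch — the 24-hour value is illustrative:

```yaml
jobTemplate:
  spec:
    # Delete each finished Job (and its Pods) 24 hours after it
    # completes, independently of the CronJob history limits above.
    ttlSecondsAfterFinished: 86400
```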

restartPolicy: OnFailure in Job templates

In a CronJob (and any Job), the Pod's restartPolicy must be set explicitly to either OnFailure or Never — the default Always used in Deployments is rejected. OnFailure means: if the container exits with a non-zero exit code, restart it within the same Pod. Kubernetes will retry up to spec.backoffLimit times (default 6) with exponential back-off before marking the Job as failed. Never never restarts; instead, a new Pod is created for each retry, which can be useful for jobs where you want to inspect the state of a failed Pod without it being overwritten by a restart.
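Both retry knobs live on the Job spec inside jobTemplate. A sketch showing backoffLimit together with activeDeadlineSeconds, which caps the Job's total runtime regardless of retries — the values here are illustrative, not recommendations:

```yaml
jobTemplate:
  spec:
    backoffLimit: 3              # give up after 3 failed attempts (default: 6)
    activeDeadlineSeconds: 1800  # fail the Job if it runs longer than 30 min
    template:
      spec:
        restartPolicy: OnFailure
```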

Tips & variations

Trigger a CronJob manually for testing

You can create a one-off Job from an existing CronJob to test it without waiting for the schedule: kubectl create job --from=cronjob/daily-report daily-report-manual-test. This creates a Job object using the same template as the CronJob, letting you verify the logic immediately. Clean up with kubectl delete job daily-report-manual-test when done.

Add a deadline for missed schedules

If the CronJob controller is down (e.g., during a cluster upgrade) and misses several scheduled runs, you can control how many missed runs it will attempt to catch up on by setting startingDeadlineSeconds. Add this to the CronJob spec:

startingDeadlineSeconds snippet
spec:
  schedule: "0 9 * * *"
  startingDeadlineSeconds: 3600
  concurrencyPolicy: Forbid

With startingDeadlineSeconds: 3600, if the controller has been down for more than an hour, it will skip any missed runs rather than running them all at once when it recovers.

Monitor with Prometheus

The Kubernetes kube-state-metrics exporter exposes kube_cronjob_next_schedule_time and kube_job_status_succeeded metrics that you can use to alert when a CronJob has not run recently or when it has accumulated failures. Make sure kube-state-metrics is deployed and scraped by your Prometheus instance for these metrics to be available.
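A minimal Prometheus alerting-rule sketch for the CronJob above, assuming kube-state-metrics is already scraped — the group name, alert name, threshold, and label values are placeholders, not conventions from this guide:

```yaml
# Prometheus rule file sketch (hypothetical names throughout).
groups:
  - name: cronjob-alerts
    rules:
      - alert: DailyReportJobFailed
        # Fires when any Job created by the daily-report CronJob
        # has reported failed pods for at least 5 minutes.
        expr: kube_job_status_failed{job_name=~"daily-report.*"} > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "A daily-report Job has failed pods"
```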