Helm — values.yaml Template

A comprehensive Helm chart values.yaml covering image configuration, a ClusterIP Service, an Nginx Ingress with TLS via cert-manager, a Horizontal Pod Autoscaler, resource requests and limits, plain environment variables, and a pattern for managing secrets without committing them to source control.

Overview

Helm is the most widely used package manager for Kubernetes. A Helm chart packages Kubernetes manifests as Go templates, and values.yaml is the file that provides the default configuration for those templates. The templates reference values using Go template syntax — for example, {{ .Values.image.tag }} in a Deployment template becomes 1.2.3 when rendered with this values.yaml. This separation lets a single chart deploy consistently to development, staging, and production by simply overriding a few values, rather than maintaining separate manifest files per environment.

The structure of values.yaml is entirely up to the chart author — there is no enforced schema. However, the file shown here follows the conventions used by charts generated with helm create, which is a useful baseline that most Kubernetes operators are already familiar with. The top-level keys — replicaCount, image, service, ingress, resources, autoscaling — map to distinct sections of the rendered Kubernetes manifests.

Install a chart with this values file using: helm install my-release ./my-chart -f values.yaml. Upgrade a running release with: helm upgrade my-release ./my-chart -f values.yaml --set image.tag=1.3.0.

Full YAML (copy-paste ready)

values.yaml
replicaCount: 3

image:
  repository: ghcr.io/myorg/my-app
  tag: "1.2.3"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80
  targetPort: 3000

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: my-app-tls
      hosts:
        - example.com

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

env:
  NODE_ENV: production
  LOG_LEVEL: info

secrets:
  DATABASE_URL: ""
  JWT_SECRET: ""

podAnnotations: {}
nodeSelector: {}
tolerations: []
affinity: {}

Key sections explained

values.yaml as input to Go templates

Every key in values.yaml becomes accessible inside Helm templates via the .Values object. For example, image.repository and image.tag are referenced in a Deployment template as {{ .Values.image.repository }}:{{ .Values.image.tag }}, which Helm renders to ghcr.io/myorg/my-app:1.2.3. The hierarchical structure of the YAML directly maps to nested dot notation in templates, so you can organize values logically without worrying about flat namespace collisions.
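As a sketch of how a Deployment template consumes these values (the resource name and container name are illustrative, not part of this values file):

```yaml
# templates/deployment.yaml (excerpt — names are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-my-app
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: my-app
          # Renders to ghcr.io/myorg/my-app:1.2.3 with this values.yaml
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.service.targetPort }}
```

Quoting the image reference in the template guards against any stray characters in the rendered string being misinterpreted by the YAML parser.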

Charts generated by helm create also check values with conditionals — for example, the Ingress resource is only rendered if ingress.enabled is true. This pattern lets a single chart support both simple deployments (no Ingress) and fully featured ones (TLS, multiple hosts) just by toggling values, rather than by editing template files.
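The conditional wrapper looks like this in a chart scaffolded by helm create (excerpted and simplified here — the inner paths loop is omitted):

```yaml
# templates/ingress.yaml (excerpt — resource name is illustrative)
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-my-app
  annotations:
    {{- toYaml .Values.ingress.annotations | nindent 4 }}
spec:
  ingressClassName: {{ .Values.ingress.className }}
  tls:
    {{- toYaml .Values.ingress.tls | nindent 4 }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host }}
      # per-path loop omitted for brevity
    {{- end }}
{{- end }}
```

With ingress.enabled set to false, the entire resource disappears from the rendered output.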

Overriding with helm install -f and --set

The values.yaml file holds sensible defaults. Per-environment overrides live in separate files — for example, values-production.yaml might set a higher replicaCount and different resource limits. Run helm install my-release ./my-chart -f values.yaml -f values-production.yaml and Helm merges the files in order (later files win). You can also override individual values on the command line with --set image.tag=2.0.0. This is especially useful in CI/CD pipelines where the image tag changes with every build.

Values are deeply merged, not replaced: if values-production.yaml only specifies replicaCount: 5, all other values from the base values.yaml are preserved. This makes it easy to maintain environment-specific overrides without duplicating the full values file.
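For instance, a hypothetical staging override file only needs the keys that differ from the base:

```yaml
# values-staging.yaml (illustrative override — only the changed keys)
replicaCount: 5
image:
  tag: "1.3.0-rc.1"
# After `helm install my-release ./my-chart -f values.yaml -f values-staging.yaml`,
# image.repository and image.pullPolicy keep their values from the base file;
# only replicaCount and image.tag change.
```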

image.tag pinning and pullPolicy

The image.tag value is quoted as "1.2.3" even though it looks like a number. Quoting matters because YAML parses unquoted numeric-looking values as numbers: an unquoted tag of 1.20, for example, would be read as the float 1.2, silently dropping the trailing zero and producing an image reference like my-app:1.2 that points at nothing. Always quote image tags in YAML to avoid this class of bug.
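The two cases side by side, as a multi-document YAML illustration:

```yaml
# Unquoted — YAML parses 1.20 as the float 1.2, so the rendered
# image reference becomes my-app:1.2, which likely does not exist.
image:
  tag: 1.20
---
# Quoted — the tag stays the string "1.20" and renders as intended.
image:
  tag: "1.20"
```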

pullPolicy: IfNotPresent tells the kubelet to only pull the image if it is not already cached on the node. This is the right default for tagged images (where the content does not change for a given tag). Use Always only for mutable tags like latest — though it is better to avoid mutable tags in production entirely. See the Kubernetes Deployment example for more on image pinning.

autoscaling and HPA configuration

When autoscaling.enabled is true, the chart template creates a HorizontalPodAutoscaler (HPA) resource instead of a fixed replicaCount in the Deployment. The HPA watches CPU utilization and scales the Deployment between minReplicas and maxReplicas to keep average CPU at or below targetCPUUtilizationPercentage. In this example, the HPA will scale between 2 and 10 replicas to maintain 70% average CPU utilization. The replicaCount: 3 at the top of the file is used as the initial replica count when HPA is disabled; when HPA is enabled, the chart should ignore replicaCount and let the HPA manage scaling.
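The HPA template in a helm create-style chart looks roughly like this (resource names are illustrative; the autoscaling/v2 API requires Kubernetes 1.23+):

```yaml
# templates/hpa.yaml (excerpt — names are illustrative)
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}-my-app
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
```

Correspondingly, the Deployment template should wrap its replicas line in `{{- if not .Values.autoscaling.enabled }} ... {{- end }}` so the HPA and the Deployment never fight over the replica count.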

Leaving secrets empty in the committed file

The secrets block has empty string values for DATABASE_URL and JWT_SECRET. The committed values.yaml intentionally never contains real credentials. At deploy time, the actual values are injected in one of three ways: via a separate secrets.yaml values file that is never committed (and is loaded with -f secrets.yaml), via --set secrets.DATABASE_URL=... from a CI/CD secret store, or by creating Kubernetes Secrets separately and referencing them via valueFrom.secretKeyRef in the template instead of the secrets values map. All three approaches keep plaintext secrets out of version control.
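The third approach could look like this in the Deployment template, assuming a Secret named my-app-secrets created out of band (the Secret name is an assumption for illustration):

```yaml
# templates/deployment.yaml (excerpt — env section, names illustrative)
env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: my-app-secrets   # Secret created outside the chart
        key: DATABASE_URL
  - name: JWT_SECRET
    valueFrom:
      secretKeyRef:
        name: my-app-secrets
        key: JWT_SECRET
```

With this pattern the secrets map in values.yaml can be dropped entirely; the chart only needs to know the Secret's name and keys.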

Empty placeholder keys: podAnnotations, nodeSelector, tolerations, affinity

These keys exist in the default values to provide safe empty defaults for advanced scheduling features. podAnnotations (empty map {}) lets you add arbitrary metadata to pods — for example, Prometheus scrape annotations or Linkerd injection annotations. nodeSelector constrains pods to nodes with specific labels. tolerations (empty list []) allows pods to be scheduled on tainted nodes. affinity (empty map) enables soft or hard pod affinity and anti-affinity rules. By including them as empty defaults, users can override them without modifying the template, and templates can safely reference them without nil-pointer errors.
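A sketch of what overriding these defaults might look like (the annotation values, node label, and taint key are illustrative):

```yaml
# Example overrides for the scheduling-related keys (values are illustrative)
podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "3000"

nodeSelector:
  disktype: ssd            # schedule only on nodes labeled disktype=ssd

tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "my-app"
    effect: "NoSchedule"   # tolerate the dedicated=my-app:NoSchedule taint
```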

Tips & variations

Validate values.yaml with helm lint

Before deploying, run helm lint ./my-chart -f values.yaml to check for template rendering errors. For a dry-run that shows the fully rendered manifests, use helm template my-release ./my-chart -f values.yaml and pipe the output to kubectl apply --dry-run=client -f -.

Use a values schema for validation

Add a values.schema.json file to your chart to enforce types and required fields on values.yaml. Helm validates the supplied values against the schema before rendering, providing clear error messages for misconfigured values rather than cryptic template errors.
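A minimal schema covering a couple of the keys above might look like this (which fields to enforce is up to you; this selection is illustrative):

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["image"],
  "properties": {
    "replicaCount": { "type": "integer", "minimum": 1 },
    "image": {
      "type": "object",
      "required": ["repository", "tag"],
      "properties": {
        "repository": { "type": "string" },
        "tag": { "type": "string" }
      }
    }
  }
}
```

Declaring image.tag as a string also catches the unquoted-float bug described above at install time, before anything is rendered.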

Per-environment values files

A common pattern for multi-environment deployments:

values-production.yaml
replicaCount: 5

resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 1000m
    memory: 1Gi

autoscaling:
  minReplicas: 3
  maxReplicas: 20

Deploy with: helm upgrade --install my-release ./my-chart -f values.yaml -f values-production.yaml --set image.tag=$IMAGE_TAG