GitHub Actions — Deploy to AWS S3 + CloudFront

A complete deployment workflow for static sites: build with Node.js, sync the output to an S3 bucket, and immediately invalidate the CloudFront cache — all triggered automatically on push to main.


Overview

This workflow covers the full deployment pipeline for a statically generated site (React, Next.js, Vue, Hugo, etc.) hosted on S3 + CloudFront. It uses the official aws-actions/configure-aws-credentials action to authenticate securely using IAM credentials stored as GitHub secrets, then uses the AWS CLI (pre-installed on GitHub-hosted runners) to sync files and trigger a CDN cache invalidation.

The workflow uses a GitHub Environment named production, which lets you configure required reviewers, deployment protection rules, and environment-scoped secrets in the GitHub UI. This is an important safety mechanism for production deployments — it prevents a bad push from deploying without a review gate.

Before using this workflow, you need: (1) an S3 bucket configured for static website hosting, (2) a CloudFront distribution pointing to it, (3) an IAM user with S3 write and CloudFront invalidation permissions, and (4) those credentials stored as GitHub secrets named AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, S3_BUCKET, and CF_DISTRIBUTION_ID.

Full YAML (copy-paste ready)

.github/workflows/deploy.yml
name: Deploy to AWS

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20  # pin the Node version your project builds with
      - name: Build
        run: |
          npm ci
          npm run build
      # Configure credentials only after the build, so npm scripts never see them
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Sync to S3
        run: aws s3 sync ./dist s3://${{ secrets.S3_BUCKET }} --delete
      - name: Invalidate CloudFront
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ secrets.CF_DISTRIBUTION_ID }} \
            --paths "/*"

Key sections explained

environment: production

This single line connects the job to a GitHub Environment named production. Environments in GitHub Actions provide three important features:

  • Protection rules: Require one or more reviewers to approve the deployment before it proceeds.
  • Environment secrets: Secrets scoped only to this environment (overriding or supplementing repo-level secrets).
  • Deployment history: A separate view in the GitHub UI showing all past deployments to this environment with their status and commit SHA.

For production deployments, always use an environment with at least one required reviewer — it prevents an accidental push from immediately going live.
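
The environment key also accepts an expanded form with a deployment URL, which GitHub shows next to each deployment in the UI. A minimal sketch — the https://example.com URL is a placeholder for your site:

```yaml
    # expanded form of `environment:` — a name plus an optional deployment URL
    environment:
      name: production
      url: https://example.com  # rendered as a "View deployment" link in the GitHub UI
```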

The AWS secrets pattern

Credentials are referenced via ${{ secrets.AWS_ACCESS_KEY_ID }} and ${{ secrets.AWS_SECRET_ACCESS_KEY }}. GitHub Secrets are stored encrypted and are masked in workflow logs — if a secret value does appear in a log line, GitHub redacts it as ***. The configure-aws-credentials action exports the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION/AWS_DEFAULT_REGION environment variables, which the AWS CLI picks up automatically in subsequent steps.

Use an IAM user with the minimum necessary permissions for this workflow: s3:PutObject, s3:DeleteObject, s3:ListBucket, and cloudfront:CreateInvalidation. Never use root credentials or an admin IAM user in CI pipelines. Consider using OIDC federation instead of long-lived access keys (see Tips below).
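
The four permissions above can be expressed as an IAM policy along these lines — a sketch, where my-site-bucket is a placeholder for your bucket name, and the CloudFront resource should ideally be narrowed to your distribution's ARN:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-site-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-site-bucket"
    },
    {
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "*"
    }
  ]
}
```

Note that s3:ListBucket attaches to the bucket ARN itself, while the object-level actions attach to the /* object ARN — mixing these up is a common cause of AccessDenied errors during sync.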

aws s3 sync with --delete

aws s3 sync ./dist s3://my-bucket copies all files from ./dist to the S3 bucket, skipping files that haven't changed (comparing by size and last-modified date). The --delete flag removes files from S3 that no longer exist in ./dist. This is essential for keeping the bucket in sync with your built output — without it, old files accumulate in the bucket and can be served by CloudFront indefinitely.
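
Because --delete is destructive, it can be worth previewing what a sync would do before running it for real. The AWS CLI's --dryrun flag prints the planned uploads and deletions without touching the bucket — a temporary step like this (hypothetical, not part of the workflow above) is handy while debugging:

```yaml
      - name: Preview sync (no changes made)
        run: aws s3 sync ./dist s3://${{ secrets.S3_BUCKET }} --delete --dryrun
```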

The bucket name is stored as a secret (secrets.S3_BUCKET) rather than hardcoded in the YAML. This makes the workflow reusable across environments (staging vs production) via environment-scoped secrets.

CloudFront invalidation

After syncing new files to S3, CloudFront may continue serving the old cached version until its TTL expires. The aws cloudfront create-invalidation --paths "/*" command forces every edge location to fetch fresh content immediately. The "/*" path invalidates the entire distribution — if you only need to invalidate a subset of paths (e.g. "/index.html"), list them explicitly to avoid unnecessary charges (the first 1,000 invalidation paths per month are free; beyond that, AWS bills per path).

The multi-line run with a trailing backslash (\) is a shell line-continuation character — it splits a long command across multiple lines for readability without changing its behavior.
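
As a sketch of the selective variant, an invalidation step limited to HTML entry points might look like this — the exact paths depend on your site's routing:

```yaml
      - name: Invalidate HTML entry points only
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ secrets.CF_DISTRIBUTION_ID }} \
            --paths "/index.html" "/404.html"
```

This pattern works well when your build emits content-hashed asset filenames: new assets get new URLs and never need invalidating, so only the HTML pages that reference them do.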

Tips & variations

Use OIDC instead of long-lived access keys

AWS supports OpenID Connect (OIDC) federation with GitHub Actions, allowing workflows to assume an IAM role without storing any long-lived credentials as secrets. This is the most secure approach:

OIDC-based authentication
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials (OIDC)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/GitHubActionsRole
          aws-region: us-east-1

This requires setting up an IAM OIDC provider in your AWS account and creating a role with a trust policy for your specific GitHub repository.
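
The trust policy on that role is what restricts which repository (and optionally which branch or environment) may assume it. A rough sketch, reusing the example account ID from above — repo:my-org/my-repo is a placeholder you'd replace with your own org/repo:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:*"
        }
      }
    }
  ]
}
```

Tightening the sub condition (for example to repo:my-org/my-repo:ref:refs/heads/main) limits role assumption to a single branch.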

Add a staging environment

Duplicate the job with environment: staging and use environment-scoped secrets for a separate staging S3 bucket and CloudFront distribution. Trigger staging on all branch pushes and production only on main. This gives you a preview of every change before it reaches users.
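
One way to sketch this — assuming a develop branch feeds staging — is to keep a single job and select the environment from the branch with an expression (GitHub Actions evaluates && / || like a ternary):

```yaml
on:
  push:
    branches: [main, develop]

jobs:
  deploy:
    runs-on: ubuntu-latest
    # main deploys to production; any other matched branch deploys to staging
    environment: ${{ github.ref == 'refs/heads/main' && 'production' || 'staging' }}
```

With environment-scoped S3_BUCKET and CF_DISTRIBUTION_ID secrets defined in both environments, the rest of the workflow needs no changes.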

Set correct cache headers during sync

Use --cache-control with different values for hashed assets vs HTML files:

sync with cache headers
      - name: Sync assets (long cache)
        run: |
          aws s3 sync ./dist/assets s3://${{ secrets.S3_BUCKET }}/assets \
            --cache-control "max-age=31536000,immutable"
      - name: Sync HTML (no cache)
        run: |
          aws s3 sync ./dist s3://${{ secrets.S3_BUCKET }} \
            --exclude "assets/*" \
            --cache-control "no-cache, no-store" \
            --delete