Quick Deploy Guide
Complete step-by-step guide to deploy your Next.js app to AWS ECS Fargate
One Page, All Steps
Step 1: Creating an IAM User
Create an IAM user with programmatic access for deployments. Never use your root account.
Cost: IAM is completely FREE. No charges for users, roles, or policies.
- Go to AWS Console → Search IAM
- Click Users → Create user
- Name: `dev-user-fargate`
- Uncheck "Provide user access to the AWS Management Console"
- Permissions: Select Attach policies directly → Check AdministratorAccess
- Click Create user
- Click on the user → Security credentials tab → Create access key
- Choose Command Line Interface (CLI) → Check confirmation → Next
- CRITICAL: Download the .csv file immediately - you won't see the secret key again!
Protect Your Keys!
- Never commit access keys to Git
- Never share keys in chat, email, or tickets
- Rotate keys every 90 days (see the sketch below)
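A sketch of one rotation with the AWS CLI; the user name comes from Step 1 and the key ID is a placeholder:

```bash
# Create the replacement key first, update your apps with it, then delete the old key
aws iam create-access-key --user-name dev-user-fargate
aws iam delete-access-key --user-name dev-user-fargate --access-key-id AKIAIOSFODNN7EXAMPLE
```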
Step 2: AWS Console Setup
Set up the AWS services your app needs. All resources should be in the same region.
S3 Bucket (File Storage)
- AWS Console → Search S3 → Create bucket
- Bucket name: `my-app-uploads-2025` (must be globally unique)
- Region: `ap-southeast-1` (or your preferred region)
- Keep Block all public access enabled
- Click Create bucket
After creation, enable CORS for browser uploads:
```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
    "AllowedOrigins": ["http://localhost:3000", "https://yourdomain.com"],
    "ExposeHeaders": ["ETag"]
  }
]
```
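To script the same thing, `aws s3api put-bucket-cors` takes the rules wrapped in a `CORSRules` object (bucket name from above):

```bash
# cors.json must wrap the array shown above as {"CORSRules": [...]}
aws s3api put-bucket-cors \
  --bucket my-app-uploads-2025 \
  --cors-configuration file://cors.json
```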
Supabase Database
- Go to supabase.com and sign up
- Click New Project
- Choose a name and strong database password
- Select a region close to your deployment
- Go to Settings → Database → Copy Connection string
```bash
# Use pooler connection (port 6543) for your app
DATABASE_URL="postgresql://postgres.[project-ref]:[password]@aws-0-[region].pooler.supabase.com:6543/postgres?pgbouncer=true"
```

SQS Queue (Optional - Message Queue)
- AWS Console → Search SQS → Create queue
- Type: Standard
- Queue name: `my-app-queue`
- Keep defaults → Create queue
- Copy the Queue URL
Secrets Manager (Optional)
- AWS Console → Search Secrets Manager → Store a new secret
- Secret type: Other type of secret
- Add key-value pairs (DATABASE_URL, API_KEY, etc.)
- Secret name: `my-app/production/secrets`
- Click Store
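The same secret can be created from the CLI; the values below are placeholders:

```bash
aws secretsmanager create-secret \
  --name my-app/production/secrets \
  --secret-string '{"DATABASE_URL":"postgresql://...","API_KEY":"..."}'
```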
Step 3: AWS CLI Installation
Install using Homebrew:
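Assuming Homebrew is already installed:

```bash
brew install awscli
```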
For Apple Silicon Macs, also install Rosetta 2:
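Apple's standard command:

```bash
softwareupdate --install-rosetta --agree-to-license
```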
Configure Credentials
Run `aws configure` and paste the keys from Step 1:

```
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: ap-southeast-1
Default output format [None]: json
```

Verify Configuration
Run `aws sts get-caller-identity`; the output should look like:
```json
{
  "UserId": "AIDAEXAMPLEUSERID",
  "Account": "123456789012",
  "Arn": "arn:aws:iam::123456789012:user/dev-user-fargate"
}
```

Step 4: App Setup
Configure your Next.js app with the necessary dependencies and environment variables.
Install AWS SDK
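Assuming the v3 modular SDK and the two services this guide wires up (S3 and SQS):

```bash
npm install @aws-sdk/client-s3 @aws-sdk/client-sqs
```

Add `@aws-sdk/s3-request-presigner` as well if you generate presigned upload URLs.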
Install Supabase Client
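The official JavaScript client:

```bash
npm install @supabase/supabase-js
```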
Create Environment File
```bash
# .env.local

# AWS Configuration
AWS_REGION=ap-southeast-1
AWS_ACCESS_KEY_ID=your_access_key_here
AWS_SECRET_ACCESS_KEY=your_secret_key_here

# S3 Configuration
AWS_S3_BUCKET_NAME=my-app-uploads-2025

# SQS Configuration (optional)
AWS_SQS_QUEUE_URL=https://sqs.ap-southeast-1.amazonaws.com/123456789/my-app-queue

# Supabase Configuration
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
DATABASE_URL=postgresql://postgres...
```

Enable Standalone Output
Required for Docker deployment:
```js
// next.config.mjs
const nextConfig = {
  output: 'standalone',
}

export default nextConfig
```

Create Health Check Endpoint
```ts
// app/api/health/route.ts
import { NextResponse } from "next/server"

export const dynamic = "force-dynamic"

export async function GET() {
  return NextResponse.json({
    status: "healthy",
    timestamp: new Date().toISOString(),
  })
}
```

Step 5: Docker Installation
Download Docker Desktop from https://www.docker.com/products/docker-desktop/ and run the installer.
Verify Installation
Run `docker --version`:

```
Docker version 24.0.7, build afdd53b
```
Create Dockerfile
```dockerfile
# syntax=docker/dockerfile:1
# ============================================
# Stage 1: Dependencies
# ============================================
FROM node:20-alpine AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Copy package files
COPY package.json package-lock.json* ./
# Install ALL dependencies (dev deps needed for build)
RUN npm ci && npm cache clean --force
# ============================================
# Stage 2: Builder
# ============================================
FROM node:20-alpine AS builder
WORKDIR /app
# Copy dependencies from deps stage
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Set environment variables for build
ENV NEXT_TELEMETRY_DISABLED=1
ENV NODE_ENV=production
# Build the application
RUN npm run build
# ============================================
# Stage 3: Runner (Production)
# ============================================
FROM node:20-alpine AS runner
WORKDIR /app
# Set environment variables
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
# Create non-root user for security
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# Copy necessary files from builder
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
# Set correct permissions
RUN chown -R nextjs:nodejs /app
# Switch to non-root user
USER nextjs
# Expose the port
EXPOSE 3000
# Set hostname to listen on all interfaces
ENV HOSTNAME="0.0.0.0"
ENV PORT=3000
# Health check (optional - uncomment to enable)
# HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
#   CMD wget --no-verbose --tries=1 --spider http://localhost:3000/api/health || exit 1
# Start the application
CMD ["node", "server.js"]Create .dockerignore
```
# Dependencies
node_modules
.pnp
.pnp.js
# Testing
coverage
# Next.js
.next
out
# Production
build
# Misc
.DS_Store
*.pem
# Debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Local env files
.env
.env.local
.env.development.local
.env.test.local
.env.production.local
# Vercel
.vercel
# TypeScript
*.tsbuildinfo
next-env.d.ts
# IDE
.idea
.vscode
*.swp
*.swo
# Git
.git
.gitignore
# Documentation
README.md
CHANGELOG.md
LICENSE
# Infrastructure (not needed in container)
infra/
# Terraform state (should never be in container)
*.tfstate
*.tfstate.*
.terraform/
```

Test Locally
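A minimal smoke test; the image tag is arbitrary, and `.env.local` supplies runtime variables since the .dockerignore above keeps env files out of the image:

```bash
docker build -t my-app .
docker run --rm -p 3000:3000 --env-file .env.local my-app
# In another terminal, hit the health endpoint
curl http://localhost:3000/api/health
```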
Step 6: AWS Copilot
AWS Copilot handles VPC, ECS cluster, load balancer, ECR, and IAM roles automatically.
Install Copilot
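On macOS, Copilot is distributed through a Homebrew tap (the Linux curl install appears in the GitHub Actions step later):

```bash
brew install aws/tap/copilot-cli
```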
Verify Installation
Run `copilot --version`:

```
copilot version: v1.32.0
```
Initialize Application
Run `copilot init` and answer the prompts:

```
Application name: my-app
Workload type: Load Balanced Web Service
Service name: frontend
Dockerfile: ./Dockerfile
```
Create Environment
Run `copilot env init`:

```
Environment name: staging
Credential source: [profile default]
Default environment configuration? Yes
```
Deploy
Run `copilot deploy`:

```
Building your container image...
Pushing to ECR...
Creating CloudFormation stack...
✔ Deployed frontend to staging.
URL: http://my-app-staging-123456.ap-southeast-1.elb.amazonaws.com
```
Cost Awareness
The load balancer and Fargate tasks accrue charges while they exist, so run `copilot app delete` when done testing.

Step 7: Environment Variables & Secrets
Configure environment variables in your Copilot manifest.
Update Manifest
Edit `copilot/frontend/manifest.yml`:
```yaml
# The manifest for the "frontend" service.
# Read the full specification for the "Load Balanced Web Service" type at:
# https://aws.github.io/copilot-cli/docs/manifest/lb-web-service/

# Your service name will be used in naming your resources like log groups, ECS services, etc.
name: frontend
type: Load Balanced Web Service

# Distribute traffic to your service.
http:
  # Requests to this path will be forwarded to your service.
  # To match all requests you can use the "/" path.
  path: '/'
  # Health check configuration
  healthcheck:
    path: '/api/health'
    success_codes: '200'
    healthy_threshold: 2
    unhealthy_threshold: 3
    interval: 30s
    timeout: 10s
    grace_period: 200s

# Configuration for your containers and service.
image:
  # Docker build arguments. For additional overrides: https://aws.github.io/copilot-cli/docs/manifest/lb-web-service/#image-build
  build: Dockerfile
  # Port exposed through your container to route traffic to it.
  port: 3000

cpu: 512       # Number of CPU units for the task.
memory: 1024   # Amount of memory in MiB used by the task.
platform: linux/x86_64  # See https://aws.github.io/copilot-cli/docs/manifest/lb-web-service/#platform
count: 1       # Number of tasks that should be running in your service.
exec: true     # Enable running commands in your container.

network:
  connect: false # Disabled for debugging - enable later for service-to-service communication

# storage:
#   readonly_fs: true # Limit to read-only access to mounted root filesystems.

# Optional fields for more advanced use-cases.
#
variables:
  HOSTNAME: "0.0.0.0"
  PORT: "3000"
  NODE_ENV: production
  AWS_REGION: "ap-southeast-1"
  AWS_S3_BUCKET_NAME: "your-bucket-name"
  AWS_SQS_QUEUE_URL: "https://sqs.ap-southeast-1.amazonaws.com/123456789/your-queue"
  NEXT_PUBLIC_SUPABASE_URL: "https://your-project.supabase.co"
  NEXT_PUBLIC_SUPABASE_ANON_KEY: "your-anon-key"
  # For POC only - use secrets for production
  AWS_ACCESS_KEY_ID: "YOUR_ACCESS_KEY_ID"
  AWS_SECRET_ACCESS_KEY: "YOUR_SECRET_ACCESS_KEY"

logging:
  retention: 7
```

Security Warning
Add `copilot/frontend/manifest.yml` to `.gitignore` when using plain-text credentials. For production, use `copilot secret init` to store secrets in SSM Parameter Store.

Production: Use SSM Secrets
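To move the credentials into SSM Parameter Store, run `copilot secret init` once per secret and paste the value when prompted:

```bash
copilot secret init --name AWS_ACCESS_KEY_ID
copilot secret init --name AWS_SECRET_ACCESS_KEY
```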
Then reference in manifest:
```yaml
secrets:
  AWS_ACCESS_KEY_ID: /copilot/${COPILOT_APPLICATION_NAME}/${COPILOT_ENVIRONMENT_NAME}/secrets/AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: /copilot/${COPILOT_APPLICATION_NAME}/${COPILOT_ENVIRONMENT_NAME}/secrets/AWS_SECRET_ACCESS_KEY
```

Deploy with Updated Config
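Redeploy so the new variables and secrets take effect:

```bash
copilot deploy --name frontend --env staging
```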
Step 8: CI/CD Pipeline
Set up automated deployments when you push to GitHub.
Option A: Copilot Pipeline (Recommended)
Run `copilot pipeline init` and answer the prompts:

```
Pipeline name: main-pipeline
Repository: https://github.com/username/repo
Branch: main
```
Commit and push the generated files:
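Copilot writes the pipeline definition under `copilot/`; commit it so CodePipeline can read it (the `main` branch is assumed):

```bash
git add copilot/
git commit -m "Add Copilot pipeline"
git push origin main
```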
Create GitHub connection in AWS Console:
- AWS Console → CodePipeline → Settings → Connections
- Click Create connection → Select GitHub
- Click Connect to GitHub → Install a new app
- Select your repository → Install & Authorize
- Status must change to Available
Deploy the pipeline:
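```bash
copilot pipeline deploy
```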
Now every push to main triggers automatic deployment!
Option B: GitHub Actions
Create .github/workflows/deploy.yml:
```yaml
name: Deploy to AWS

on:
  push:
    branches: [main]

env:
  AWS_REGION: ap-southeast-1

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Install Copilot
        run: |
          curl -Lo copilot https://github.com/aws/copilot-cli/releases/latest/download/copilot-linux
          chmod +x copilot && sudo mv copilot /usr/local/bin/

      - name: Deploy
        run: copilot deploy --name frontend --env staging
```

Add `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` as repository secrets in GitHub: Settings → Secrets and variables → Actions
Bonus: Pipeline Notifications
Get notified when deployments succeed or fail via Email, Slack, or GitHub status.
Quick Email Setup (Manual)
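A minimal sketch with the AWS CLI; the topic name and email address are placeholders:

```bash
# Create a topic and subscribe your email to it
aws sns create-topic --name pipeline-notifications
aws sns subscribe \
  --topic-arn arn:aws:sns:ap-southeast-1:123456789012:pipeline-notifications \
  --protocol email \
  --notification-endpoint you@example.com
```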
Check your email and confirm the subscription.
Create EventBridge rule:
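A sketch assuming the topic above; the rule name is arbitrary, and the topic's access policy must also allow events.amazonaws.com to publish to it:

```bash
# Fire on pipeline success/failure and send the event to the SNS topic
aws events put-rule \
  --name pipeline-state-change \
  --event-pattern '{"source":["aws.codepipeline"],"detail-type":["CodePipeline Pipeline Execution State Change"],"detail":{"state":["SUCCEEDED","FAILED"]}}'

aws events put-targets \
  --rule pipeline-state-change \
  --targets 'Id=sns-email,Arn=arn:aws:sns:ap-southeast-1:123456789012:pipeline-notifications'
```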
For full setup with Slack and GitHub status, see the Pipeline Notifications Guide.
Summary Checklist
| Step | Action | Verify |
|---|---|---|
| 1. IAM User | Create user with AdministratorAccess | Download .csv with access keys |
| 2. AWS Console | Create S3 bucket, Supabase project | Note bucket name, connection string |
| 3. AWS CLI | Install and configure | aws sts get-caller-identity |
| 4. App Setup | Install SDKs, create .env.local | npm run dev works |
| 5. Docker | Install, create Dockerfile | docker build succeeds |
| 6. Copilot | Init app, env, deploy | Get deployment URL |
| 7. Env & Secrets | Configure manifest | App connects to AWS services |
| 8. CI/CD | Set up pipeline | Push triggers deployment |
Useful Commands
- `copilot svc status` - Check service health
- `copilot svc logs` - View application logs
- `copilot svc exec` - Open a shell in a running container
- `copilot app delete` - Delete everything (careful!)
Next Steps
- Costs & Cleanup Guide - Understand billing and cleanup
- Pipeline Notifications - Email/Slack alerts
- S3 Deep Dive - Advanced S3 usage
- Supabase Guide - Auth, real-time, RLS