I Was SSHing Into My Server to Deploy. Then I Fixed That.

It was the same routine every time. Write code, open terminal, SSH into the server, git pull, docker compose up, pray nothing breaks. If something did break at 2 AM, I'd be SSHing back in half asleep trying to figure out what went wrong.
That gets old fast.
So I built a proper CI/CD pipeline that handles all of it: security scanning, deployment, health checks, and automatic rollback, without me touching the server at all. Push code, pipeline takes over. If it works, it's live. If it doesn't, the old version stays up automatically, so the site never goes down while the new version sits in the corner until it's fixed.
This is that exact pipeline, explained from scratch so you can build it for your own project regardless of your stack.
End Result
Before diving in, here's the end result in plain English:
You push code to your main branch. GitHub Actions spins up a temporary machine, scans your code for accidentally committed passwords or API keys, checks your Dockerfile for mistakes, then SSHs into your server and deploys the new code. After deployment, it hits your health check URL to confirm the app is actually running. If it fails the health check, it automatically rolls back to the previous working version and marks the pipeline as failed so you know something went wrong.
No manual steps. No SSH sessions. No hoping it works.
The Problem With Manual Deployments
Manual deployments aren't just tedious; they're risky. Every time you SSH in and run commands by hand, you're one typo away from taking down your production environment. There's no history of what was deployed and when. There's no automatic safety net if something goes wrong. And if you're working with a team, nobody knows who deployed what.
A proper CI/CD pipeline solves all of this. Every deployment is recorded in your git history. The process is identical every time. And critically, it can detect failures and recover on its own.
Project Structure
For context, the application I'm deploying is a Django app running in Docker with PostgreSQL, Redis, Celery, and Nginx. But the pipeline approach here works for almost any stack; just swap out the Docker commands for whatever you're using.
The key files we need:
your-project/
├── Dockerfile
├── docker-compose.yml         # local development
├── docker-compose.server.yml  # production server
└── .github/
    └── workflows/
        └── deploy.yml         # the pipeline
Two Compose Files: Local vs Server
Most people running Docker on a server have a docker-compose.yml that looks something like this:
services:
  web:
    build:
      context: .
    volumes:
      - .:/app   # <-- this line
Your local docker-compose.yml probably mounts your code folder directly into the container with something like volumes: - .:/app. This is great for development; you edit a file and the change is immediately reflected without rebuilding.
But on a production server, that mount means Docker is running whatever files happen to be sitting in that folder. The image you built means nothing. The build step means nothing. You're just running files off disk the same way you would without Docker at all.
Your server compose file should have no code volume mount. The image builds with the code inside it at build time, and that's what runs. This is how Docker is supposed to work in production.
Here's a minimal server compose that illustrates the key differences:
# docker-compose.server.yml
services:
  web:
    build:
      context: .
    env_file:
      - .env
    environment:
      - AUTO_MIGRATE=1  # run migrations on startup
    volumes:
      - media_data:/app/media
      - static_data:/app/staticfiles
      # no .:/app mount; the code lives in the image
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - static_data:/app/staticfiles:ro
    depends_on:
      - web
    restart: unless-stopped

  # db, redis, worker services...

volumes:
  media_data:
  static_data:
Notice nginx has no profiles key, so it always starts. If you've ever put nginx behind a profile in production and then wondered why you had to restart it manually on every deploy, this is why.
The Health Check Endpoint
Before we look at the pipeline, you need a /health/ endpoint in your application. This is what the pipeline will hit after deployment to know whether the app is actually up and responding.
In Django, this is two lines:
# urls.py
from django.http import JsonResponse
from django.urls import path

def health_check(request):
    return JsonResponse({"status": "ok"})

urlpatterns = [
    path('health/', health_check),
    # ... rest of your urls
]
For production you can make this more thorough: check the database connection, check Redis, check any external dependencies. But even a simple 200 response is enough for the pipeline to know the app started successfully.
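As a sketch of what "more thorough" can look like, here's a small framework-agnostic helper that runs a set of named dependency checks and folds them into one response payload. The check names and callables below are placeholders; wire in your real database and Redis pings.

```python
def aggregate_health(checks):
    """Run each named check; any exception or falsy result marks it failed.

    checks: dict mapping a name to a zero-argument callable.
    Returns (payload, http_status): 200 if everything passed, else 503.
    """
    results = {}
    for name, check in checks.items():
        try:
            results[name] = bool(check())
        except Exception:
            # A dependency that raises is just as unhealthy as one that returns False.
            results[name] = False
    healthy = all(results.values())
    payload = {"status": "ok" if healthy else "degraded", "checks": results}
    return payload, 200 if healthy else 503

# Stand-in checks for illustration; replace the lambdas with real pings.
payload, status = aggregate_health({"db": lambda: True, "redis": lambda: True})
print(payload, status)
```

Returning 503 on a failed dependency matters: the pipeline's rollback logic keys off the HTTP status code, not the response body.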
The SSH Deploy Key
Here's an important thing: don't use your server's .pem file as a GitHub secret. Seriously, don't. Generate a dedicated deploy key that's used only for this purpose. If it gets compromised, you revoke it without touching anything else.
# On your local machine
ssh-keygen -t ed25519 -C "github-actions-deploy" -f ~/.ssh/deploy_key
# No passphrase; the pipeline needs to use it non-interactively
This gives you two files. The private key goes to GitHub Secrets. The public key goes to your server's ~/.ssh/authorized_keys.
# Add public key to your server (run once)
ssh -i your-existing-key.pem user@your-server-ip \
"echo '$(cat ~/.ssh/deploy_key.pub)' >> ~/.ssh/authorized_keys"
GitHub Secrets Setup
Go to your repository → Settings → Secrets and variables → Actions and add these four secrets:
| Secret | Value |
|---|---|
| EC2_SSH_PRIVATE_KEY | Contents of your deploy_key private key file |
| EC2_HOST | Your server's IP address |
| EC2_USER | SSH username (ubuntu, ec2-user, etc.) |
| APP_DIR | Full path to your project on the server |
One practical tip: assign an Elastic IP to your EC2 instance if you haven't already. EC2 instances get a new public IP on every stop/start by default. An Elastic IP stays the same, so your EC2_HOST secret never goes stale.
The Pipeline
Now the main event. Here's the complete deploy.yml. I'll explain the decisions after:
name: Deploy to Production

on:
  push:
    branches:
      - main
  workflow_dispatch:

concurrency:
  group: deploy
  cancel-in-progress: false

jobs:
  security-scan:
    name: Security Scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Scan for secrets
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Lint Dockerfile
        uses: hadolint/hadolint-action@v3.1.0
        with:
          dockerfile: Dockerfile
          failure-threshold: error

  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    needs: security-scan
    steps:
      - uses: actions/checkout@v4

      - name: Write SSH key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.EC2_SSH_PRIVATE_KEY }}" > ~/.ssh/deploy_key
          chmod 600 ~/.ssh/deploy_key
          ssh-keyscan -H "${{ secrets.EC2_HOST }}" >> ~/.ssh/known_hosts

      - name: Deploy
        run: |
          ssh -i ~/.ssh/deploy_key \
            -o StrictHostKeyChecking=yes \
            ${{ secrets.EC2_USER }}@${{ secrets.EC2_HOST }} \
            bash -s << ENDSSH
          set -euo pipefail
          cd "${{ secrets.APP_DIR }}"

          git fetch origin main
          PREV_COMMIT=\$(git rev-parse HEAD)
          git reset --hard origin/main

          docker compose -f docker-compose.server.yml up --build -d --remove-orphans

          PASSED=false
          for i in \$(seq 1 10); do
            STATUS=\$(curl -sf http://localhost/health/ -o /dev/null -w "%{http_code}" 2>/dev/null || echo "000")
            echo "Health check \$i/10 - HTTP \$STATUS"
            if [ "\$STATUS" = "200" ]; then
              PASSED=true
              break
            fi
            sleep 6
          done

          if [ "\$PASSED" = false ]; then
            echo "Health check failed - rolling back"
            git reset --hard "\$PREV_COMMIT"
            docker compose -f docker-compose.server.yml up --build -d --remove-orphans
            exit 1
          fi

          docker image prune -f
          ENDSSH

      - name: Cleanup SSH key
        if: always()
        run: rm -f ~/.ssh/deploy_key
Why Each Decision Was Made
concurrency: cancel-in-progress: false If developers push two commits in quick succession, you'd otherwise have two deploys running at the same time on the same server, with Docker rebuilding on top of itself. This prevents that. The second deploy waits; it doesn't cancel the first.
fetch-depth: 0 on the security scan Gitleaks scans your full git history, not just the latest files. A secret committed three months ago and then deleted from the file is still in your git history and still a liability. Full depth is the only way to catch it.
gitleaks Developers commit secrets by accident constantly. A .env file that slipped through, a hardcoded API key during a late-night debug session. Gitleaks catches it before the code ever reaches your server. If it finds something, the deploy never happens.
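To make that concrete, here's a toy sketch of the kind of pattern matching a secret scanner does. The two regexes are my own illustrative examples; gitleaks ships a far larger, tuned ruleset and scans your whole git history, not just strings.

```python
import re

# Two illustrative patterns; real scanners ship hundreds of tuned rules.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"),
}

def scan(text):
    """Return the names of all patterns that match the given text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

print(scan('aws_key = "AKIAIOSFODNN7EXAMPLE"'))  # ['aws_access_key']
```

The "AKIAIOSFODNN7EXAMPLE" string is AWS's own documentation placeholder, which is exactly why scanners allow suppressing known false positives.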
hadolint A Dockerfile linter. Catches real problems like installing packages without pinning versions (your build works today, breaks in three months when a new version releases), running as root when you shouldn't, unnecessary layers that bloat your image. failure-threshold: error means it won't block the pipeline over style suggestions; only actual problems.
set -euo pipefail Without this, a failed command in the deploy script gets silently ignored and the script keeps running. With it, any failure stops everything immediately. You want to know about failures, not have them hidden.
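If you want to see the difference for yourself, here's a quick self-contained demonstration that runs both variants through bash via Python's subprocess module; nothing project-specific, just the standard library.

```python
import subprocess

# Without set -e: the failed command is ignored and the script keeps going.
loose = subprocess.run(
    ["bash", "-c", "false; echo reached"],
    capture_output=True, text=True,
)

# With set -euo pipefail: the first failure aborts the script immediately.
strict = subprocess.run(
    ["bash", "-c", "set -euo pipefail; false; echo reached"],
    capture_output=True, text=True,
)

print(loose.returncode, repr(loose.stdout.strip()))    # 0 'reached'
print(strict.returncode, repr(strict.stdout.strip()))  # 1 ''
```

In a deploy script, that first behavior is the dangerous one: a failed git fetch or docker build would be shrugged off and the script would happily keep deploying.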
git reset --hard instead of git pull Pull can fail on merge conflicts or a dirty working tree. Reset is forceful. It always gets you to exactly the state of the remote branch, no negotiation.
PREV_COMMIT before pulling This is the rollback target. Saved before anything changes on the server. If the deployment breaks, this is what we go back to.
--remove-orphans If you rename or remove a service in your compose file, the old container keeps running without this flag. With it, Docker cleans up containers that no longer belong to the current compose configuration.
Health check loop Docker reporting that a container started is not the same as your application being ready. Migrations need to run, connections need to establish, startup code needs to execute. The loop gives the app up to 60 seconds (10 attempts, 6 seconds apart) to become healthy before declaring success or failure.
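If you want to run the same check by hand, say against a staging box, here's a Python equivalent of that loop. The defaults mirror the pipeline's timings (10 attempts, 6 seconds apart); the URL is whatever your health endpoint is.

```python
import time
import urllib.request

def wait_until_healthy(url, attempts=10, delay=6, timeout=5):
    """Poll a health endpoint until it returns HTTP 200 or attempts run out."""
    for i in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                status = resp.status
        except Exception:
            # Connection refused, DNS failure, timeout, non-2xx response...
            status = 0
        print(f"Health check {i}/{attempts} - HTTP {status}")
        if status == 200:
            return True
        if i < attempts:
            time.sleep(delay)
    return False
```

Usage is a one-liner: wait_until_healthy("http://your-server/health/") returns True once the app answers, False if it never does.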
Automatic rollback If the health check times out, the pipeline goes back to PREV_COMMIT, rebuilds, and restarts. Your server goes back to the last working state without you doing anything. exit 1 marks the pipeline as failed so GitHub notifies you.
docker image prune -f Every build leaves behind an old image. They accumulate silently. On a small EC2 instance with a 20GB root volume, you can run out of disk faster than you'd expect. This cleans up images not currently in use after every successful deploy.
if: always() on key cleanup The SSH private key gets deleted from GitHub's runner whether the deploy succeeded, failed, or was cancelled. It never gets left on infrastructure you don't control.
What the Full Flow Looks Like
With everything set up, here's the exact sequence when a developer runs git push origin main:
- GitHub detects the push and queues the pipeline
- A fresh Ubuntu machine starts up (not your server)
- Your code is downloaded onto that machine
- Gitleaks scans every file for secrets
- Hadolint checks your Dockerfile
- If both pass, a second machine starts up
- Your SSH key is written to that machine
- It connects to your server over SSH
- Your server pulls the latest code
- Docker rebuilds and restarts all containers
- The pipeline checks your health endpoint up to 10 times
- If healthy: old images are cleaned up, pipeline shows green
- If not healthy: server automatically reverts to previous version, pipeline shows red
The whole thing takes about 2-3 minutes from push to live, depending on your build time.
Taking It Further
This pipeline is intentionally lean — it does what you need without unnecessary complexity. But there are natural next steps when your project grows:
Add test execution between the security scan and deploy jobs. Run your test suite on every push and only deploy if tests pass.
Add Slack or email notifications using the slackapi/slack-github-action action. Getting a message when a production deployment fails is far better than finding out from a user.
Add a manual approval gate for production environments using GitHub's environment protection rules. Useful when you have a separate staging environment and want a human to sign off before production deploys.
Push images to a registry (ECR, Docker Hub, GitHub Container Registry) instead of building on the server. This decouples the build from the deployment, makes rollbacks faster, and saves build time on every deploy.
The Shift That Matters
The real value of this isn't saving a few minutes per deployment. It's the reliability and consistency. Every deployment follows the exact same process. There are no "I forgot to restart the worker" incidents. The health check means you know immediately if something broke. The rollback means your users don't stay impacted while you debug.
Once you deploy this way, going back to manual SSH deployments feels genuinely uncomfortable because it is. This is how production deployments should work.
The full pipeline is fewer than 60 lines of YAML. For what it gives you, that's one of the better returns on investment in the DevOps toolbox.
That's the whole thing. No more 2 AM SSH sessions, no more hoping the deploy worked, no more finding out something broke from a user.
If you're building serious infrastructure, the MongoDB replica set and SonarQube production setup posts follow the same approach. Real problems, real fixes. You can find them at All Articles.
Push code. Go touch grass. Your server knows what to do.


