Yesterday was one of those magical days where everything just clicks. With YouTube – ❤️ Outdoor Boys – and then the NCAA College Football Championship between Miami and Indiana on in the background, I set out to reduce my cloud hosting costs by migrating Leaderboard Fantasy from Google Cloud Platform to a VPS provider. What I didn't expect was to also migrate all my Git repos from GitLab to GitHub, set up a complete CI/CD pipeline with GitHub Actions, and implement an automated backup strategy—all in one day!
Total time: about 7-8 hours. In real life, this would be a multi-week project involving careful planning, testing, and probably a few production incidents. Instead, I had Claude Code as my DevOps engineer, and we shipped the whole thing in a single day.
What made this possible wasn't just execution speed—it was meticulous planning. Before touching any production systems, Claude and I created detailed migration plans with phased rollouts, verification checklists, and rollback procedures. These documents became our playbook, and having them meant we could move fast without breaking things.
GCP is fantastic for scalability and managed services, but for a side project with modest traffic, I was paying more than necessary. A simple VPS with dedicated resources would cost a fraction of the price and give me more control.
The challenge? I had a full production stack to migrate: the web app, the data API, MongoDB, Redis, and nginx.
And I wanted zero downtime and a seamless transition for my users.
Before this migration, the entire stack was running on Google Cloud Platform using various managed services: Cloud Run for the application containers, MongoDB Atlas for the database, Artifact Registry for images, and a serverless VPC connector tying it together.
It worked, but the monthly bill added up fast—especially Cloud Run's per-request pricing and the VPC connector costs. Plus, I had a working Docker setup on my home development server that I'd been using for testing. That existing Docker Compose configuration became the blueprint for the production migration.
The first step was making sure Claude could actually interact with all the systems involved. This meant setting up CLI authentication for multiple services.
# Install and authenticate
brew install gh
gh auth login
The GitHub CLI became our primary interface for creating repos, managing secrets, and working with pull requests. Claude was able to use it autonomously once authenticated.
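A few representative commands (the repo and secret names mirror ones that appear later in this post; treat the exact invocations as illustrative, not a transcript):

```bash
# Store the deploy key as a repo secret (base64-encoded, matching how the
# deploy workflow decodes it) and open a PR -- all without leaving the terminal
base64 < ~/.ssh/vpshosting_deploy | gh secret set VPS_SSH_KEY --repo jking-ai/lfs-infrastructure
gh pr create --title "Add VPS deploy workflow" --body "Adds the reusable deployment workflow"
```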
# Install and authenticate
brew install glab
glab auth login
The GitLab CLI let us interact with our existing repos—cloning, checking history, and eventually archiving the CI/CD configurations.
For VPS access, we set up SSH key-based authentication:
# Generate a dedicated deployment key
ssh-keygen -t ed25519 -C "deploy@leaderboard" -f ~/.ssh/vpshosting_deploy
# Copy to server
ssh-copy-id -i ~/.ssh/vpshosting_deploy.pub king@your-vps-host
Once SSH was configured, Claude could execute commands on the remote server, check service status, and deploy updates—all from the CLI.
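In practice that looked like ordinary one-off SSH commands, along the lines of (host and paths as configured above):

```bash
# Check container status and tail recent logs on the VPS
ssh -i ~/.ssh/vpshosting_deploy king@your-vps-host \
  "cd /opt/leaderboard/app && sudo docker compose ps && sudo docker compose logs --tail=50 nginx"
```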
We started by creating a dedicated lfs-infrastructure repository to hold all deployment configurations:
gh repo create jking-ai/lfs-infrastructure --private --description "Infrastructure configs for Leaderboard Fantasy"
This repo became the single source of truth for the deployment: the Docker Compose stack, nginx configuration, backup scripts, and the GitHub Actions deployment workflows.
The entire application runs as a set of Docker containers orchestrated by Docker Compose:
services:
  leaderboard-web:
    image: us-central1-docker.pkg.dev/leaderboard-fantasy/lfs-containers/leaderboard-web:latest
    depends_on:
      - lfs-data
      - redis
  lfs-data:
    image: us-central1-docker.pkg.dev/leaderboard-fantasy/lfs-containers/lfs-data:latest
    depends_on:
      - mongodb
  mongodb:
    image: mongo:7.0
    volumes:
      - mongodb_data:/data/db
  nginx:
    image: us-central1-docker.pkg.dev/leaderboard-fantasy/lfs-containers/leaderboard-nginx:latest
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel run
Using Cloudflare Tunnel for ingress meant no exposed ports and automatic SSL—a nice security win.
This was the piece that made everything else possible. Rather than exposing my VPS directly to the internet with open ports, I decided to put everything behind Cloudflare—their free tier includes DDoS protection, SSL termination, and most importantly, Cloudflare Tunnel.
Internet → Cloudflare Edge (SSL/DDoS) → Tunnel → nginx:80 → App Containers
Traffic never hits my server directly. Instead, Cloudflare's edge network handles SSL termination and security, then routes requests through an encrypted tunnel to my nginx container. The VPS firewall can block all incoming traffic except SSH—no port 80 or 443 exposed.
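A minimal sketch of that lockdown, assuming the VPS runs Ubuntu with ufw (the exact rules on my server may differ):

```bash
# Reject everything inbound except SSH; cloudflared makes outbound-only connections
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH
sudo ufw enable
sudo ufw status verbose
```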
First, we moved DNS management to Cloudflare:
# Verify nameserver propagation
dig NS leaderboardfantasy.com +short
# Should show: ada.ns.cloudflare.com, bob.ns.cloudflare.com (or similar)
This involved updating nameservers at my domain registrar and waiting for propagation. Claude helped me create a detailed checklist to track each step—we weren't about to fat-finger a DNS change on a production domain.
In the Cloudflare Zero Trust Dashboard, we created a tunnel named leaderboard-vpshosting and configured the hostname mappings:
| Hostname | Target |
|---|---|
| leaderboardfantasy.com | http://nginx:80 |
| api.leaderboardfantasy.com | http://nginx:80 |
The tunnel token gets stored as an environment variable, and the cloudflared container handles the connection:
cloudflared:
  image: cloudflare/cloudflared:latest
  container_name: leaderboard-cloudflared
  restart: unless-stopped
  command: tunnel --no-autoupdate run --token ${CLOUDFLARE_TUNNEL_TOKEN}
  networks:
    - leaderboard-network
  depends_on:
    nginx:
      condition: service_healthy
With the tunnel in place, we locked everything down: the firewall drops all inbound traffic except SSH, and the only path to the application is through Cloudflare.

Behind the proxy, nginx only sees Cloudflare's addresses in the X-Forwarded-For headers, so the config needed Cloudflare's IP ranges to properly identify real client IPs:
# Trust Cloudflare proxy headers
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;
# ... (full list from cloudflare.com/ips)
real_ip_header CF-Connecting-IP;
Claude didn't just execute commands—it created comprehensive planning documents with phased rollouts, verification steps, and rollback procedures. Here's a snippet from our Cloudflare setup plan:
## Implementation Checklist
- [x] Phase A: DNS migrated to Cloudflare
- [x] Phase B: Tunnel created in Zero Trust dashboard
- [x] Phase C: docker-compose.yml updated with cloudflared service
- [x] Phase D: Deployed to server
- [x] Phase E: Verified HTTPS working
- [x] Phase F: Security hardening complete (port 80 closed)
Having these checklists meant we could verify each phase before moving to the next. When something went wrong (and things always go wrong), we knew exactly where we were and how to roll back.
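For Phase E, a quick sanity check is enough to confirm traffic actually flows through Cloudflare; something along these lines (output will vary):

```bash
# DNS should resolve to Cloudflare proxy IPs, and responses should carry CF headers
dig +short leaderboardfantasy.com
curl -sI https://leaderboardfantasy.com | grep -iE 'HTTP/|server|cf-ray'
```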
Moving MongoDB from Atlas to a self-hosted container required a careful sync. We had an existing script from a previous migration that we adapted:
#!/bin/bash
# sync-from-atlas.sh - Migrate data from MongoDB Atlas to local container
ATLAS_URI="mongodb+srv://user:pass@cluster.mongodb.net/leaderboard-db"
LOCAL_CONTAINER="leaderboard-mongodb"
# Dump from Atlas
mongodump --uri="$ATLAS_URI" --archive=/tmp/atlas-backup.gz --gzip
# Restore to local container
docker cp /tmp/atlas-backup.gz $LOCAL_CONTAINER:/tmp/
docker exec $LOCAL_CONTAINER mongorestore --archive=/tmp/atlas-backup.gz --gzip --drop
The --drop flag ensures we get an exact copy. We ran this a few times during migration to keep data in sync until the final cutover.
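Before the final cutover it's worth spot-checking that the copy matches. A quick way is to compare document counts on both sides; this is a sketch, assuming mongosh is installed on the workstation for the Atlas side:

```bash
# Compare collection counts: local container vs. Atlas (same URI as the sync script)
docker exec leaderboard-mongodb mongosh leaderboard-db --quiet --eval \
  'db.getCollectionNames().forEach(c => print(c + ": " + db[c].countDocuments()))'
mongosh "mongodb+srv://user:pass@cluster.mongodb.net/leaderboard-db" --quiet --eval \
  'db.getCollectionNames().forEach(c => print(c + ": " + db[c].countDocuments()))'
```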
This is where things got interesting. I wanted to move from GitLab CI to GitHub Actions, not just for the VPS deployment but as a complete platform shift.
Claude helped me design a reusable workflow that sets up SSH on the runner, connects to the VPS, pulls the latest images, and restarts the stack:
# .github/workflows/vpshosting-deploy.yml
name: Deploy to VPS

on:
  workflow_call:
    inputs:
      deploy_mode:
        type: string
        default: 'full'
    secrets:
      VPS_SSH_KEY:
        required: true
      VPS_HOST:
        required: true
      GCS_SERVICE_ACCOUNT_KEY:
        required: false

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Setup SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.VPS_SSH_KEY }}" | base64 -d > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          # Trust the VPS host key so the non-interactive ssh below doesn't fail
          ssh-keyscan -H "${{ secrets.VPS_HOST }}" >> ~/.ssh/known_hosts
      - name: Pull and deploy
        run: |
          ssh king@${{ secrets.VPS_HOST }} << 'EOF'
          cd /opt/leaderboard/app
          sudo docker compose pull
          sudo docker compose up -d
          EOF
The real magic was making this a reusable workflow. Both the lfs-data and leaderboard-web repos can trigger deployments by referencing it through workflow_call; a minimal caller is sketched below.
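A minimal caller sketch, assuming the shared workflow lives in the lfs-infrastructure repo (names and layout are illustrative):

```yaml
# .github/workflows/deploy.yml in lfs-data or leaderboard-web (illustrative)
name: Deploy on release tag

on:
  push:
    tags:
      - 'rel_*'

jobs:
  deploy:
    uses: jking-ai/lfs-infrastructure/.github/workflows/vpshosting-deploy.yml@main
    with:
      deploy_mode: full
    secrets: inherit   # forwards VPS_SSH_KEY, VPS_HOST, etc. from the caller repo
```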
Creating a release in GitHub now triggers the full deployment pipeline. Our workflow uses prefix-based tag matching—any tag starting with rel_ triggers a production deployment:
# Tag and release using our naming convention: rel_YYYYMMDD_HHMM
git tag -a rel_20260119_1830 -m "Production release"
git push origin rel_20260119_1830
# Or use the CLI
gh release create rel_20260119_1830 --title "Release 2026-01-19" --notes "VPS migration complete"
The workflow configuration matches on the prefix:
on:
  push:
    tags:
      - 'rel_*'
The release workflow builds the image, pushes to Artifact Registry, and deploys to production—all automatically.
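The build-and-push half isn't shown above. Roughly, the job logs in to Artifact Registry with a service-account key and pushes the tagged image; a minimal sketch, assuming the raw JSON key sits in the GCS_SERVICE_ACCOUNT_KEY secret (action versions, tags, and job layout are illustrative, not the exact workflow):

```yaml
# Illustrative build job; the real release workflow may differ
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Artifact Registry
        uses: docker/login-action@v3
        with:
          registry: us-central1-docker.pkg.dev
          username: _json_key
          password: ${{ secrets.GCS_SERVICE_ACCOUNT_KEY }}
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: us-central1-docker.pkg.dev/leaderboard-fantasy/lfs-containers/leaderboard-web:latest
```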
VPS providers often charge extra for managed backups, and I wanted more control anyway. We implemented a custom backup strategy using cron and Google Cloud Storage.
#!/bin/bash
# backup-mongodb.sh - Automated MongoDB backup to GCS
CONTAINER="leaderboard-mongodb"
BUCKET="gs://lfs-mongodb-backups"
TIMESTAMP=$(date -u +%Y-%m-%d-%H%M%S)
BACKUP_FILE="mongodb-backup-${TIMESTAMP}.gz"
# Dump database
docker exec $CONTAINER mongodump --db=leaderboard-db --archive --gzip > /tmp/$BACKUP_FILE
# Upload to GCS
gcloud storage cp /tmp/$BACKUP_FILE $BUCKET/
# Cleanup
rm /tmp/$BACKUP_FILE
echo "Backup completed: $BUCKET/$BACKUP_FILE"
To avoid accumulating backups forever, we set a 7-day retention policy:
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 7}
    }
  ]
}
Applied with:
gcloud storage buckets update gs://lfs-mongodb-backups --lifecycle-file=gcs-lifecycle.json
Backups run twice daily at 12 AM and 12 PM UTC:
0 0 * * * root /opt/leaderboard/scripts/backup-mongodb.sh >> /var/log/mongodb-backup.log 2>&1
0 12 * * * root /opt/leaderboard/scripts/backup-mongodb.sh >> /var/log/mongodb-backup.log 2>&1
If a backup fails, we send an alert via Resend:
send_failure_notification() {
  curl -X POST "https://api.resend.com/emails" \
    -H "Authorization: Bearer ${RESEND_API_KEY}" \
    -H "Content-Type: application/json" \
    -d "{
      \"from\": \"Backups <backups@leaderboardfantasy.com>\",
      \"to\": [\"admin@example.com\"],
      \"subject\": \"MongoDB Backup Failed\",
      \"html\": \"<p>Backup failed at $(date)</p>\"
    }"
}
With the infrastructure running on GitHub Actions, it made sense to fully migrate the repos too. For each repo, we created a private mirror under the jking-ai organization, pushed the full history and tags, and archived the old GitLab CI configuration:
# For each repo
gh repo create jking-ai/lfs-data --private
git remote add github git@github.com:jking-ai/lfs-data.git
git push github main --tags
# Archive old GitLab CI files
mkdir -p archive/gitlab
mv .gitlab-ci.yml archive/gitlab/
git add . && git commit -m "chore: archive GitLab CI/CD files"
By end of day, all three repos were on GitHub with working CI/CD pipelines.
Here's what we accomplished in one day:
| Task | Status |
|---|---|
| VPS infrastructure setup | Complete |
| DNS migration to Cloudflare | Complete |
| Cloudflare Tunnel (zero exposed ports) | Running |
| Docker Compose stack | Running |
| MongoDB migration from Atlas | Complete |
| GitHub Actions CI/CD | 3 repos configured |
| Automated backups to GCS | Running twice daily |
| GitLab to GitHub migration | 3 repos moved |
| Zero-downtime deployment | Working |
| Documentation | Updated |
Total commits across all repos: 23
Production downtime: ~2 minutes (DNS propagation)
Coffee consumed: Too much
A few things came together to make this day so productive:
1. Claude Code as a DevOps partner. I wasn't typing every command—I was describing what I wanted, and Claude was generating scripts, debugging issues, and executing commands. When something failed, we troubleshot together.
2. CLI tools everywhere. The GitHub CLI, GitLab CLI, and gcloud CLI meant Claude could interact with services directly. No clicking through web UIs, no copy-pasting tokens manually.
3. Existing scripts to build on. We weren't starting from zero. Adapting existing deployment scripts was faster than writing from scratch.
4. Standards documentation. My AGENTS.md files in each repo told Claude exactly how the projects were structured, what tools to use, and what patterns to follow.
5. Detailed planning documents. Before executing anything risky, Claude created comprehensive markdown files with phased rollouts, verification checklists, and rollback procedures. The MIGRATION-PLAN.md, CLOUDFLARE-TUNNEL-SETUP.md, and PRODUCTION-DOMAIN-MIGRATION.md files became our playbook—each with clear steps we could check off as we went.
Infrastructure as code is worth it. Having everything in version-controlled scripts meant we could iterate quickly. When something didn't work, we could tweak and retry without remembering what we clicked.
Test the backup restore process. We actually ran a restore to verify our backups work. Many backup strategies fail at restore time—don't skip this step.
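A restore drill is just the backup pipeline in reverse; roughly like this, where the bucket and database names match the backup script and the scratch container is illustrative:

```bash
# Pull the newest backup from GCS and restore it into a throwaway container
LATEST=$(gcloud storage ls gs://lfs-mongodb-backups/ | sort | tail -n 1)
gcloud storage cp "$LATEST" /tmp/restore-test.gz
docker run -d --name mongo-restore-test mongo:7.0
sleep 5   # give mongod a moment to start accepting connections
docker cp /tmp/restore-test.gz mongo-restore-test:/tmp/
docker exec mongo-restore-test mongorestore --archive=/tmp/restore-test.gz --gzip
docker exec mongo-restore-test mongosh leaderboard-db --quiet --eval 'db.getCollectionNames()'
```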
Cloudflare Tunnel is underrated. Zero exposed ports, automatic SSL, and easy configuration. For a side project, it's perfect.
The AI advantage is in iteration speed. Claude didn't write perfect scripts on the first try. But we could iterate 10x faster than I could alone—try something, see the error, fix it, try again.
The migration is complete, but there's always more to optimize. Those are opportunities for another day, though. For now, Leaderboard Fantasy is running smoothly in its new home, and I can focus on building features instead of managing infrastructure.
The Bottom Line
7-8 hours. Full infrastructure migration. Zero stress.
Not because I'm a DevOps expert (I'm not), but because I had an AI partner who could handle the details while I focused on the architecture and decisions.
If you're putting off infrastructure work because it seems daunting, give it a try with Claude Code. You might be surprised how much you can accomplish in a day.
–Jeremy
Thanks for reading! I'd love to hear your thoughts.
Have questions, feedback, or just want to say hello? I always enjoy connecting with readers.
Published on January 20, 2026