The $0 CI/CD Pipeline: GitHub Actions for Solo Projects
Every solo project I run has the same problem at some point: the deploy process is whatever I remember it being. SSH in, pull the latest code, restart the service, hope nothing broke. It works until it doesn't, and it doesn't at the worst possible time.
GitHub Actions fixes this for $0. The free plan includes 2,000 minutes per month on GitHub's hosted runners for private repositories, and public repositories don't count against the quota at all. That's enough to lint, test, and deploy every project a solo builder will realistically maintain.
The Minimum Viable Pipeline
The simplest useful pipeline does three things on every push to main: lint the code, run the tests, and deploy if both pass. Here's the exact workflow file for a Node.js project:
```yaml
name: Deploy on Push

on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint
      - run: npm test
      - name: Set up SSH key
        if: success()
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
      - name: Deploy
        run: |
          ssh -i ~/.ssh/id_ed25519 -o StrictHostKeyChecking=no \
            ${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }} \
            "cd /opt/myapp && git pull && npm ci --production && pm2 restart myapp"
```
Drop that in .github/workflows/deploy.yml and you have a pipeline. Every push to main runs lint and tests. If both pass, it SSHs into your server and pulls the update. Total configuration time: about ten minutes, most of it spent adding your SSH key to GitHub Secrets.
The npm ci instead of npm install matters. It installs exactly what the lockfile specifies and fails outright if package.json and the lockfile disagree, so your CI environment matches your local environment. One less source of "works on my machine."
Self-Hosted Runners: Unlimited Free Minutes
The 2,000-minute free tier is generous, but if you're running tests on every push across multiple repositories, it can get tight. Self-hosted runners solve this permanently.
A self-hosted runner is a small agent that runs on your own hardware. GitHub sends it work, it executes the jobs, and reports back. The minutes are free because you're providing the compute.
Setting one up on a Mac Mini takes about five minutes:
```shell
mkdir ~/actions-runner && cd ~/actions-runner
curl -o actions-runner.tar.gz -L \
  https://github.com/actions/runner/releases/download/v2.321.0/actions-runner-osx-arm64-2.321.0.tar.gz
tar xzf actions-runner.tar.gz
./config.sh --url https://github.com/YOUR_USER/YOUR_REPO --token YOUR_TOKEN
./svc.sh install
./svc.sh start
```

(Runners register against a specific repository or organization, so the config URL points at a repo, not just your username. The registration token comes from that repo's Settings → Actions → Runners page.)
That last pair of commands registers the runner as a launchd service (a user LaunchAgent on macOS), so it comes back automatically after a reboot and login. Once it's running, switch your workflow to use it:
```yaml
jobs:
  build-and-deploy:
    runs-on: self-hosted
```
One line change. Everything else stays the same. Your tests now run on your Mac Mini instead of GitHub's servers, using your hardware, your network, and zero of your free minutes.
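If you later add a second runner, labels let you target a specific one. Self-hosted runners register with default labels describing their platform (on a Mac Mini: self-hosted, macOS, ARM64), and runs-on accepts a list that a runner must match in full:

```yaml
jobs:
  build-and-deploy:
    # only runners carrying all three labels pick up this job
    runs-on: [self-hosted, macOS, ARM64]
```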
The practical benefit goes beyond cost. Self-hosted runners have access to your local network. That deploy step that SSHs into your server? If the runner is on the same machine or network, it's a localhost connection instead of a round trip through the public internet. Faster, simpler, no firewall rules to manage.
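Taken to its conclusion: if the runner lives on the deploy box itself, the SSH hop disappears entirely. A sketch of that deploy step, assuming the same /opt/myapp layout as the workflow above:

```yaml
- name: Deploy locally
  run: |
    # the runner is on the target machine, so no SSH needed
    cd /opt/myapp
    git pull
    npm ci --production
    pm2 restart myapp
```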
One thing to be aware of: self-hosted runners execute whatever code your workflow tells them to. On a public repository, that means anyone who opens a pull request can potentially run code on your machine. For private repos — which is what most solo builders are working with — this isn't a concern. For public repos, use GitHub's hosted runners for PR workflows and reserve your self-hosted runner for pushes to main, where you control what gets merged.
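One way to enforce that split inside a single workflow is to pick the runner with an expression, so pull requests stay on GitHub's hosted runners and only pushes to main touch your machine:

```yaml
on:
  pull_request:
  push:
    branches: [main]

jobs:
  test:
    # hosted runner for untrusted PR code, self-hosted for trusted pushes to main
    runs-on: ${{ github.event_name == 'pull_request' && 'ubuntu-latest' || 'self-hosted' }}
```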
A reasonable setup for someone with a Mac Mini and three or four repositories: the Mini handles all of them. Since runners are registered per repository on a personal account, either register a runner instance per repo in separate directories, or move the repos into a free GitHub organization and share one org-level runner. GitHub queues jobs if two repos push at the same time. For solo work, the queue is almost never more than one deep. The runner uses minimal resources when idle — a few megabytes of RAM and effectively zero CPU.
Deploy on Merge Versus Manual Approval
There are two schools of thought on when deploys should happen, and the right answer depends on what you're deploying.
Deploy on merge means every push to main goes straight to production. The workflow above does exactly this. It's the right choice when:
- You're the only person pushing code
- You have tests that catch the things that matter
- The service can tolerate a few minutes of downtime if you need to roll back
- Speed of iteration matters more than ceremony
For a blog, a personal API, a side project with users who won't notice 30 seconds of downtime — deploy on merge. The feedback loop is tight: push, watch the action run, see it live in two minutes.
Manual approval adds a gate. The pipeline runs lint and tests automatically, but the deploy step waits for you to click a button in the GitHub UI. Here's how to add it:
```yaml
name: Deploy with Approval

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint
      - run: npm test
  deploy:
    needs: test
    runs-on: self-hosted
    environment: production
    steps:
      - name: Deploy
        run: |
          ssh deploy@server "cd /opt/myapp && git pull && npm ci --production && pm2 restart myapp"
```
The environment: production line is the key. In your repository settings under Environments, create a "production" environment and enable "Required reviewers." GitHub will pause the workflow after the test job passes and wait for your approval before deploying.
Use manual approval when:
- The service handles money or sensitive data
- Downtime is expensive or visible to clients
- You're deploying database migrations that can't be rolled back easily
- You want to batch several commits into one deploy
I use deploy-on-merge for content sites and internal tools. I use manual approval for anything client-facing or anything that touches a database schema. The pipeline is the same either way — the only difference is whether the deploy step auto-fires or waits.
There's a middle ground worth considering: deploy on merge during the day, require approval outside business hours. GitHub doesn't support time-based rules natively, but you can add a conditional to your workflow that checks the hour and skips the deploy step on nights and weekends. The idea is that a failed deploy at 3 PM is a mild inconvenience. A failed deploy at 3 AM, when you're asleep, is a service outage that lasts until morning.
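A sketch of that gate as a step script. The 9-to-6, Monday-to-Friday window is an assumption — adjust it to your own hours, and remember that GitHub's hosted runners report time in UTC:

```shell
#!/bin/sh
# Succeeds (exit 0) only inside the assumed deploy window:
# 09:00-17:59, Monday (1) through Friday (5), runner's local time.
in_deploy_window() {
  hour=$1
  dow=$2
  [ "$hour" -ge 9 ] && [ "$hour" -lt 18 ] && [ "$dow" -le 5 ]
}

if in_deploy_window "$(date +%H)" "$(date +%u)"; then
  echo "within deploy window: deploying"
  # the actual deploy commands would go here
else
  echo "outside deploy window: skipping auto-deploy"
fi
```

Run as a workflow step, the else branch simply exits cleanly, so the job stays green and the commits wait for the next in-hours push.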
The Pieces Worth Automating
Beyond the basic lint-test-deploy cycle, a few additions pay for themselves immediately.
Caching dependencies. Node modules, pip packages, Go modules — whatever your stack uses, cache it between runs. Without caching, every workflow run downloads everything from scratch. With it, the install step drops from minutes to seconds.
```yaml
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-npm-
```
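For npm specifically, setup-node can handle this with one extra line — it configures an equivalent cache of the npm download directory keyed on the lockfile for you:

```yaml
- uses: actions/setup-node@v4
  with:
    node-version: 20
    cache: npm
```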
Branch protection. In your repository settings, require the CI workflow to pass before any PR can merge. This turns your pipeline from a notification system into a gate. Broken code physically cannot reach main. Even when you're the only contributor, future-you at 2 AM will appreciate the guardrail.
Notifications on failure. A pipeline that fails silently is worse than no pipeline. GitHub sends email notifications by default, but a Slack webhook or a simple curl to your own notification endpoint is more reliable:
```yaml
- name: Notify on failure
  if: failure()
  run: |
    curl -X POST ${{ secrets.WEBHOOK_URL }} \
      -H "Content-Type: application/json" \
      -d '{"text":"Deploy failed: ${{ github.repository }} @ ${{ github.sha }}"}'
```
What Not to Build
The temptation with CI/CD is to keep adding stages until your pipeline looks like a Fortune 500 company's. Resist it.
You don't need separate staging and production environments if you're the only developer and your test suite is solid. You don't need canary deployments if your service handles 50 requests per minute. You don't need a dedicated infrastructure-as-code step if your server is a single Mac Mini running four containers.
Every stage you add is a stage that can break, a stage that consumes minutes, and a stage you have to debug at midnight when a deploy hangs. The right pipeline for a solo project is the smallest one that catches real problems. Lint catches syntax and style issues. Tests catch logic bugs. Deploy automation catches the "I forgot to restart the service" class of errors. That covers 95% of what goes wrong.
If you find yourself writing a pipeline that takes longer to run than the code change it's deploying, you've over-engineered it.
The Real Cost of Not Having a Pipeline
The $0 price tag makes it easy to procrastinate setting this up. The deploy script you run manually works fine. You'll add CI later, when the project is more mature.
But the cost of manual deploys isn't the time spent deploying. It's the time spent not deploying. When shipping a change requires SSH and three commands and a mental checklist, you unconsciously batch changes into bigger, riskier releases. You deploy less often. When you do deploy, more things can go wrong because more things changed. When something does go wrong, it's harder to identify which change caused it.
A pipeline that deploys on every push to main inverts this. Changes get small because there's no friction cost to shipping them. Small changes are easy to review, easy to roll back, easy to reason about. The deploy frequency goes up and the risk per deploy goes down.
This is the same principle that makes continuous integration work at companies with hundreds of engineers. It works even better for solo builders, because there's no coordination overhead. You push, it ships. The entire feedback loop is you and your pipeline.
Set up the minimum viable pipeline. Ten minutes of configuration buys you a deploy process that works the same way at 2 AM as it does at 2 PM, whether you're focused or exhausted or deploying from your phone. That consistency is worth more than any feature you'd build in the same ten minutes.