Deploying m9m on CapRover
Step-by-step guide to deploying m9m on a CapRover host — captain-definition, Dockerfile, persistent volume, HTTPS, and backups.
CapRover is a self-hosted PaaS that makes deploying Docker apps roughly as easy as Heroku. This guide walks through deploying m9m — the binary, the web UI, and a small persistent state directory — on a CapRover host, with HTTPS, backups, and a sensible resource envelope.
If you are deploying m9m for the first time, this is the shortest serious path.
What you need
- A CapRover instance with a wildcard domain set up (e.g. `*.apps.example.com`).
- Root or SSH access to the CapRover host for a one-time volume setup.
- An existing m9m workflow JSON, or the willingness to use `m9m demo` for the first run.
Overview of the deploy
- One CapRover app: `m9m`.
- One persistent volume mounted at `/data` for workflow definitions, credentials (encrypted), audit logs, and run artifacts.
- A small Dockerfile that pulls the official m9m binary and exposes port 8080.
- An environment variable for the encryption key; everything else defaults.
Step 1 — Write the Dockerfile
```dockerfile
# syntax=docker/dockerfile:1.7
FROM alpine:3.20

# "latest" resolves via GitHub's releases/latest redirect; a pinned tag
# (e.g. v1.2.3) uses the releases/download/<tag>/ path instead.
ARG M9M_VERSION=latest

RUN apk add --no-cache curl ca-certificates && \
    if [ "$M9M_VERSION" = "latest" ]; then \
        url="https://github.com/neul-labs/m9m/releases/latest/download/m9m-linux-amd64"; \
    else \
        url="https://github.com/neul-labs/m9m/releases/download/${M9M_VERSION}/m9m-linux-amd64"; \
    fi && \
    curl -fsSL "$url" -o /usr/local/bin/m9m && \
    chmod +x /usr/local/bin/m9m

VOLUME ["/data"]
EXPOSE 8080

ENV M9M_DATA_DIR=/data \
    M9M_LISTEN=0.0.0.0:8080

CMD ["m9m", "serve"]
```
Check the m9m releases page for the exact tag format (e.g. whether tags carry a `v` prefix) and set `M9M_VERSION` accordingly.
Step 2 — captain-definition
Next to the Dockerfile, a captain-definition file:
```json
{
  "schemaVersion": 2,
  "dockerfilePath": "./Dockerfile"
}
```
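If you deploy by tarball, the archive just needs these two files at its root. A minimal sketch of packaging it; the `caprover` npm CLI with `-t`/`-a` flags is an assumption (check `caprover deploy --help`), and the scratch directory here only stands in for your real project directory:

```shell
set -eu
workdir=$(mktemp -d) && cd "$workdir"      # stand-in for your project directory
cat > captain-definition <<'EOF'
{ "schemaVersion": 2, "dockerfilePath": "./Dockerfile" }
EOF
printf 'FROM alpine:3.20\n' > Dockerfile   # use the real Dockerfile from Step 1
# Both files must sit at the root of the archive, not inside a subdirectory.
tar -czf deploy.tar.gz captain-definition Dockerfile
tar -tzf deploy.tar.gz
# caprover deploy -t ./deploy.tar.gz -a m9m   # requires a prior `caprover login`
```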
Step 3 — Create the CapRover app
From the CapRover dashboard:
- Apps → Create New App — name it `m9m`, enable persistent data.
- App Configs → Persistent Directories — add `/data` → `m9m-data`.
- App Configs → Environment Variables — set `M9M_ENCRYPTION_KEY` to a 32-byte random value. Generate with `openssl rand -hex 32`.
- Deployment → Method 3 (tarball / git) — push your `Dockerfile` + `captain-definition` tarball or wire up a git remote. On first deploy CapRover will build the image, pull the binary, and start the container.
- HTTP Settings — set the domain (e.g. `m9m.apps.example.com`), enable HTTPS, enable “force HTTPS.”
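The key from the environment-variable step can be generated and sanity-checked in one go (`openssl` assumed present on your workstation):

```shell
set -eu
# 32 random bytes, hex-encoded: 64 characters for M9M_ENCRYPTION_KEY.
M9M_ENCRYPTION_KEY=$(openssl rand -hex 32)
echo "${#M9M_ENCRYPTION_KEY}"   # 64
```

Paste the value into the CapRover dashboard rather than committing it anywhere.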
Step 4 — Verify
```shell
curl -I https://m9m.apps.example.com/
# HTTP/2 200
curl https://m9m.apps.example.com/healthz
# {"ok":true}
```
Log in at the HTTPS URL, create an initial admin user when prompted, and import your first workflow.
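Right after a deploy the container can take a few seconds to come up, so the first check may fail. A small polling helper saves hammering refresh (the URL is the example domain from above):

```shell
# Poll a URL until it answers, with a bounded number of attempts.
wait_healthy() {
  url=$1
  tries=${2:-30}
  i=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1   # give up after $tries failed attempts
    sleep 2
  done
}
# wait_healthy https://m9m.apps.example.com/healthz && echo "up"
```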
Step 5 — Back up /data
CapRover persistent directories live on the host under /captain/data/. For backups, a nightly cron on the host is the simplest thing that works:
```shell
# on the CapRover host, as root
cat >/etc/cron.daily/m9m-backup <<'EOF'
#!/bin/sh
set -eu
DEST=/var/backups/m9m
mkdir -p "$DEST"
tar -C /captain/data -czf "$DEST/m9m-data-$(date +%F).tgz" m9m-data
# prune archives older than 30 days (files only, never the directory itself)
find "$DEST" -type f -name '*.tgz' -mtime +30 -delete
EOF
chmod +x /etc/cron.daily/m9m-backup
```
If the host’s filesystem is itself backed up (BorgBackup, Restic, rsnapshot), you can skip the cron and just include /captain/data/m9m-data in the existing job.
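A backup you have never restored is a hope, not a backup. This self-contained dry run mirrors the cron job’s tar invocation against a scratch directory standing in for `/captain/data`:

```shell
set -eu
root=$(mktemp -d)                              # stands in for the real host paths
mkdir -p "$root/captain-data/m9m-data"
echo '{"workflows":[]}' > "$root/captain-data/m9m-data/state.json"

# Backup: same tar shape as the cron job (-C parent dir, archive m9m-data).
tar -C "$root/captain-data" -czf "$root/m9m-data-backup.tgz" m9m-data

# Restore into an empty target and confirm the file survived the round trip.
mkdir "$root/restore"
tar -C "$root/restore" -xzf "$root/m9m-data-backup.tgz"
cat "$root/restore/m9m-data/state.json"        # {"workflows":[]}
```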
Step 6 — Observability
m9m exposes Prometheus metrics at /metrics. In CapRover, you can either:
- Scrape directly from an external Prometheus with HTTP auth in front (CapRover Basic Auth, or a reverse-proxy middleware).
- Sidecar — deploy a Prometheus + Grafana app on the same CapRover and add m9m to its scrape config.
OpenTelemetry tracing is enabled by setting OTEL_EXPORTER_OTLP_ENDPOINT to your collector’s URL; you get one span per workflow node out of the box.
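For the external-Prometheus route, the scrape job is ordinary static config. A sketch, assuming CapRover Basic Auth in front of the app; the job name and credentials are placeholders:

```yaml
scrape_configs:
  - job_name: m9m
    scheme: https
    metrics_path: /metrics
    basic_auth:
      username: prometheus        # whatever user CapRover Basic Auth is set up with
      password: REPLACE_ME
    static_configs:
      - targets: ["m9m.apps.example.com"]
```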
Resource sizing
For a typical 50-workflow tenant:
- 2 vCPU, 2 GB RAM is comfortable headroom.
- 10 GB of `/data` lasts roughly six months of audit logs at default settings. Turn on log rotation (`M9M_AUDIT_RETENTION=30d`) if storage is tight.
- If you run sandboxed CLI agents heavily, add CPU — namespace isolation is cheap, agent inference is not.
Upgrading
- Update `M9M_VERSION` in the Dockerfile (or pin to a new tag).
- Re-deploy from CapRover.
- Verify `/healthz` and run one canary workflow.
- Roll back if needed by redeploying the previous tag. Workflow JSON and credentials are forward/backward-compatible within the same major version.
Common problems
- Port 8080 collision with another CapRover app — change `M9M_LISTEN` to `:8081` and update the CapRover container port mapping.
- Webhook workflows 404-ing — CapRover’s default `nginx` config strips some headers; if you use webhook signature verification, make sure `X-Hub-Signature-256` (or the equivalent) is preserved. Adjust the app’s custom nginx snippet.
- Cold starts feel slow — CapRover restarts the container on deploys. Cold start should still be under 1 s; if it isn’t, check the `/data` volume for thousands of stale run directories and enable retention.
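For the webhook-header case, an explicit pass-through in the app’s custom nginx config (CapRover HTTP Settings → edit default nginx config) is one way to force it; a sketch, with the header name depending on your webhook provider:

```nginx
# Forward the signature header to the container explicitly.
proxy_set_header X-Hub-Signature-256 $http_x_hub_signature_256;
```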
Related
Need help shipping agents or migrating off n8n?
Neul Labs — the team behind m9m — takes on a limited number of consulting engagements each quarter. We help teams migrate n8n workflows, build custom Go nodes, sandbox AI agents in production, and design automation platforms that don't collapse under load.