Why m9m exists
Most automation tools ask you to run a stack. m9m is a single 30 MB Go binary that runs your workflows — n8n JSON, scheduled jobs, HTTP pipelines, and AI agents — without a Node runtime, without a mandatory database, and without a container. Download it. Run it. You're done.
The problem we kept hitting
Workflow engines in the Node ecosystem are powerful and slow. A typical n8n install wants ~512 MB of memory, a database, and a container pipeline to deploy. Cold start is 3 s. Concurrency tops out around 50 workflows per process. None of that is a problem at demo scale — it becomes a problem the week you try to migrate 200 of those workflows onto a $10 VPS, or the month a customer asks you to embed the engine in an on-prem deployment.
Meanwhile, the agent side of the industry runs on its own stack: LangGraph, Temporal, custom Python orchestrators. Those are excellent at agent graphs, and uninterested in your n8n workflows. So you run two automation stacks: one for business flows, one for agents.
The m9m bet
m9m is a Go-native runtime that:
- Runs n8n workflow JSON unchanged: same node names, same expression syntax (`{{ $json.field }}`), same triggers. Import and go.
- Ships as one binary (~30 MB download, ~300 MB container) with no Node runtime and no required database.
- Treats agent orchestration as first-class: sandboxed CLI nodes for Claude Code / Codex / Aider, MCP tool exposure, checkpoints, human review steps.
- Keeps the enterprise features free: audit logs, Git-based versioning, multi-workspace support, Prometheus metrics.
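To make "unchanged" concrete, here is a minimal workflow in n8n's standard export format — the kind of file m9m is meant to import as-is. The URL and field names are illustrative, but the structure (`nodes`, `connections`, `parameters`, and the `={{ ... }}` expression prefix) follows n8n's own export shape:

```json
{
  "name": "Fetch and tag status",
  "nodes": [
    {
      "name": "HTTP Request",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 1,
      "position": [250, 300],
      "parameters": { "url": "https://example.com/api/status" }
    },
    {
      "name": "Set",
      "type": "n8n-nodes-base.set",
      "typeVersion": 1,
      "position": [450, 300],
      "parameters": {
        "values": {
          "string": [
            { "name": "status", "value": "={{ $json.status }}" }
          ]
        }
      }
    }
  ],
  "connections": {
    "HTTP Request": {
      "main": [[{ "node": "Set", "type": "main", "index": 0 }]]
    }
  }
}
```

The same file should load into either runtime; nothing in it is m9m-specific.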
The numbers
From the project README, reproducible with `m9m benchmark`; the "vs" baselines are a stock n8n install:
- 500 ms cold start (vs ~3 s)
- ~150 MB memory at rest (vs ~512 MB)
- 300 MB container (vs ~1.2 GB)
- 500 concurrent workflows per process (vs ~50)
- 5–10× faster workflow execution on typical pipelines
When m9m is a good fit
- You're running n8n and you want the same workflows at a tenth of the infra cost.
- You need to embed a workflow engine inside your own product (Go/Python/Node SDKs).
- You're building agent products and want sandboxed CLI execution, MCP, and audit logs without gluing five tools together.
- You're a platform team supporting business ops + data + AI teams with one runtime.
When to look elsewhere
- You need a visual drag-and-drop editor with a mature UX right now — n8n's editor is still more polished for non-technical users.
- You need long-duration (weeks/months) durable workflows with strong consistency guarantees: Temporal is still the better tool at that scale.
- Your agents are Python-first research pipelines — LangGraph or a custom runtime will feel more natural than m9m for pure experimentation.
Where to next
- Features — nodes, expressions, triggers, credentials, SDKs.
- Agent core — CLI nodes, MCP, sandboxing.
- m9m vs n8n, vs Temporal, vs LangGraph.