m9m
Positioning

Why m9m exists

Most automation tools ask you to run a stack. m9m is a single 30 MB Go binary that runs your workflows — n8n JSON, scheduled jobs, HTTP pipelines, and AI agents — without a Node runtime, without a mandatory database, and without a container. Download it. Run it. You're done.

The problem we kept hitting

Workflow engines in the Node ecosystem are powerful and slow. A typical n8n install wants ~512 MB of memory, a database, and a container pipeline to deploy. Cold start takes about 3 seconds. Concurrency tops out around 50 workflows per process. None of that is a problem at demo scale — it becomes a problem the week you try to migrate 200 of those workflows onto a $10 VPS, or the month a customer asks you to embed the engine in an on-prem deployment.

Meanwhile, the agent side of the industry runs on its own stack: LangGraph, Temporal, custom Python orchestrators. Those are excellent at agent graphs and indifferent to your n8n workflows. So you end up running two automation stacks: one for business flows, one for agents.

The m9m bet

m9m is a Go-native runtime that:

  1. Runs n8n workflow JSON unchanged — same node names, same expression syntax ({{ $json.field }}), same triggers. Import and go.
  2. Ships as one binary (~30 MB download, ~300 MB container) with no Node runtime and no required database.
  3. Treats agent orchestration as first-class: sandboxed CLI nodes for Claude Code / Codex / Aider, MCP tool exposure, checkpoints, human review steps.
  4. Keeps the enterprise features free: audit logs, Git-based versioning, multi-workspace support, Prometheus metrics.
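To make point 1 concrete, here is a minimal n8n-style workflow JSON of the kind m9m is described as importing unchanged. The node types and the {{ $json.field }} expression follow n8n's conventions; the specific workflow (a webhook that posts order details to Slack) is an illustrative example, not taken from the m9m docs:

```json
{
  "name": "Notify on new order",
  "nodes": [
    {
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "parameters": { "path": "orders", "httpMethod": "POST" }
    },
    {
      "name": "Slack",
      "type": "n8n-nodes-base.slack",
      "parameters": {
        "channel": "#sales",
        "text": "New order from {{ $json.customer }} for {{ $json.total }}"
      }
    }
  ],
  "connections": {
    "Webhook": { "main": [[ { "node": "Slack", "type": "main", "index": 0 } ]] }
  }
}
```

The claim in point 1 is that a file like this, exported from n8n, runs under m9m without edits: same node type identifiers, same connections graph, same expression interpolation.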

The numbers

The headline figures come from the project README and are reproducible by running m9m benchmark.

When m9m is a good fit

When to look elsewhere

Where to next

Need help shipping agents or migrating off n8n?

Neul Labs — the team behind m9m — takes on a limited number of consulting engagements each quarter. We help teams migrate n8n workflows, build custom Go nodes, sandbox AI agents in production, and design automation platforms that don't collapse under load.