
What is m9m? A zero-infra workflow runtime for the agent era

m9m is a single-binary Go workflow automation runtime with n8n compatibility and first-class AI agent orchestration. Here is what it is, why it exists, and where it fits.

Neul Labs · #m9m · #workflow automation · #agents · #n8n

m9m is a single-binary, Go-native workflow automation runtime that runs your n8n workflows 5–10× faster and treats AI agent orchestration as a first-class capability. It ships as a ~30 MB binary with no Node runtime, no required database, and no container dependencies. Cold start is under 500 ms. It fits on a free-tier VPS and embeds inside other products via Go, Python, and Node.js SDKs.

This article is the long-form version of that paragraph. It answers: what does m9m actually do, why does it exist, how does it compare to the other tools in the space, and when is it the right pick.

The one-paragraph answer

m9m executes workflows. A workflow is a directed graph of nodes — fetch an HTTP endpoint, run a SQL query, transform JSON, call an LLM, post to Slack. Nodes can be triggered by cron schedules, incoming webhooks, or direct CLI invocation. m9m reads the same JSON workflow format as n8n, supports the same expression syntax, and ships 32 of the most-used node types. In addition, it runs sandboxed CLI agent nodes (Claude Code, Codex, Aider) inside Linux namespaces, and exposes an MCP server with 37 tools so an external LLM can manage m9m itself.
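As a concrete sketch, here is a minimal workflow in the n8n JSON format m9m reads: a cron trigger wired to an HTTP request. The node names, URL, and parameter values are illustrative; the schema itself (nodes plus a connections map keyed by node name) is n8n's.

```json
{
  "name": "daily-status-ping",
  "nodes": [
    {
      "id": "1",
      "name": "Every morning",
      "type": "n8n-nodes-base.cron",
      "parameters": { "triggerTimes": { "item": [{ "hour": 9 }] } },
      "position": [0, 0]
    },
    {
      "id": "2",
      "name": "HTTP Request",
      "type": "n8n-nodes-base.httpRequest",
      "parameters": { "url": "https://api.example.com/status" },
      "position": [200, 0]
    }
  ],
  "connections": {
    "Every morning": {
      "main": [[{ "node": "HTTP Request", "type": "main", "index": 0 }]]
    }
  }
}
```

Because both engines read this same document, a workflow exported from n8n is, in the compatible cases, the workflow m9m runs.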

What is m9m not

  • It is not a visual workflow editor with the UX polish of n8n. There is a web UI, but it is utilitarian. The authoring experience is JSON-first.
  • It is not a long-running durable-execution engine in the Temporal sense. Workflows checkpoint and resume after crashes, but if you need multi-month workflows with cross-region replication, Temporal is a better fit.
  • It is not an agent framework. You still write your agent in PydanticAI, LangGraph, CrewAI, or the Anthropic SDK. m9m is where that agent runs.

Why it exists

Two trends collided.

Workflow engines got slow. The Node.js-based generation of tools (n8n, Tines, Pipedream) asks for ~512 MB of memory per instance, takes ~3 seconds to cold-start, and ships container images north of 1 GB. That is fine at demo scale and punishing at platform scale. A team running a few hundred workflows on a $50/month VPS ends up with two choices: pay 10× for bigger infra, or wait for everything.

Agent infrastructure fragmented. Production agent systems need retries, timeouts, checkpoints, cost caps, sandboxing, audit logs — the same infrastructure workflow engines solved a decade ago. But the agent ecosystem grew up in Python, with LangGraph and custom orchestrators, disconnected from the n8n/Zapier world. Platform teams end up running two orchestration stacks for what is, structurally, the same problem.

m9m is the bet that these two stacks should be one, written in Go, shipped as a single binary, and deployable to a $10 box.

The numbers

From the project README, reproducible with m9m benchmark:

Metric                 m9m       Node-based alternative
Cold start             ~500 ms   ~3 s
Memory (idle)          ~150 MB   ~512 MB
Container size         300 MB    1.2 GB
Concurrent workflows   500       50

The speedup on workflow execution is typically 5–10×. The compound effect of faster startup and higher concurrency is what makes m9m feasible on small infra.

How it runs your existing n8n workflows

m9m reads n8n’s workflow JSON schema directly. Node IDs, connections, trigger definitions, and parameters all map across. Expression syntax ({{ $json.field }}, {{ $node["HTTP Request"].data }}) evaluates with the same built-in function set.
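To make that concrete, here is how an expression might appear inside a node's parameters: a Slack message that interpolates the current item and the output of an upstream node. The node and parameter values are illustrative; the expression syntax is the one quoted above.

```json
{
  "name": "Notify",
  "type": "n8n-nodes-base.slack",
  "parameters": {
    "channel": "#ops",
    "text": "Deploy {{ $json.version }} finished: {{ $node[\"HTTP Request\"].data.status }}"
  }
}
```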

What doesn’t cross over:

  • Community nodes from n8n’s registry — m9m ships 32 core node types; community nodes are either re-expressed as HTTP calls or ported as custom Go nodes.
  • Credential data — re-enter credentials on the m9m side. The formats (including OAuth2 flows) match.
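A port of a community node would look roughly like the sketch below. The Node interface here is a hypothetical stand-in for m9m's actual custom-node API, which the source does not specify; the point is that a node is just a pure Go function over JSON items.

```go
package main

import (
	"fmt"
	"strings"
)

// Item is one JSON item flowing between nodes.
type Item map[string]any

// Node is a HYPOTHETICAL custom-node interface; m9m's real Go API may differ.
type Node interface {
	Execute(in []Item) ([]Item, error)
}

// SlugifyNode stands in for a ported community "slugify" node: it
// lower-cases a "title" field and swaps spaces for dashes.
type SlugifyNode struct{}

func (SlugifyNode) Execute(in []Item) ([]Item, error) {
	out := make([]Item, 0, len(in))
	for _, it := range in {
		title, _ := it["title"].(string)
		slug := strings.ReplaceAll(strings.ToLower(title), " ", "-")
		out = append(out, Item{"slug": slug})
	}
	return out, nil
}

func main() {
	items, _ := SlugifyNode{}.Execute([]Item{{"title": "Hello World"}})
	fmt.Println(items[0]["slug"])
}
```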

In practice, for the teams we have migrated, roughly 80–90% of workflows cross over unchanged, most of the rest need a small patch, and a handful need a custom node. See the migration case study for the details.

How it runs agents

A CLI agent is a node type. You point it at Claude Code, Codex, or Aider, specify a sandbox (namespaces, CPU, memory, network), and supply a prompt. m9m spawns the agent in isolation, captures input/output, and returns artifacts. No host-level escape surface. No global state.
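In workflow JSON, an agent node might look like the fragment below. The field names and values here are illustrative guesses, not m9m's documented schema; they just mirror the knobs named above (agent choice, prompt, and sandbox limits).

```json
{
  "name": "Fix failing tests",
  "type": "m9m-nodes.cliAgent",
  "parameters": {
    "agent": "claude-code",
    "prompt": "Run the test suite and fix any failures.",
    "sandbox": {
      "cpu": 2,
      "memory": "1Gi",
      "network": "deny",
      "namespaces": ["pid", "net", "mount"]
    }
  }
}
```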

An MCP server ships with m9m. It exposes 37 tools — workflow.list, run.start, run.logs, workflow.update, credential.list, and so on — over Model Context Protocol. Any MCP-aware LLM can point at m9m and manage it: re-run a failed workflow, patch a retry policy, pull audit logs. Every action is audited; every workflow edit is versioned in Git.
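At the wire level, MCP tool invocations are JSON-RPC 2.0 calls. Assuming run.start takes a workflow identifier (the argument name is illustrative; the tools/call envelope is standard MCP), a client would send something like:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "run.start",
    "arguments": { "workflowId": "daily-status-ping" }
  }
}
```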

See the agent core page for the full treatment.

Where m9m fits

Good fits:

  • Teams on n8n that want the same workflows at a tenth of the infrastructure cost.
  • Platform teams unifying business-ops, data, and agent automation on one runtime.
  • Product teams that need to embed a workflow engine inside their own product.
  • Any team shipping agent products that needs sandboxing, MCP, human review, and audit logs without gluing five tools together.

Worse fits:

  • Teams whose primary need is a polished drag-and-drop editor for non-technical users.
  • Teams with multi-month durable workflows that need strong cross-region consistency.
  • Pure research agent pipelines that don’t need a runtime — just a Python script.

FAQ

Is m9m a drop-in replacement for n8n? For the backend execution path, largely yes — 32 node types and the n8n expression engine cover the bulk of typical workflows. The editor experience is more minimal than n8n’s, so if your team heavily uses n8n’s visual editor, consider keeping n8n for authoring and using m9m for execution.

Is m9m open source? Yes, MIT-licensed. Source at github.com/neul-labs/m9m.

What language is m9m written in? Go 1.21+. Workflow execution is native Go; inline JavaScript and Python nodes are supported for custom logic.

What does m9m cost? Free. MIT license. No paid tier. Neul Labs sells consulting — design, migration, agent platform build-out, and SLA support — for teams that want it.

Need help shipping agents or migrating off n8n?

Neul Labs — the team behind m9m — takes on a limited number of consulting engagements each quarter. We help teams migrate n8n workflows, build custom Go nodes, sandbox AI agents in production, and design automation platforms that don't collapse under load.