Workflows and agents in one binary.
A Go workflow runtime that runs n8n workflows 5–10× faster and orchestrates sandboxed AI agents — from a single 30 MB binary.
30+ integrations out of the box — plus anything with an HTTP API
Automation that ships as a binary, not a cluster.
m9m is a Go-native workflow runtime with first-class agent orchestration. It runs n8n workflows as-is, but it's built for a world where some of those nodes are GPT-4, Claude, or a sandboxed coding agent.
Zero infrastructure
One 30 MB binary. No Node.js runtime, no mandatory database, no container required. Cold start under 500 ms from download to executing workflows.
n8n-compatible
Import your existing n8n workflow JSON and run it unchanged. 32 built-in nodes, expression syntax ({{ $json.field }}), triggers, and webhooks all supported.
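For example, a minimal workflow keeps n8n's expression syntax unchanged after import. The node id and field names below are illustrative; only the Set node type and the {{ $json.field }} expression form are taken from this page:

```json
{
  "nodes": [
    { "id": "greet", "type": "set", "params": {
      "message": "Hello, {{ $json.name }}!"
    }}
  ]
}
```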
Agent-native
First-class CLI nodes run Claude Code, Codex, and Aider in sandboxed namespaces with resource limits. MCP integration gives LLMs 37 workflow-management tools.
Fast by default
Go runtime, no V8. Workflows execute 5–10× faster than Node-based alternatives. Process 500 items in ~6 seconds, run 500 concurrent workflows in ~150 MB of RAM.
Enterprise features, free
Git-based versioning, audit logs, multi-workspace support, Prometheus metrics, and OpenTelemetry tracing — included, not a paid tier.
Embeddable
Drop m9m into your Go, Python, or Node.js app with the SDKs. Build custom nodes in Go. Ship a workflow engine inside your product.
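The embedding pattern, reduced to its essentials: load a workflow definition, hand it input, get a result back in-process. The sketch below is self-contained and purely illustrative; the real m9m SDK surface is not shown on this page, so the Exec function and Workflow struct here are assumptions, not the actual API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Workflow mirrors the JSON shape used elsewhere on this page:
// an object with a "nodes" array.
type Workflow struct {
	Nodes []struct {
		ID   string `json:"id"`
		Type string `json:"type"`
	} `json:"nodes"`
}

// Exec is a hypothetical stand-in for an embedded-engine call:
// parse a workflow definition and "run" it against some input.
// Here the toy execution just records which node types ran.
func Exec(def []byte, input map[string]any) (map[string]any, error) {
	var wf Workflow
	if err := json.Unmarshal(def, &wf); err != nil {
		return nil, err
	}
	ran := make([]string, 0, len(wf.Nodes))
	for _, n := range wf.Nodes {
		ran = append(ran, n.Type)
	}
	return map[string]any{"input": input, "ran": ran}, nil
}

func main() {
	def := []byte(`{"nodes":[{"id":"hello","type":"set"}]}`)
	out, err := Exec(def, map[string]any{"customer_id": 42})
	if err != nil {
		panic(err)
	}
	fmt.Println(out["ran"]) // [set]
}
```

The point of the pattern: no sidecar process, no HTTP hop; your app calls the engine as a library.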
Install. Run. Done.
n8n workflows run unchanged. Agent workflows run sandboxed. Same binary, same CLI, same <500 ms cold start.
# 1. Install (30 MB single binary, no Node.js, no Python)
curl -fsSL https://raw.githubusercontent.com/neul-labs/m9m/main/install.sh | bash
# 2. Import and run an existing n8n workflow unchanged
m9m exec my-n8n-workflow.json --input '{"customer_id": 42}'
# 3. Or run a workflow that drives a sandboxed coding agent
cat <<'EOF' > review.json
{
  "nodes": [
    { "id": "fetch", "type": "github.pr.get", "params": { "pr": "{{ $input.pr }}" } },
    { "id": "review", "type": "cli.claude-code", "params": {
      "sandbox": true, "cpu": 2, "memory": "2Gi",
      "prompt": "Review this diff for security issues: {{ $node.fetch.diff }}"
    }},
    { "id": "post", "type": "github.pr.comment", "params": { "body": "{{ $node.review.output }}" } }
  ]
}
EOF
m9m exec review.json --input '{"pr": "neul-labs/m9m#42"}'
# Sub-second cold start → sandboxed agent → PR comment. One binary. A workflow runtime designed for agents from day one.
Most workflow tools grafted LLM nodes on top. m9m treats agent orchestration as first-class: CLI nodes spawn Claude Code, Codex, and Aider inside Linux namespaces with CPU, memory, and network limits. An MCP server exposes 37 workflow-management tools so an agent can read runs, edit workflows, and re-run jobs. Human-review steps, checkpoints, and audit logs are built in.
- Sandboxed CLI agent execution with namespace isolation
- MCP server exposing 37 tools for AI-driven workflow management
- Human-in-the-loop steps with resumable checkpoints
- Chain OpenAI, Anthropic, and local models in a single workflow
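As a sketch of that chaining, the workflow below hands an OpenAI draft to an Anthropic reviewer and then pauses for human sign-off. The node type names openai.chat, anthropic.chat, and human.review are assumed placeholders; only cli.claude-code and the expression syntax are documented on this page:

```json
{
  "nodes": [
    { "id": "draft", "type": "openai.chat", "params": {
      "prompt": "Draft release notes for {{ $input.tag }}"
    }},
    { "id": "polish", "type": "anthropic.chat", "params": {
      "prompt": "Tighten this draft: {{ $node.draft.output }}"
    }},
    { "id": "signoff", "type": "human.review", "params": { "checkpoint": true } }
  ]
}
```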
Performance snapshot
| | m9m | Node-based |
|---|---|---|
| Cold start | ~500 ms | ~3 s |
| Memory (idle) | ~150 MB | ~512 MB |
| Container size | 300 MB | 1.2 GB |
| Concurrent flows | 500 | 50 |
Source: neul-labs/m9m README. Run m9m benchmark on your own hardware.
Built-in nodes for the other 95% of what you need.
32 built-in node types cover the bulk of n8n's surface area. Write custom logic in JavaScript or Python when you need it. Ship new nodes in Go when you want native performance.
Data
PostgreSQL, MySQL, SQLite. Binary file I/O. HTTP Request. Webhooks. Google Sheets. CSV / JSON transforms.
Cloud & comms
AWS S3, GCP Cloud Storage, Azure Blob. Slack, Discord, SMTP email. GitHub and GitLab.
AI & control flow
OpenAI, Anthropic, sandboxed CLI agents. Switch, Filter, Set, Merge. Cron and webhook triggers. Sub-workflows.
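The custom-node story can be sketched in a few lines of Go. The Node interface below is a guess at the shape such an SDK would expose (the real m9m interface is not shown on this page); the pattern is what matters: items in, items out.

```go
package main

import (
	"fmt"
	"strings"
)

// Item mirrors the JSON objects a workflow passes between nodes.
type Item map[string]any

// Node is a hypothetical custom-node interface: illustrative names,
// not the documented m9m SDK.
type Node interface {
	Name() string
	Execute(items []Item) ([]Item, error)
}

// UppercaseNode is a toy custom node that uppercases one field.
type UppercaseNode struct{ Field string }

func (n UppercaseNode) Name() string { return "custom.uppercase" }

func (n UppercaseNode) Execute(items []Item) ([]Item, error) {
	out := make([]Item, 0, len(items))
	for _, it := range items {
		// Copy the item so the node stays side-effect free.
		next := Item{}
		for k, v := range it {
			next[k] = v
		}
		if s, ok := next[n.Field].(string); ok {
			next[n.Field] = strings.ToUpper(s)
		}
		out = append(out, next)
	}
	return out, nil
}

func main() {
	node := UppercaseNode{Field: "name"}
	items, _ := node.Execute([]Item{{"name": "ada"}})
	fmt.Println(items[0]["name"]) // ADA
}
```

Because the node is plain Go, it compiles into the same binary as the engine: no plugin runtime, no interpreter in the loop.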
Need help shipping agents or migrating off n8n?
Neul Labs — the team behind m9m — takes on a limited number of consulting engagements each quarter. We help teams migrate n8n workflows, build custom Go nodes, sandbox AI agents in production, and design automation platforms that don't collapse under load.