## Features
m9m ships with roughly 95% of n8n's backend feature surface, a first-class agent runtime, and a handful of things n8n doesn't have.
### Workflow runtime
- 32 built-in node types — HTTP Request, Set, Switch, Merge, Filter, Item Lists, Sub-workflow, and 25 more.
- n8n-compatible workflow JSON — import and run your existing exports unchanged.
- Expression engine — full `{{ $json.field }}` / `{{ $node["name"].data }}` syntax with the standard string / math / date / array built-ins.
- Triggers — cron schedules, webhooks, manual CLI invocation.
- Custom logic — inline JavaScript and Python when a built-in node isn't enough.
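For illustration, here is a minimal workflow in the n8n-compatible JSON format. The node names, types, and parameter keys are hypothetical; the shape (a `nodes` array plus a `connections` map, with `={{ … }}` expressions inside parameters) follows n8n's export format:

```json
{
  "name": "fetch-and-filter",
  "nodes": [
    { "name": "Fetch", "type": "httpRequest",
      "parameters": { "url": "https://api.example.com/items" } },
    { "name": "OnlyActive", "type": "filter",
      "parameters": { "condition": "={{ $json.status === \"active\" }}" } }
  ],
  "connections": {
    "Fetch": { "main": [[ { "node": "OnlyActive" } ]] }
  }
}
```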
### Integrations
- Databases: PostgreSQL, MySQL, SQLite.
- Object storage: AWS S3, Google Cloud Storage, Azure Blob.
- AI: OpenAI (GPT-4 and beyond), Anthropic Claude, and sandboxed CLI agents (Claude Code, Codex, Aider).
- Communication: Slack, Discord, SMTP email.
- Code platforms: GitHub, GitLab.
- Productivity: Google Sheets.
- Universal: HTTP Request node + webhook receivers for any API.
### Agent orchestration
- CLI nodes — spawn Claude Code, Codex, and Aider in sandboxed Linux namespaces with CPU, memory, and network limits.
- MCP integration — 37 tools exposed over Model Context Protocol for AI-driven workflow management.
- Human-in-the-loop — checkpoints that pause a workflow for review and resume on approval.
- Multi-model pipelines — chain OpenAI, Anthropic, local models, and human steps in a single workflow.
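A sketch of what a multi-model pipeline with a human checkpoint could look like in workflow JSON. Every node type, parameter, and sandbox-limit name below is an assumption for illustration, not m9m's confirmed schema:

```json
{
  "name": "review-and-fix",
  "nodes": [
    { "name": "Plan", "type": "anthropicChat",
      "parameters": { "prompt": "Plan a fix for {{ $json.issue }}" } },
    { "name": "Approve", "type": "humanCheckpoint",
      "parameters": { "message": "Review the plan before the agent runs" } },
    { "name": "Fix", "type": "claudeCode",
      "parameters": { "sandbox": { "cpus": 2, "memoryMb": 2048, "network": "none" } } }
  ],
  "connections": {
    "Plan":    { "main": [[ { "node": "Approve" } ]] },
    "Approve": { "main": [[ { "node": "Fix" } ]] }
  }
}
```

The pipeline pauses at the checkpoint node and resumes only on approval, so the sandboxed agent never runs on an unreviewed plan.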
### Operations
- Git-based workflow versioning — workflows are JSON; version them like code.
- Audit logs — every run, every node execution, every credential use.
- Multi-workspace — isolate workflows, credentials, and runs per team or tenant.
- Credential management — encrypted at rest, environment-variable substitution, OAuth2 flows.
- Queue backends — in-memory, Redis, or RabbitMQ.
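As an example of environment-variable substitution in credentials, a stored credential might reference a variable instead of a literal secret. The field names and `$env` syntax here are assumptions for illustration:

```json
{
  "name": "slack-bot",
  "type": "slackApi",
  "data": { "accessToken": "={{ $env.SLACK_BOT_TOKEN }}" }
}
```

The credential file can then be committed or templated safely, with the secret injected from the environment at run time.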
### Observability
- Prometheus metrics — per-node, per-workflow, per-queue.
- OpenTelemetry tracing — one span per node, propagated across sub-workflows.
- Structured logs — JSON-formatted, run-scoped.
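A run-scoped structured log line might look like the following; the exact field names are hypothetical:

```json
{"level":"info","ts":"2025-01-01T12:00:00Z","run_id":"r-123","workflow":"fetch-and-filter","node":"Fetch","event":"node.finished","duration_ms":42}
```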
### Extensibility
- Custom nodes in Go — interface-driven, registered at compile or load time.
- SDKs — Go, Python, and Node.js for programmatic workflow execution.
- Plugin architecture — load nodes from external binaries or shared libraries.
### Performance
- ~500 ms cold start, ~150 MB RAM, 300 MB container.
- 500 concurrent workflows per process.
- 5–10× faster execution than Node-based alternatives on typical pipelines.
- Run `m9m benchmark` to reproduce on your hardware.
### Need help shipping agents or migrating off n8n?
Neul Labs — the team behind m9m — takes on a limited number of consulting engagements each quarter. We help teams migrate n8n workflows, build custom Go nodes, sandbox AI agents in production, and design automation platforms that don't collapse under load.