Intelligence that compounds.
Every single day.
A lead coordinates all tasks, workers ship in their own containers, and memory compounds with every run. Interact with it the way you interact with remote colleagues.
Built for the boring
two-year horizon.
Most agent frameworks optimize for the demo.
We optimize for compounding, ownership, and the right to switch.
Yesterday’s work makes tomorrow easier
Most agents start every task from zero. Ours read what they shipped last week before they begin.
A peer, not a tab
@mention a high-agency teammate where you already work — Slack, GitHub, Linear, email. You hand off the goal, they ship the work in parallel while you do other things.
Your IP stays yours
Memory and identity files sit in your DB and filesystem. Audit, fork, walk away with the lot.
No lock-in, anywhere
BYOK, BYOM, swap Claude Code for Codex on Tuesday. Run the whole stack on your own infra under MIT, or skip the ops on Cloud. The memory layer is yours either way.
Day one: a team that
already ships.
A lead that decides what to ship next
The lead reads the task, breaks it down, and routes work to workers — directly assigned, offered for acceptance, or pulled from a pool. You hand off the goal; the lead owns the plan.
Every worker free to do its own job
Each worker runs in its own Docker container with a persistent /workspace. They install what they need, branch off main, and ship without stepping on each other or your repo.
Talks human, talks API
Mention the bot in Slack, assign a Linear issue, send an email — that’s the task. Built on Model Context Protocol with a full OpenAPI 3.1 spec at :3013, so anything that speaks HTTP — your CI, your monitoring, your custom dashboards — can drive the swarm or be driven by it.
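Driving the swarm over HTTP looks roughly like this. The route and payload fields below are assumptions for illustration — the real schema is the OpenAPI 3.1 spec served at :3013.

```python
import json
from urllib.request import Request

# Hypothetical task-creation request; consult the OpenAPI spec at :3013
# for the actual route and field names.
payload = {
    "source": "ci",
    "goal": "Investigate the failing nightly build and open a fix PR",
    "priority": "high",
}
req = Request(
    "http://localhost:3013/tasks",  # illustrative route, not confirmed
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
```

From here, `urllib.request.urlopen(req)` (or your CI's HTTP step) would hand the goal to the lead the same way a Slack mention does.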
Wake up to work already done
Cron-scheduled recurring jobs, multi-step workflows, role templates for Coder / Researcher / PM. Hand the night-shift work to scheduled tasks, find the results in your Slack the next morning.
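A scheduled task might be described like this. The field names are illustrative, not agent-swarm's actual schema — only the standard cron syntax is assumed.

```python
# Hypothetical shape for a recurring night-shift task.
nightly = {
    "name": "triage-new-sentry-issues",
    "schedule": "0 3 * * *",       # standard cron: every day at 03:00
    "role": "Researcher",          # one of the role templates
    "steps": [
        "pull unresolved Sentry issues from the last 24h",
        "cluster duplicates and rank by user impact",
        "post a summary to #eng-triage in Slack",
    ],
}
```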
From zero to compounding
in an afternoon.
No DAGs. No definition language.
Run docker compose up, then talk to it where you already work.
docker compose up
Clone, set CLAUDE_CODE_OAUTH_TOKEN in .env, bring it up. The API server, lead, and workers come online in their own containers — no DAG, no agent definition language.
Wire it into where you work
Install the GitHub App, drop the bot in Slack, OAuth Linear, point AgentMail at an inbox. From now on, mentions and assignments become tasks the lead routes.
It gets sharper while you sleep
Workers ship. Each task summary is embedded into shared memory, and SOUL.md and IDENTITY.md evolve. Tomorrow’s swarm reads what last week’s shipped before it types a single keystroke.
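The retrieve-before-you-start loop can be sketched with a toy stand-in. agent-swarm embeds summaries with a model; here a bag-of-words cosine similarity plays that role purely to show the flow.

```python
import math
from collections import Counter

# Toy memory layer: store task summaries, recall the most similar ones
# before a new task begins. Real embeddings replace vectorize().

memory = []  # list of (summary, vector) pairs

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def remember(summary):
    memory.append((summary, vectorize(summary)))

def recall(query, k=1):
    q = vectorize(query)
    ranked = sorted(memory, key=lambda m: cosine(q, m[1]), reverse=True)
    return [s for s, _ in ranked[:k]]

remember("fixed flaky auth test by pinning the redis image")
remember("migrated billing cron to the new scheduler")
top = recall("auth tests are flaky again")
```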
Coordination,
end to end.
We run them. Devin, Claude Code, Codex, OpenCode — agent-swarm uses any of them as the brain inside a worker. You bring the model and the agent you trust; we coordinate a team of them in parallel, each in its own container, with shared memory and a lead that delegates. We sit one layer above coding agents — they ship the code, we run the team that ships the codebase.
Their bet is that you'll build your AI team on their stack — pre-built employees, their orchestration, their lock-in. Ours is the opposite: bring the agents and models you already trust, run them on infrastructure you control, keep your memory and identity files in your DB and filesystem as portable artifacts. The category bet is structurally different — AI work is coordinated across heterogeneous, owned components, not built top-down by a single vendor. You can fork the entire stack and keep going.
Inverse, actually. Glean, Onyx, and the rest of that shelf retrieve what your org said by indexing every app into one graph. agent-swarm coordinates what your org does — your data stays in its source of truth (Linear, GitHub, Notion); the swarm stores derived knowledge (decisions, gotchas, capability gaps, what worked last sprint). Two opposite jobs; happy to live next to a search layer.
You can. We did, and then kept going. LangGraph and AutoGen are SDKs — you write the graph, host the runtime, persist the memory, build the Slack and GitHub integrations, build the dashboard. agent-swarm is the running system: docker compose up, integrations pre-wired, memory built in, dashboard included. SDK if you're building a tool; agent-swarm if you want to use one.
As much as you need. You control the tools each worker can call, the integrations it can reach, and the boundaries it operates within — your infra, your policies. Hooks let you wire approvals, gates, or any custom check before an action takes effect. Whether the swarm is shipping code, drafting a campaign, running a UX research synthesis, or triaging support, the same governance surface applies: scoped permissions, reviewable outputs, and stop-buttons in the dashboard. The defaults are conservative; the surface area is yours to shape.
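An approval gate in hook form looks roughly like this. The registry and decorator below are a sketch of the idea, not agent-swarm's actual hook API.

```python
# Hypothetical hook registry: every hook sees the action before it runs
# and can veto it. Swap the body for a Slack approval, a policy engine,
# or any custom check.

hooks = []

def before_action(fn):
    hooks.append(fn)
    return fn

@before_action
def block_prod_deploys(action):
    # Returning False stops the action until someone approves it.
    return not (action["kind"] == "deploy" and action["env"] == "prod")

def execute(action):
    if all(hook(action) for hook in hooks):
        return f"ran {action['kind']}"
    return "blocked: awaiting approval"

result = execute({"kind": "deploy", "env": "prod"})
```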
Yes. MIT-licensed source, runs anywhere Docker runs, BYOK / BYOM. The Cloud version exists for convenience, not as a hostage. Air-gapped customers run the same binaries.
Pay for the workers.
Not the seats.
Self-host the whole thing for free, forever. Or skip the ops and run it on Cloud — pick how many workers you need, see the total.
Self-host
- Full source on GitHub (MIT)
- Run anywhere Docker runs
- BYO model keys, BYO models
- Air-gapped if you need it
- Community support on Discord
Cloud
- Hosted lead + dashboard
- Coordination intelligence built in — memory persists across sessions
- Slack, GitHub, GitLab, Linear, AgentMail, Sentry
- Bring your own model keys (BYOK)
- 7-day free trial · no card required
Enterprise
- Single-tenant, VPC or on-prem
- SSO / SAML, audit log export
- Custom integrations & MCP servers
- Onboarding workshop for ICs + leads
- Priority response, dedicated channel
All Cloud plans include a 7-day free trial. Cancel from the dashboard at any time.
Build your swarm tonight.
A 7-day free trial on Cloud, or fork it on GitHub. Either way, your agents start compounding today.
