March 13, 2026 · 6 min read

Agent Swarm by the Numbers: 80 Days, 242 PRs, 6 Agents

From December 23 to March 13, a swarm of 6 AI agents autonomously shipped 242 pull requests across 4 repositories, completing 7 epics. They built their own UI, fixed their own bugs, and launched their own marketing campaign. Here are the numbers.

Tags: metrics · AI agents · automation · open source
- 80 days of operation
- 242 PRs merged across 4 repos
- 6 agents with specialized roles
- 7 epics completed

Agent Swarm is an open-source framework for orchestrating teams of AI agents. Each agent runs as a headless Claude Code process inside a Docker container, connected through an MCP server that handles task routing, messaging, and memory.
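To make the routing layer concrete, here is a minimal sketch of the data shapes such an MCP server might pass around. The type names, statuses, and keyword heuristic are illustrative assumptions, not the actual agent-swarm API:

```typescript
// Hypothetical shapes for swarm task routing. This is a sketch,
// not the real agent-swarm implementation.
type AgentRole = "lead" | "engineer" | "researcher" | "reviewer" | "tester";

interface Task {
  id: string;
  title: string;
  assignee: AgentRole | null;
  status: "unassigned" | "offered" | "pending" | "in_progress" | "completed";
}

// The Lead offers a task to a specialist; here a naive keyword match
// stands in for whatever routing logic the real Lead uses.
function route(task: Task, keywords: Record<AgentRole, string[]>): Task {
  for (const [role, words] of Object.entries(keywords) as [AgentRole, string[]][]) {
    if (words.some((w) => task.title.toLowerCase().includes(w))) {
      return { ...task, assignee: role, status: "offered" };
    }
  }
  return task; // no match: leave unassigned for the Lead to triage
}
```

In the real system the routing decision is made by the Lead agent itself, not a keyword table; the sketch only shows where that decision plugs into the task flow.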

We've been running our own swarm in production since December 2025. One human (Taras) messages the swarm via Slack. The Lead agent interprets the request, delegates to the right specialist, and the work gets done. No manual task assignment. No copy-pasting between tools. Just Slack messages in, pull requests out.

The Team: 6 Specialized Agents

Each agent has a persistent identity, accumulated memory, and a specialized role. They don't just execute — they learn, develop preferences, and get better at their work over time.

Lead

Orchestrator

Routes tasks, monitors progress, coordinates across agents. The single point of contact for humans via Slack.

Picateclas

Implementation Engineer

The coding arm. TypeScript, Node.js, git worktrees. Turns plans into PRs — fast.

Researcher

Research & Analysis

Explores codebases, plans implementations, writes documentation. Thinks before anyone codes.

Reviewer

PR Review Specialist

Reviews every pull request for quality, correctness, and style. The team's quality gate.

Jackknife

Forward Deployed Engineer

End-to-end testing, browser automation, and test maintenance. Catches what others miss.

Tester

QA Specialist

Feature verification, regression testing, PR verification. The final check before merge.

242 Pull Requests

Every line of code goes through pull requests — created, reviewed, and merged by the swarm. Here's the breakdown across repositories:

| Repository  | Jan | Feb | Mar | Total |
|-------------|-----|-----|-----|-------|
| agent-swarm | 37  | 49  | 46  | 135   |
| desplega.ai | 36  | 29  | 4   | 70    |
| x402-logo   | 0   | 21  | 2   | 23    |
| ai-toolbox  | 6   | 6   | 2   | 14    |
| **Total**   | 79  | 105 | 54  | 242   |

(A handful of PRs merged in late December, before the monthly columns begin, account for the gap between the monthly sums and the totals.)
Steady output: 79 PRs in January, 105 in February, and 54 in the first half of March (a pace that would top February). The swarm doesn't slow down; it accelerates as agents accumulate codebase knowledge and the tooling improves.

7 Epics Completed

Epics are multi-task projects that span days or weeks. Here's what the swarm shipped end-to-end:

GTM: 100k GitHub Stars

20 tasks (14 completed)

Full marketing campaign: X/Twitter content strategy, Show HN post, dev.to articles, newsletter outreach, demo video scripts, and awesome-list submissions. The swarm planned and executed its own go-to-market.

UI Revamp

11 tasks (10 completed)

Complete redesign of the swarm dashboard using shadcn/ui, AG Grid, and React Query. The swarm rebuilt its own interface — the one humans use to monitor it.

Lead Concurrency Fix

9 tasks (7 completed)

Fixed concurrent session awareness across 3 merged PRs: Jaccard-similarity duplicate detection and session tracking, so the Lead doesn't create duplicate tasks.
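The duplicate detection mentioned above can be sketched as Jaccard similarity over word sets. The tokenizer and the 0.6 threshold are illustrative assumptions; the post doesn't state the swarm's actual cutoff:

```typescript
// Jaccard similarity: |A ∩ B| / |A ∪ B| over word sets.
function tokens(text: string): Set<string> {
  return new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
}

function jaccard(a: Set<string>, b: Set<string>): number {
  const inter = [...a].filter((x) => b.has(x)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 1 : inter / union;
}

// Flag a new task title as a duplicate if it is too similar to any
// existing title. The 0.6 threshold is a placeholder, not the real one.
function isDuplicate(title: string, existing: string[], threshold = 0.6): boolean {
  return existing.some((t) => jaccard(tokens(title), tokens(t)) >= threshold);
}
```

For example, "Fix the login bug" vs. "fix login bug" shares 3 of 4 distinct words (Jaccard 0.75), so it would be flagged; an unrelated title shares none and passes.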

dokcli

6 tasks (6 completed, 100% success)

Built a Bun-based CLI that auto-generates commands from the Dokploy OpenAPI spec.
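Generating commands from an OpenAPI spec might look roughly like the following. The spec shape is heavily simplified and the naming fallback is a guess; this is not the actual dokcli generator:

```typescript
// Sketch: derive CLI command names from OpenAPI paths and methods.
// MiniSpec is a simplified stand-in for a parsed OpenAPI document.
interface MiniSpec {
  paths: Record<string, Record<string, { operationId?: string }>>;
}

function commandsFrom(spec: MiniSpec): string[] {
  const cmds: string[] = [];
  for (const [path, methods] of Object.entries(spec.paths)) {
    for (const [method, op] of Object.entries(methods)) {
      // Prefer the spec's operationId; otherwise build a name from the
      // HTTP method plus the static path segments (skipping {params}).
      const name =
        op.operationId ??
        `${method} ${path.split("/").filter((s) => s && !s.startsWith("{")).join(" ")}`;
      cmds.push(name);
    }
  }
  return cmds;
}
```

The appeal of this approach is that the CLI stays in sync with the API for free: regenerate against a new spec and new endpoints become new commands.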

Content Swarm Integration

45 tasks (41 completed)

Extended the swarm with 3 new content agents and 7 scheduled workflows to replace a standalone content-agent system entirely.

Workflows UI

5 tasks (5 completed, 100% success)

Built read-only Workflows visualization in the dashboard using React Flow for graph rendering of workflow definitions and execution progress.
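Rendering a workflow as a graph mostly means mapping its definition onto the node and edge arrays a library like React Flow consumes. A minimal sketch, with a simplified workflow shape and a naive layout (not the dashboard's actual code):

```typescript
// Sketch: convert a workflow definition into node/edge arrays for a
// graph renderer such as React Flow. WorkflowDef is a simplified shape.
interface WorkflowDef {
  steps: { id: string; dependsOn: string[] }[];
}

function toGraph(wf: WorkflowDef) {
  // One node per step; naive vertical layout stands in for real auto-layout.
  const nodes = wf.steps.map((s, i) => ({
    id: s.id,
    position: { x: 0, y: i * 80 },
    data: { label: s.id },
  }));
  // One edge per dependency, pointing from prerequisite to dependent step.
  const edges = wf.steps.flatMap((s) =>
    s.dependsOn.map((dep) => ({ id: `${dep}->${s.id}`, source: dep, target: s.id }))
  );
  return { nodes, edges };
}
```

Execution progress can then be overlaid by styling nodes from task status, without changing the graph structure itself.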

Platform Implementation

68 tasks (54 completed)

Greenfield implementation of the hosted agent-swarm platform (Next.js + Convex + Clerk + Stripe + Fly.io). 7 increments from scaffolding to admin panel.

Task Execution

Every piece of work is tracked as a task — from single-file fixes to multi-day epics. Tasks are routed by the Lead, executed by workers, and the results are stored in searchable memory.

- 3,010 tasks completed
- 78 failed
- 94 cancelled
A ~97% success rate (3,010 completed vs. 78 failed, excluding cancelled tasks). Failures are informative, too: when a task fails, the agent reports what went wrong, and those learnings are indexed into memory so the same mistake isn't repeated.

The swarm operates across 5 active agents (Lead handles routing, 4 workers handle implementation), with tasks flowing through a lifecycle: unassigned → offered → pending → in_progress → completed. Each transition is logged and visible in the dashboard.
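The lifecycle above can be expressed as an allowed-transition map, so an invalid jump (say, unassigned straight to completed) is impossible by construction. A sketch using only the states the post names; failure and cancellation edges are omitted because their positions in the lifecycle aren't stated:

```typescript
// The task lifecycle described above, as a linear transition map.
type Status = "unassigned" | "offered" | "pending" | "in_progress" | "completed";

const next: Record<Status, Status | null> = {
  unassigned: "offered",
  offered: "pending",
  pending: "in_progress",
  in_progress: "completed",
  completed: null, // terminal
};

function advance(status: Status): Status {
  const n = next[status];
  if (n === null) throw new Error(`terminal state: ${status}`);
  return n; // in the real system, each transition is logged to the dashboard
}
```

Encoding the lifecycle as data also makes the "each transition is logged" property easy to enforce: there is exactly one place where state changes happen.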

Highlights

Self-improving infrastructure

The swarm built and rebuilt its own dashboard, fixed its own concurrency bugs, and optimized its own task routing. It's not just running — it's maintaining itself.

Slack-native orchestration

Taras sends a message in Slack. The Lead agent reads it, creates tasks, and delegates to the right specialist. Results come back as PR links, Slack replies, or deployed services.

First on-chain transaction

During the Openfort hackathon, the swarm made its first autonomous crypto payment — $0.10 USDC on Base mainnet to buy an SVG from omghost.xyz via the x402 protocol.

Persistent agent memory

Each agent has searchable memory powered by embeddings. Solutions, patterns, and mistakes are indexed automatically — so the swarm gets smarter with every task.
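Embedding-backed memory search boils down to ranking stored entries by vector similarity to a query. A sketch with toy 3-dimensional vectors; a real system would use a model's embeddings and a proper vector index:

```typescript
// Sketch of embedding-backed memory search: rank entries by cosine
// similarity to a query vector. Vectors here are toy examples.
interface MemoryEntry {
  text: string;
  vector: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k entries most similar to the query vector.
function search(query: number[], memory: MemoryEntry[], k = 1): MemoryEntry[] {
  return [...memory]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k);
}
```

The key property is that lookups are by meaning, not keywords: a query embedded near "flaky test retries" surfaces that lesson even if the new task never uses those words.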

What's Next

80 days in, the swarm is just getting started. The numbers tell the story of a system that works — agents that ship real code, review each other's work, and learn from their mistakes.

Agent Swarm is open source. If you want to run your own swarm — or join ours — the code, docs, and dashboard are all public.