Vibe Kanban: Orchestrating Multiple AI Coding Agents Like a Team of Junior Developers

Hook

What happens when AI coding assistants get good enough that your bottleneck isn’t writing code, but managing five AI agents working on different features simultaneously? You need a kanban board for robots.

Context

The AI coding assistant landscape evolved rapidly from autocomplete suggestions to full-featured agents that can implement entire features. Tools like Claude Code, Gemini CLI, and Aider can take a natural language task description and produce working code, tests, and documentation. But a new problem emerged: these agents work best when focused on discrete, well-defined tasks, and running multiple agents in parallel on the same codebase quickly becomes chaotic.

Developers started hitting a workflow ceiling. You could run one agent at a time, waiting for it to finish before starting the next task, or you could manually manage multiple terminal windows with different git branches, hoping agents wouldn’t conflict. The authentication and configuration overhead multiplied—each agent needed its own MCP (Model Context Protocol) server setup. Vibe Kanban emerged as purpose-built infrastructure for this new paradigm: treating AI coding agents as parallelizable workers that need orchestration, isolation, and progress tracking. It’s not an AI assistant itself; it’s the project management layer that sits above your AI assistants.

Technical Insight

At its core, Vibe Kanban solves the parallel execution problem using git worktrees—separate working directories that share the same git repository. When you assign a task to an AI agent, the Rust backend spawns the agent as a child process within its own worktree. This means Agent A can work on feature-login while Agent B simultaneously works on refactor-api, each in isolated file system contexts that share git history but not working files.
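The worktree-per-task idea can be sketched as a small helper. This is an illustrative TypeScript sketch, not Vibe Kanban's actual API: the `worktreeRoot` path and `task-` branch prefix are assumptions modeled on the conceptual flow described below.

```typescript
// Hypothetical helper: given a task ID, build the git commands that create an
// isolated worktree on its own branch, plus the matching cleanup command.
interface WorktreePlan {
  create: string;   // command that adds the worktree on a new branch
  cleanup: string;  // command that removes it once the task is done
  dir: string;      // isolated working directory handed to the agent
}

function planWorktree(taskId: string, worktreeRoot = "../vibe-worktrees"): WorktreePlan {
  const branch = `task-${taskId}`;
  const dir = `${worktreeRoot}/${branch}`;
  return {
    create: `git worktree add ${dir} -b ${branch}`,
    cleanup: `git worktree remove ${dir} --force`,
    dir,
  };
}

// Two tasks get fully separate working directories that share one repository:
const a = planWorktree("123");
const b = planWorktree("124");
console.log(a.create); // git worktree add ../vibe-worktrees/task-123 -b task-123
console.log(a.dir !== b.dir); // true
```

Because each worktree gets its own branch, an agent's commits land in shared git history without its uncommitted files ever being visible to another agent.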

The backend architecture uses Axum and Tokio for async request handling, with SQLite for task persistence. Here’s how the system spawns and tracks an agent:

// Frontend dispatches task to backend API
const assignTaskToAgent = async (taskId: string, agentType: 'claude' | 'gemini') => {
  const response = await fetch('/api/tasks/assign', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      task_id: taskId,
      agent: agentType,
      worktree_name: `task-${taskId}`
    })
  });
  if (!response.ok) {
    throw new Error(`Task assignment failed: ${response.status}`);
  }
  return response.json();
};

// Backend (conceptual Rust flow):
// 1. Create git worktree: `git worktree add ../vibe-worktrees/task-123 -b task-123`
// 2. Spawn agent process in worktree directory
// 3. Stream stdout/stderr back to frontend via WebSocket
// 4. Update task status in SQLite based on exit code
// 5. On completion, create PR or merge, then remove worktree

The worktree lifecycle management is critical. Vibe Kanban maintains a registry of active worktrees and implements automatic cleanup for orphaned trees—a common problem when agents crash or tasks are cancelled. The system runs periodic reconciliation, comparing the SQLite task registry against actual worktrees on disk.
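The reconciliation pass reduces to a set difference. A minimal sketch of the assumed logic (the function name and `task-` directory convention are illustrative, not the project's actual implementation):

```typescript
// Any worktree directory on disk whose task is no longer active in the
// registry is considered orphaned and scheduled for removal.
function findOrphanedWorktrees(
  activeTaskIds: Set<string>,
  worktreesOnDisk: string[], // directory names like "task-123"
): string[] {
  return worktreesOnDisk.filter((dir) => {
    const taskId = dir.replace(/^task-/, "");
    return !activeTaskIds.has(taskId);
  });
}

// Registry says only task 123 is active, but two trees remain on disk:
const orphans = findOrphanedWorktrees(new Set(["123"]), ["task-123", "task-999"]);
console.log(orphans); // ["task-999"]
```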

The centralized MCP configuration server is another architectural win. Model Context Protocol is the emerging standard for giving AI agents access to external tools (file systems, databases, APIs). Instead of configuring MCP servers separately for each agent instance, Vibe Kanban runs a single MCP server that all agents connect to:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
    },
    "postgres": {
      "command": "docker",
      "args": ["exec", "db", "mcp-postgres-server"]
    }
  }
}

This configuration lives in the Vibe Kanban backend and gets injected into each agent’s environment. When you add a new MCP server capability, all agents immediately inherit it.
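The injection step might look something like the sketch below. The environment variable name and the idea of serializing the whole config as JSON are assumptions for illustration; only the "one shared config, every agent inherits it" property comes from the description above.

```typescript
// Shape mirroring the mcpServers JSON shown above.
interface McpServer { command: string; args: string[]; }
type McpConfig = { mcpServers: Record<string, McpServer> };

// Hypothetical injection: every spawned agent receives the same serialized
// server list, so adding a server in one place updates all future agents.
function agentEnv(shared: McpConfig, base: Record<string, string> = {}): Record<string, string> {
  return {
    ...base,
    MCP_CONFIG: JSON.stringify(shared), // assumed variable name
  };
}
```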

The SSH remote execution feature deserves attention. Many teams want to run AI agents on powerful cloud servers (agents benefit from faster file I/O and proximity to git remotes) while developers work locally. Vibe Kanban supports this split: run the server remotely and tunnel the UI back over SSH port forwarding:

# Server: Run Vibe Kanban on remote machine
ssh dev-server 'cd /workspace/project && vibe-kanban serve --host 0.0.0.0'

# Local: Forward port and access UI
ssh -L 3000:localhost:3000 dev-server
# Open http://localhost:3000 in browser

The frontend React application provides real-time task status using WebSocket subscriptions. As agents write to stdout (progress updates, tool calls, errors), the backend streams these logs to connected clients. The kanban board updates task cards from ‘In Progress’ to ‘Review’ when an agent pushes a branch and opens a PR.
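The relay itself can stay trivial precisely because the backend forwards raw output without parsing it. A sketch of that idea (the client interface and message shape are illustrative assumptions, not Vibe Kanban's wire format):

```typescript
// Minimal fan-out: each agent stdout/stderr line is wrapped with its task ID
// and pushed verbatim to every subscribed client. No semantic interpretation.
type Client = { send(msg: string): void };

function relayAgentOutput(taskId: string, line: string, clients: Client[]): void {
  const msg = JSON.stringify({ taskId, line }); // raw line, untouched
  for (const c of clients) c.send(msg);
}

// Fake client capturing messages, standing in for a WebSocket connection:
const received: string[] = [];
relayAgentOutput("123", "Running tests...", [{ send: (m) => received.push(m) }]);
```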

One subtle but powerful pattern: the backend doesn’t try to interpret agent output semantically. It treats agents as black boxes that consume task descriptions and produce git commits. This keeps the orchestration layer agent-agnostic. Whether you’re using Claude Code, Gemini CLI, or a custom agent, as long as it can be invoked via CLI and produces git commits, Vibe Kanban can manage it.
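That black-box contract can be made concrete as a tiny adapter table: an agent is just a CLI invocation plus an exit code, and task status is derived from the exit code alone, never from parsing output. The command templates below are assumptions for illustration, not the agents' documented flags:

```typescript
type AgentType = "claude" | "gemini";

// Hypothetical command builders; swapping in a custom agent means adding a row.
const agentCommands: Record<AgentType, (task: string) => string[]> = {
  claude: (task) => ["claude", "--task", task], // assumed CLI shape
  gemini: (task) => ["gemini", "--task", task], // assumed CLI shape
};

// Status comes only from the process exit code (step 4 of the flow above).
function statusFromExit(code: number): "review" | "failed" {
  return code === 0 ? "review" : "failed";
}
```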

Gotcha

Vibe Kanban explicitly doesn’t handle agent authentication or API key management. Before using the tool, you must have already configured and authenticated your AI agents externally. For Claude Code, this means having valid Anthropic API keys in your environment; for Gemini CLI, you need Google Cloud credentials. This design decision keeps Vibe Kanban focused on orchestration rather than becoming a credentials manager, but it does mean setup friction—especially in team environments where you need to ensure consistent agent configurations across machines.

The git worktree approach, while elegant, has edge cases. Repositories with submodules, git-lfs, or custom git hooks may behave unexpectedly when worktrees are created and destroyed programmatically. If your repository has post-checkout hooks that expect certain environment variables or directory structures, agents working in worktrees might fail silently.

Additionally, the parallel agent model assumes tasks are reasonably independent. If you assign two agents tasks that both modify the same core module, you'll still face merge conflicts—just deferred until PR review time instead of during development. Vibe Kanban doesn't provide intelligent task dependency management; it's your responsibility to assign conflict-free work.

Finally, as a young project in a rapidly evolving space, the supported agent list is limited to the most popular CLI-based coding agents. If your workflow relies on IDE-specific AI assistants (like Cursor's built-in agent), you can't orchestrate them through Vibe Kanban.

Verdict

Use if: You're actively working with multiple AI coding agents and finding yourself manually juggling branches and terminal windows; your team wants to centralize agent execution on remote servers with better hardware; you're managing a codebase with enough parallelizable work that running 3-5 agents simultaneously provides real velocity gains; you need centralized MCP configuration for consistent agent capabilities.

Skip if: You're using a single AI assistant occasionally for autocomplete or small refactors; your workflow is deeply integrated with IDE-specific AI tools like Cursor that don't expose CLI interfaces; your repository has complex git configurations (extensive submodules, hooks, LFS) that make worktree management risky; you prefer terminal-only workflows and find visual task boards unnecessary overhead.

This tool shines when you're treating AI agents as a scalable workforce rather than an occasional coding companion.