
Maestro: Parallel AI Coding Sessions with Git Worktrees

Hook

What if you could run three AI coding assistants simultaneously, each working on different features in isolated branches, while watching them all execute in real-time through a single dashboard? That’s the promise of Maestro, a tool that treats AI agents like musicians in an orchestra, with you as the conductor.

Context

AI coding assistants like Claude Code, GitHub Copilot, and Gemini CLI have transformed how developers write code, but they suffer from a fundamental constraint: they work serially. You give Claude a task, wait for it to complete, review the changes, then move to the next feature. For developers juggling multiple features or bug fixes, this becomes a bottleneck. You’re effectively paying for powerful AI models but using them one at a time, like having a multi-core CPU that only runs single-threaded applications.

Maestro emerged from this frustration. Rather than treating AI assistants as sequential tools, it orchestrates them as parallel workers. The key innovation isn’t just running multiple terminal sessions—that’s trivial with tmux. Instead, Maestro leverages git worktrees to give each AI session its own isolated working directory and branch, preventing merge conflicts while enabling true parallel development. Combined with the Model Context Protocol (MCP) for AI status reporting, it creates a real-time dashboard where you can monitor multiple AI agents working simultaneously, each in their own sandbox.

Technical Insight

Maestro’s architecture reveals several clever engineering decisions. Built on Tauri 2.0, it combines a Rust backend for performance-critical operations with a React/TypeScript frontend for the UI. This hybrid approach delivers native desktop performance without Electron’s memory overhead while maintaining the rapid iteration cycle of web development.

The core orchestration happens through the ProcessManager component, which spawns isolated shell processes for each AI session. Here’s the conceptual flow of how it creates a new session:

// Simplified version of session initialization (conceptual, not Maestro's exact code)
import { spawn } from 'node-pty';          // pty-backed process with onData/write
import { Terminal } from '@xterm/xterm';   // xterm.js terminal renderer
import * as os from 'node:os';
import * as path from 'node:path';

async function createSession(sessionId: string, taskDescription: string) {
  // Create an isolated git worktree with its own branch
  // (build an absolute path; '~' would not expand in a cwd option)
  const worktreePath = path.join(os.homedir(), '.maestro/worktrees', sessionId);
  const branchName = `maestro/${sessionId}`;

  // execCommand: helper that shells out and resolves when the command exits
  await execCommand(`git worktree add ${worktreePath} -b ${branchName}`);

  // Spawn a pty-backed shell inside the worktree directory
  const ptyProcess = spawn('bash', [], {
    cwd: worktreePath,
    env: {
      ...process.env,
      MAESTRO_SESSION_ID: sessionId,
      MCP_SERVER_URL: 'http://localhost:3000/mcp'
    }
  });

  // Attach xterm.js for terminal rendering
  const terminal = new Terminal();
  ptyProcess.onData(data => terminal.write(data));

  // Kick off the AI assistant inside the sandboxed shell
  ptyProcess.write(`claude-code "${taskDescription}"\n`);

  return { sessionId, terminal, ptyProcess };
}

The git worktree strategy is the linchpin. Unlike traditional branching where all branches share the same working directory (forcing you to stash changes or commit before switching), worktrees create separate directories for each branch. When Maestro spins up three sessions, you might have:

~/.maestro/worktrees/
  session-abc123/  (branch: maestro/abc123)
  session-def456/  (branch: maestro/def456)
  session-ghi789/  (branch: maestro/ghi789)

Each AI assistant works in complete isolation, modifying files in its own directory without affecting the others. When a session completes, Maestro provides a visual git graph showing all branches, letting you review and merge them selectively.
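Reviewing sessions starts with enumerating the worktrees. Here is a minimal sketch of how a dashboard might turn `git worktree list --porcelain` output into session entries; the `parseWorktrees` helper and the sample output are illustrative, not Maestro's actual code:

```typescript
// Parse `git worktree list --porcelain` output into structured entries.
// Hypothetical helper; Maestro's real implementation may differ.
interface WorktreeEntry {
  path: string;
  branch: string | null; // null for a detached HEAD
}

function parseWorktrees(porcelain: string): WorktreeEntry[] {
  const entries: WorktreeEntry[] = [];
  let current: Partial<WorktreeEntry> = {};
  for (const line of porcelain.split("\n")) {
    if (line.startsWith("worktree ")) {
      current = { path: line.slice("worktree ".length), branch: null };
    } else if (line.startsWith("branch ")) {
      // Porcelain reports the full ref, e.g. refs/heads/maestro/abc123
      current.branch = line.slice("branch ".length).replace(/^refs\/heads\//, "");
    } else if (line === "" && current.path) {
      // A blank line terminates each worktree record
      entries.push(current as WorktreeEntry);
      current = {};
    }
  }
  if (current.path) entries.push(current as WorktreeEntry);
  return entries;
}

// Example porcelain output for the main checkout plus one Maestro session:
const sample = [
  "worktree /home/dev/project",
  "HEAD 1111111111111111111111111111111111111111",
  "branch refs/heads/main",
  "",
  "worktree /home/dev/.maestro/worktrees/session-abc123",
  "HEAD 2222222222222222222222222222222222222222",
  "branch refs/heads/maestro/abc123",
  "",
].join("\n");
```

The porcelain format is stable across git versions, which is why it is the right surface for tooling to parse rather than the human-readable default output.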

The Model Context Protocol integration adds another layer of sophistication. MCP is an emerging standard for AI agents to communicate with external tools. Maestro embeds an MCP server that AI assistants can call to report their status:

// MCP server endpoint handling agent status updates
// (`app` is an Express app; `websocket` is a Socket.IO-style broadcaster)
app.post('/mcp/status', async (req, res) => {
  const { sessionId, status, currentFile, progress } = req.body;

  // Push the update to the dashboard in real time
  websocket.emit('session-update', {
    sessionId,
    status, // 'idle' | 'working' | 'error' | 'complete'
    currentFile,
    progress
  });

  res.json({ acknowledged: true });
});

This creates a feedback loop: the AI agent reports what it’s doing, and Maestro’s UI updates in real-time. You see which file each agent is editing, whether they’re idle or actively working, and error states—all without manually checking each terminal.
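From the agent's side, a status update is just an HTTP POST to that embedded server. A hedged sketch of what a reporting helper could look like, where the payload shape mirrors the endpoint above but `buildStatusUpdate` and `postStatus` are assumptions rather than Maestro's documented API:

```typescript
// Status payload shape matching the /mcp/status endpoint described above.
type SessionStatus = "idle" | "working" | "error" | "complete";

interface StatusUpdate {
  sessionId: string;
  status: SessionStatus;
  currentFile?: string;
  progress?: number; // 0-100
}

// Hypothetical helper: assemble a well-formed update, clamping progress
// so a misbehaving agent can't push nonsense values to the dashboard.
function buildStatusUpdate(
  sessionId: string,
  status: SessionStatus,
  currentFile?: string,
  progress?: number
): StatusUpdate {
  return {
    sessionId,
    status,
    currentFile,
    progress:
      progress === undefined ? undefined : Math.max(0, Math.min(100, progress)),
  };
}

// Fire-and-forget report to the MCP server. The URL comes from the
// MCP_SERVER_URL variable that Maestro injects into each session's pty.
async function postStatus(update: StatusUpdate): Promise<void> {
  const url = process.env.MCP_SERVER_URL ?? "http://localhost:3000/mcp";
  await fetch(`${url}/status`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(update),
  });
}
```

Because the session ID rides along in the environment, the agent never needs to know which terminal it lives in; the server routes the update to the right dashboard tile.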

The plugin marketplace extends Maestro’s capabilities through three types of extensions: Skills (reusable prompt templates), Commands (custom shell scripts), and MCP servers (additional tool integrations). The implementation uses symlink management to keep plugins isolated but accessible across sessions. When you install a plugin, Maestro symlinks it into each worktree’s .maestro/plugins/ directory, making it available to all AI assistants without duplicating files.

One subtle but important detail: Maestro doesn’t bundle or manage the AI CLI tools themselves. It assumes you’ve already installed and configured Claude Code, Gemini CLI, or other assistants. This keeps the architecture clean—Maestro is purely an orchestration layer—but creates setup friction for new users.
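Because the CLIs are external, an orchestrator like this needs a preflight check before spawning sessions. A minimal POSIX-only sketch using a PATH lookup; the candidate tool names are examples, and Maestro's actual detection logic is not documented here:

```typescript
import { spawnSync } from "node:child_process";

// Hypothetical preflight: return which of the expected AI CLIs are on PATH.
// Uses the POSIX `command -v` builtin; Windows would need `where` instead.
function detectInstalledTools(candidates: string[]): string[] {
  return candidates.filter(
    (tool) => spawnSync("sh", ["-c", `command -v ${tool}`]).status === 0
  );
}

// Example: check for the assistants Maestro expects the user to have set up.
const available = detectInstalledTools(["claude-code", "gemini"]);
```

Surfacing this check at startup, rather than letting a session fail with "command not found" inside a pty, would soften the setup friction the article notes.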

Gotcha

Maestro’s git worktree approach has sharp edges. If your repository depends on build artifacts, database files, or node_modules directories, multiple worktrees can cause chaos. Each worktree gets its own working directory (only the .git metadata is shared), and git-ignored files like node_modules are untracked, so every worktree starts without them and needs its own install step. You might have three AI sessions all running npm install simultaneously, contending for the same global dependency cache. The solution is usually symlinking or language-specific per-worktree strategies, but Maestro doesn’t automate this.

Monorepos present another challenge. If your codebase has interdependent packages and one AI session modifies a shared utility, the other sessions won’t see those changes until they merge or rebase. You can end up with sessions working on stale assumptions about shared code, leading to integration conflicts later. The worktree isolation that prevents conflicts during development can paradoxically create more conflicts at merge time.

The cross-platform build story is rough around the edges. On Linux, you need Rust 1.78+, webkit2gtk, libsoup, and a laundry list of system libraries. The README provides platform-specific instructions, but troubleshooting dependency issues on different Linux distributions can consume hours. macOS and Windows builds are more straightforward, but the Tauri + Rust toolchain still requires non-trivial setup compared to Electron alternatives that ship pre-built binaries.

Verdict

Use if: You’re working on multi-feature development where parallel workflows would genuinely save time: building three independent features, running different refactoring experiments, or testing multiple approaches to a problem. You’re already comfortable with git branching workflows and have AI CLI tools configured. You value native performance and don’t mind investing setup time for long-term productivity gains.

Skip if: You’re primarily doing single-task development, haven’t set up AI CLI tools yet, or need something that works out of the box. The overhead of managing parallel sessions and merging multiple branches only pays off when you’re genuinely bottlenecked by serial AI execution. For most developers doing incremental feature work, a single AI assistant in your IDE is simpler and sufficient.