Claude Code
Anthropic ships Claude Code as a compiled CLI. No one was supposed to see the source. This site walks you through exactly how it is built so you can learn from one of the most sophisticated AI coding agents ever written.
Get the full Claude Code source from the GitHub repository. Browse it locally in your editor while you read through this guide; having the code on hand makes everything click faster.
Key Features
Claude Code isn't a wrapper around an LLM. It's a fully realized AI development environment. Here's what ships in the source:
| Feature | Description |
|---|---|
| Agentic Coding | Claude autonomously edits files, runs tests, and fixes bugs across multi-step tasks without hand-holding |
| 46+ Tools | Bash, file read/write/edit, web search/fetch, glob, grep, LSP, agents, MCP, and notebook editing |
| 80+ Commands | /commit, /review, /diff, /mcp, /skills, /session, /autofix-pr, /bughunter, and more |
| Plan Mode | Claude drafts a full plan for approval before executing any changes — EnterPlanModeTool |
| Multi-Agent | Spawns and orchestrates sub-agents (AgentTool) for parallel workloads, each with their own tool context |
| Remote Bridge | Connects CLI to remote execution environments over WebSocket via the bridge/ system |
| Skills System | Define custom slash commands in JavaScript; 19 bundled skills ship with the product |
| MCP Protocol | First-class support for Model Context Protocol servers — external tools appear as native tools |
| Vim Mode | Full vi-keybinding support in the terminal REPL (vim/) |
| Voice Mode | Feature-gated voice input/output support (voice/) |
| IDE Integration | Bidirectional sync with VS Code and JetBrains via LSP (server/, hooks/useIDEIntegration.tsx) |
| Git Worktrees | Git worktree isolation for parallel agent tasks — each agent works in its own branch |
| Cron Triggers | Schedule AI agents to run on intervals via ScheduleCronTool (AGENT_TRIGGERS flag) |
| Cost Tracking | Per-session token and dollar cost tracking with model pricing via cost-tracker.ts |
| Compaction | Automatic context window management for long sessions: AutoCompact, Microcompact, Snip strategies |
What is Claude Code?
Think of Claude Code less as a chatbot and more as a coworker who can actually touch your filesystem. It reads files, runs shell commands, browses the web, edits code, makes git commits, and orchestrates other AI agents directly from a single terminal session. The codebase behind it is a TypeScript monolith (~12,000 lines across 332+ modules) that runs on Bun and renders its UI using React. In a terminal. With hooks. Yes, really.
This is not an official Anthropic release. This documentation was generated by analyzing the distributed source. Code examples are illustrative of real patterns in the codebase.
Codebase Overview
The source is organized into a flat-ish top-level directory with a clearly named subdirectory for each system, so the most important entry points are easy to find by name.
Technology Stack
| Layer | Technology | Why it's used |
|---|---|---|
| Language | TypeScript (strict) | Full type safety across 1,900+ files; discriminated union message types, Zod schemas |
| Runtime & Bundler | Node.js 18+ / Bun | Bun's feature() macro enables compile-time dead-code elimination for feature flags |
| Terminal UI | React + Ink | React component model for streaming terminal output; real-time diffs, progress spinners, dialogs |
| State | Custom Zustand-like store | Single AppState source of truth; synchronous immutable updates; no Redux overhead |
| API Client | @anthropic-ai/sdk | Official SDK with streaming, tool_use, beta feature support |
| Schema Validation | Zod | All tool inputs validated before execution; input schemas double as API tool schemas |
| CLI Framework | Commander.js | Argument parsing and sub-command routing for the CLI entry point |
| MCP | @modelcontextprotocol/sdk | Dynamic MCP server management, OAuth flows, tool discovery |
| Analytics | GrowthBook | Feature gating at runtime, A/B testing, experiment tracking |
| HTTP | Axios | Internal HTTP requests (bridge, remote sessions, MCP auth) |
| Linting | Biome | Fast Rust-based linter/formatter replacing ESLint + Prettier |
| Colors | Chalk | Terminal color output for non-Ink code paths |
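The Zod row above describes a pattern worth seeing concretely: one schema definition is used both to validate tool input at runtime and to generate the tool schema sent to the API. The sketch below mimics that dual use with a hand-rolled schema type instead of Zod, so it stays dependency-free; the field names and the grepSchema example are illustrative assumptions, not taken from the source.

```typescript
// Minimal stand-in for the "schema doubles as API tool schema" pattern.
interface SchemaField {
  type: "string" | "boolean";
  required: boolean;
}

// Hypothetical input schema for a grep-like tool.
const grepSchema: Record<string, SchemaField> = {
  pattern: { type: "string", required: true },
  path: { type: "string", required: false },
};

// Derive the JSON-Schema-like shape sent to the API from the same definition.
function toApiSchema(schema: Record<string, SchemaField>) {
  return {
    type: "object",
    properties: Object.fromEntries(
      Object.entries(schema).map(([key, field]) => [key, { type: field.type }]),
    ),
    required: Object.keys(schema).filter((key) => schema[key].required),
  };
}

// Validate tool input before execution; returns a list of error messages.
function validate(
  schema: Record<string, SchemaField>,
  input: Record<string, unknown>,
): string[] {
  const errors: string[] = [];
  for (const [key, field] of Object.entries(schema)) {
    const value = input[key];
    if (value === undefined) {
      if (field.required) errors.push(`missing required field: ${key}`);
    } else if (typeof value !== field.type) {
      errors.push(`field ${key}: expected ${field.type}`);
    }
  }
  return errors;
}
```

With Zod the derivation runs the other direction (the library infers both the static type and the JSON schema), but the payoff is the same: the validator and the API contract cannot drift apart.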
How It Works: The Big Picture
Every time you type a message, this is what happens under the hood. It isn't magic: it's a well-structured loop that any developer can read and understand.
The interactive REPL path renders each step with a rich Ink UI: tool use messages appear as they stream, progress spinners run during long operations, and the user can interrupt at any time. The headless SDK path (via QueryEngine) exposes the same pipeline as an async generator yielding SDKMessage events, suitable for programmatic embedding in other tools.
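The headless path described above can be sketched as follows. The real QueryEngine API is not documented here, so this is a hedged approximation: only the QueryEngine and SDKMessage names come from the source, and the event shapes and runQuery function are invented for illustration.

```typescript
// Assumed SDKMessage-style event union; the real discriminants may differ.
type SDKMessage =
  | { type: "assistant"; text: string }
  | { type: "tool_use"; tool: string }
  | { type: "result"; ok: boolean };

// Stand-in for the QueryEngine pipeline: an async generator that yields
// events as a single conversation turn streams through.
async function* runQuery(prompt: string): AsyncGenerator<SDKMessage> {
  yield { type: "assistant", text: `Working on: ${prompt}` };
  yield { type: "tool_use", tool: "Bash" };
  yield { type: "result", ok: true };
}

// Programmatic embedding: consume the same pipeline the REPL renders,
// but handle each event yourself instead of drawing an Ink UI.
async function runHeadless(prompt: string): Promise<SDKMessage[]> {
  const events: SDKMessage[] = [];
  for await (const msg of runQuery(prompt)) {
    events.push(msg);
  }
  return events;
}
```

The point of the async-generator shape is that the embedder decides what to do per event (log it, render it, cancel), while the core loop stays identical to the interactive path.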
Built-in Safety
Every tool call passes through a layered permission system before execution. In the default interactive mode, the user is prompted to approve dangerous operations. The permission system supports configurable allow/deny rules, auto-approval modes, and even a lightweight LLM-based classifier that can make real-time security decisions in automated workflows.
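The allow/deny layer described above can be sketched as a rule matcher. The rule format, pattern syntax, and function names here are assumptions for illustration; the source's actual permission representation may differ.

```typescript
// Three-valued decision: unmatched calls fall through to "ask",
// which prompts the user in the default interactive mode.
type Decision = "allow" | "deny" | "ask";

interface PermissionRule {
  toolPattern: string; // hypothetical prefix pattern, e.g. "Bash(rm"
  decision: "allow" | "deny";
}

function checkPermission(toolCall: string, rules: PermissionRule[]): Decision {
  const matches = rules.filter((r) => toolCall.startsWith(r.toolPattern));
  // Deny rules take precedence over allow rules, so a broad allow
  // can never override a targeted deny.
  if (matches.some((r) => r.decision === "deny")) return "deny";
  if (matches.some((r) => r.decision === "allow")) return "allow";
  return "ask";
}
```

Auto-approval modes and the LLM-based classifier mentioned above would slot in at the "ask" branch, replacing the interactive prompt with a policy decision.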
Explore the Documentation
Each section below covers one piece of the puzzle. Start with Architecture if you're new, or jump straight to Tools or Security if you know what you're looking for.
Feature Flags
Several features are gated by Bun's dead-code elimination via feature() calls in tools.ts and query.ts. Features with this flag are literally removed from the compiled binary when disabled — zero bytes, zero runtime cost.
| Flag | Feature | Key file |
|---|---|---|
| KAIROS | KAIROS assistant mode — alternative interaction paradigm | assistant/index.ts |
| VOICE_MODE | Voice input/output support | voice/ |
| BRIDGE_MODE | Remote WebSocket bridge for cloud-hosted execution | bridge/bridgeMain.ts |
| DAEMON | Background daemon process for persistent sessions | entrypoints/ |
| COORDINATOR_MODE | Multi-agent coordinator orchestrating swarms | coordinator/coordinatorMode.ts |
| PROACTIVE | Proactive task suggestions + SleepTool | tools/SleepTool/ |
| AGENT_TRIGGERS | Cron-based agent scheduling via ScheduleCronTool | tools/ScheduleCronTool/ |
| WORKFLOW_SCRIPTS | Multi-step scripted workflow automation | commands/workflows/ |
| REACTIVE_COMPACT | Automatic reactive context window compaction | services/compact/reactiveCompact.ts |
| CONTEXT_COLLAPSE | Aggressive context collapsing for very long sessions | services/contextCollapse/ |
| MONITOR_TOOL | Task monitoring tool for background agent supervision | tools/ |
| EXPERIMENTAL_SKILL_SEARCH | Fuzzy search for skill discovery | skills/ |
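The gating pattern can be sketched as below. In the real build, feature() is resolved at compile time by Bun's macro system so disabled branches vanish from the binary; here it is a plain runtime function with a hardcoded flag table, just to make the shape of the pattern visible. The flag values and buildToolList function are illustrative assumptions.

```typescript
// Assumed flag table; in the real build these values are baked in at compile time.
const FLAGS: Record<string, boolean> = {
  VOICE_MODE: false,
  AGENT_TRIGGERS: true,
};

// Runtime stand-in for the compile-time feature() macro.
function feature(name: string): boolean {
  return FLAGS[name] ?? false;
}

function buildToolList(): string[] {
  const tools = ["Bash", "Read", "Edit"];
  // When feature("AGENT_TRIGGERS") resolves to a compile-time `false`,
  // the bundler can drop this branch (and the tool's module) entirely.
  if (feature("AGENT_TRIGGERS")) tools.push("ScheduleCronTool");
  if (feature("VOICE_MODE")) tools.push("VoiceTool");
  return tools;
}
```

Because the macro returns a constant, the bundler sees `if (false) { … }` and eliminates the branch, which is what makes the "zero bytes, zero runtime cost" claim above possible.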
Startup Sequence
The main.tsx entry point uses a clever trick to optimize startup time: several expensive async operations are fired before imports complete using top-level side effects. Here is the sequence in chronological order:
1. Profile checkpoint: profileCheckpoint('main_tsx_entry') marks the entry timestamp before any module evaluation begins, enabling startup profiling.
2. MDM raw read (parallel): startMdmRawRead() fires MDM policy subprocess reads (plutil on macOS, reg query on Windows) so they run concurrently with the remaining ~135ms of module initialization.
3. Keychain prefetch (parallel): startKeychainPrefetch() fires both macOS keychain reads (OAuth token + legacy API key) in parallel. Without this, they would execute sequentially via sync spawn, costing ~65ms on every macOS startup.
4. CLI argument parsing: Commander.js parses argv and routes to the appropriate sub-command (REPL, print, chat, etc.).
5. Initialization: init() bootstraps auth, config, GrowthBook feature flags, MCP connections, tool list, and system prompt.
6. REPL launch or headless query: Depending on the invocation, either launches the Ink REPL (launchRepl()) or runs a single query through QueryEngine.
Performance insight: The aggressive parallelism in startup (MDM reads, keychain reads, GrowthBook fetches, MCP URL prefetch, AWS/GCP credential prefetch) is a deliberate design choice. Many of these are I/O-bound and take 50-200ms each. By firing them all before the main initialization flow, Claude Code achieves a responsive first-prompt experience even on slow systems.
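The fire-early pattern behind that parallelism can be sketched in a few lines: kick off slow I/O as a module-load side effect, then await the already-running promises inside init(). The function names and timings below are illustrative, not from the source.

```typescript
// Stand-in for a slow subprocess or network read.
function slowRead(label: string, ms: number): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve(label), ms));
}

// Fired at module load, before init() runs — the reads proceed in
// parallel with the rest of module evaluation instead of after it.
const mdmPromise = slowRead("mdm-policy", 50);
const keychainPromise = slowRead("oauth-token", 65);

async function init() {
  // By the time init() awaits these, most of their latency has already
  // elapsed, so the wall-clock cost is max(reads), not sum(reads).
  const [mdm, token] = await Promise.all([mdmPromise, keychainPromise]);
  return { mdm, token };
}
```

The trade-off is that the prefetches run even on code paths that never need them, which is acceptable here because every interactive startup needs all of them.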
Key Design Decisions
Several architectural choices stand out when reading the codebase:
- Bun as bundler + runtime: By using Bun's macro features for conditional compilation, Anthropic can strip out beta features (like 'coordinator mode') entirely at build time, keeping the final binary fast and lean.
- React for the terminal: React in a terminal sounds bizarre until you try to build a streaming, interactive CLI with nested tool progress and inline diffs. Then it makes total sense.
- Single source of truth for tools: The same tool object defines the API schema, validates parameters, checks user permissions, renders the UI, and reports progress. It eliminates massive amounts of boilerplate.
- Two-path architecture: Code reuse at its finest. The interactive REPL and the programmatic SDK both wrap the exact same core pipeline (query.ts), just rendering the output differently.
- MCP as the extension mechanism: Instead of building a bespoke plugin system, they lean hard into the Model Context Protocol. You can hook into Claude Code using any external tool server without touching the core code.
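The "single source of truth for tools" idea above can be made concrete with a sketch of what such a tool object might look like. The field names, the ToolDefinition interface, and the echoTool example are all assumptions for illustration; the source's actual tool interface may differ.

```typescript
// Hypothetical shape of a tool object that carries everything in one place:
// API description, input validation, permission metadata, UI rendering,
// and the implementation itself.
interface ToolDefinition<In> {
  name: string;
  description: string;                  // becomes part of the API tool schema
  isReadOnly: boolean;                  // hypothetically drives permission auto-approval
  validate(input: unknown): In;         // runtime input validation
  renderResult(output: string): string; // terminal rendering for the REPL
  call(input: In): Promise<string>;     // the actual implementation
}

// Trivial example tool built against that interface.
const echoTool: ToolDefinition<{ text: string }> = {
  name: "Echo",
  description: "Echoes its input back",
  isReadOnly: true,
  validate(input) {
    const obj = input as { text?: unknown };
    if (typeof obj !== "object" || obj === null || typeof obj.text !== "string") {
      throw new Error("Echo: expected { text: string }");
    }
    return { text: obj.text };
  },
  renderResult(output) {
    return `=> ${output}`;
  },
  async call(input) {
    return input.text;
  },
};
```

Because every consumer (API serialization, permission checks, the REPL renderer) reads from the same object, adding a tool means writing one definition rather than touching four subsystems.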