The comprehensive guide to AI-powered development with Anthropic's agentic coding CLI
$ claude
Installation, first launch, interaction loop
Token window, zones, recovery, cost awareness
Plan mode, rewind, model selection, prompting
Memory hierarchy, compounding knowledge
Specialized sub-processes, reusable modules
Event automation, external tools, settings
Prompting, mistakes to avoid, trust calibration
Practical integration, migration checklist
An agentic CLI tool that lives in your terminal and operates directly on your codebase.
It doesn't guess your next line — it understands and executes full tasks.
Describe what you want in natural language. Claude reads, plans, and implements.
Reads files, runs commands, writes code, creates commits — end to end.
$ claude
Claude> Refactor the auth module to use JWT tokens and add tests
✓ Reading src/auth/... (4 files)
✓ Implementing JWT auth provider
✓ Writing tests (12 test cases)
✓ All tests passing
| Feature | Copilot | Cursor | Claude Code |
|---|---|---|---|
| Approach | Inline suggestions | Chat + autocomplete | Full task execution |
| Context | Current file | Open files + index | Entire repo + tools |
| Customization | Limited | Rules files | CLAUDE.md + agents + hooks + MCP |
| Task Scope | Line/function | File-level edits | Multi-file orchestration |
| Runs Commands | No | Limited | Yes — builds, tests, git |
Key differentiator: the structured context system (CLAUDE.md, agents, skills, hooks) makes Claude learn your project over time.
PS> choco install nodejs-lts
PS> $env:NODE_TLS_REJECT_UNAUTHORIZED=0
Required prerequisite. Use Chocolatey (already available for Business users).
PS> irm https://claude.ai/install.ps1 | iex
PS> claude --version
claude-code v2.x.x
Official install script. Works in PowerShell and WSL.
Config location: %USERPROFILE%\.claude\ (Windows) or ~/.claude/ (WSL).
Configure your Bedrock connection, then launch. Two options:
[Environment]::SetEnvironmentVariable("CLAUDE_CODE_USE_BEDROCK", "1", "User")
[Environment]::SetEnvironmentVariable("AWS_REGION", "eu-central-1", "User")
[Environment]::SetEnvironmentVariable("ANTHROPIC_MODEL", "eu.anthropic.claude-opus-4-6-v1", "User")
[Environment]::SetEnvironmentVariable("ANTHROPIC_SMALL_FAST_MODEL", "eu.anthropic.claude-haiku-4-5-20251001-v1:0", "User")
[Environment]::SetEnvironmentVariable("AWS_BEARER_TOKEN_BEDROCK", "YOUR_API_KEY", "User")
# Proxy — pick one:
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://localhost:8888", "User")       # ProxyProxy
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://185.46.212.88:9400", "User")   # Zscaler direct
Stored in Windows registry. Available to all apps. Restart terminal after.
Use localhost:8888 if you run ProxyProxy, otherwise the Zscaler proxy directly.
// ~/.claude/settings.json
{
  "env": {
    "CLAUDE_CODE_USE_BEDROCK": "1",
    "AWS_REGION": "eu-central-1",
    "ANTHROPIC_MODEL": "eu.anthropic.claude-opus-4-6-v1",
    "ANTHROPIC_SMALL_FAST_MODEL": "eu.anthropic.claude-haiku-4-5-20251001-v1:0",
    "CLAUDE_CODE_MAX_OUTPUT_TOKENS": "8192",
    "MAX_THINKING_TOKENS": "8192",
    "AWS_BEARER_TOKEN_BEDROCK": "YOUR_API_KEY",
    "HTTPS_PROXY": "http://localhost:8888"
  }
}
Scoped to Claude Code. No restart needed. Easy to share across team.
Proxy: use localhost:8888 (ProxyProxy) or 185.46.212.88:9400 (Zscaler direct).
Then: cd your-project → claude → start describing tasks.
PS> node --version
v22.x.x

PS> claude --version
claude-code v2.x.x
Navigate to a project folder and run claude. Ask it: What does this project do?
Ask Claude: List all TODO comments in this project and see how it searches your codebase.
Take 5 minutes. Raise your hand if you hit issues.
Tell Claude what you need in natural language.
Reads files, understands structure, forms a plan.
Shows you the changes it wants to make as diffs.
Accept, reject, or ask for modifications. Then verify and commit.
| Command | Action |
|---|---|
| /help | Show all available commands |
| /clear | Reset conversation — fresh context window |
| /compact | Summarize conversation to reclaim tokens |
| /status | Show context usage, model, session info |
| /exit | End the session |
| /plan | Enter read-only Plan Mode |
| /rewind | Undo the last Claude action |
All slash commands are available at any point in the conversation.
`!` prefix: run shell commands directly. Example: `! npm test`
`@` prefix: reference files explicitly. Example: `@src/auth.ts fix the bug`
Cancel current operation without exiting the session.
Quick rewind — undo last action.
Toggle between plan and code mode inline.
Recall previous prompts for editing.
Type /status to see your model, context usage, and session info.
Pick any file and ask: @README.md summarize this
Type ! git log --oneline -5 to run a command without leaving Claude.
Type /plan, then ask Claude to plan a small change. Notice it can only read, not write.
Type /clear to reset your conversation. Notice the counter resets.
Press Up Arrow to recall your previous prompt and re-edit it.
Take 5 minutes to explore. Get comfortable with the controls.
Asks before every file write and command. Recommended for beginners.
Auto-approves file edits, still asks for commands. --auto-accept-edits
Read-only. Claude can only read files and search — no writes at all.
--dangerously-skip-permissions — No guardrails. CI/CD only. Never use interactively.
⚠ Progressive trust: start with Default, move to Auto-Accept as you build confidence.
Tokens are the fundamental unit of how LLMs read and generate text — not characters, not words.
Before processing, all text is split into tokens — small chunks like words, sub-words, or punctuation. refactoring → ref + actor + ing
1 token ≈ ¾ of a word in English. 100 tokens ≈ 75 words. Code is typically more token-dense than prose.
Every interaction has a token budget (context window). Your prompts, Claude's responses, and all file reads consume tokens from this budget.
You pay per token (input + output). Larger context = slower responses and higher cost. Keeping context lean saves both.
Every API call sends the entire conversation so far as input, plus gets new output back. Both cost money.
Everything sent to the model: system prompt, CLAUDE.md, your messages, file contents, tool results, prior responses. Grows with every turn.
What the model generates: responses, code, tool calls. 5× more expensive than input — e.g. Sonnet 4.6: $3/MTok in vs $15/MTok out.
Repeated context is cached — cache hits cost only $0.30/MTok (Sonnet) vs $3 base. That's a 90% saving.
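A back-of-the-envelope turn cost at the Sonnet 4.6 rates above shows why caching matters (the token counts here are hypothetical, chosen only to illustrate the arithmetic):

```shell
# One turn: 100K cached input ($0.30/MTok), 5K fresh input ($3/MTok),
# 2K output ($15/MTok). Without caching, all 105K input bills at $3/MTok.
awk 'BEGIN {
  cached   = 100000*0.30/1e6 + 5000*3/1e6 + 2000*15/1e6
  uncached = 105000*3/1e6 + 2000*15/1e6
  printf "with cache: $%.3f  without: $%.3f\n", cached, uncached
}'
# prints: with cache: $0.075  without: $0.345
```

Same turn, nearly 5× cheaper — which is why stable prefixes (system prompt, CLAUDE.md) cache so well.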
Why context size drives cost
Input cost compounds every turn — /compact and /clear are your cost-saving tools.
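The compounding is easy to see with a quick calculation (a sketch assuming a flat 2,000 new tokens per turn):

```shell
# Each turn resends the whole conversation as input, so billed input
# grows quadratically with conversation length.
awk 'BEGIN {
  ctx = 0; total = 0
  for (t = 1; t <= 10; t++) { ctx += 2000; total += ctx }
  printf "%d tokens of input billed across 10 turns\n", total
}'
# prints: 110000 tokens of input billed across 10 turns
```

Only 20K tokens of new content, but 110K tokens billed — a conversation twice as long costs roughly four times as much input.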
The #1 most important concept in Claude Code. Master this or nothing else matters.
Everything Claude knows: prompts, file contents, tool outputs, all in a ~200K token window.
When context fills up, Claude loses earlier information. Performance degrades before the hard limit.
Your prompts + Responses + File reads + Command outputs = CONTEXT
~200,000 tokens | ~150K words | ~500 pages
Use /status to check your current context usage at any time.
Summarizes conversation history. Preserves key decisions. Frees ~40-60% of tokens. Use often.
Complete reset. Fresh context window. Use when switching tasks or context is heavily polluted.
Start new sessions for distinct tasks. Reference specific files with @. Keep prompts focused.
Claude> /compact
✓ Compressed 147K tokens → 23K tokens
Key context preserved: auth refactor in progress, 8/12 tests passing
| Action | Token Cost | Tips |
|---|---|---|
| Small file read (<100 lines) | Low | Fine to read many small files |
| Large file read (500+ lines) | High | Use @file:line for specific sections |
| Shell command output | Medium | Pipe to head or grep to limit output |
| Search results (grep/glob) | High | Be specific with search patterns |
| Your prompts | Low | Longer is fine if it is precise |
| Claude responses | Medium | Ask for concise output when possible |
Rule of thumb: every file Claude reads costs roughly 1 token per 4 characters. A 200-line file is roughly 4-6K tokens.
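That heuristic is easy to script (a sketch; the 4-characters-per-token ratio is approximate and varies with language and code density):

```shell
# Rough token estimate for a file: character count divided by 4.
estimate_tokens() { awk -v c="$(wc -c < "$1")" 'BEGIN { printf "%d", c / 4 }'; }

# Demo with a generated 16,000-character file (about a 200-line source file):
head -c 16000 /dev/zero | tr '\0' 'x' > /tmp/sample.txt
estimate_tokens /tmp/sample.txt   # prints 4000
```

Run it against a file before asking Claude to read it whole, and decide whether `@file:line` sections would be cheaper.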
| Model | Input / 1M tokens | Output / 1M tokens | Best For |
|---|---|---|---|
| Haiku 4.5 | $1.00 | $5.00 | Boilerplate, simple edits |
| Sonnet 4.6 | $3.00 | $15.00 | Features, daily coding |
| Opus 4.6 | $5.00 | $25.00 | Architecture, complex logic |
30-60 min focused work: $0.10 – $0.50
Multi-hour deep refactor: $1.00 – $5.00
Precise prompts = fewer iterations = less context used = lower cost.
Regular compaction avoids hitting limits and reduces re-processing tokens.
Use Haiku for boilerplate. Sonnet for features. Opus only when needed.
Each MCP server adds tool definitions to every request. Only enable what you need.
Group related changes in one prompt instead of multiple small requests.
Start new sessions for unrelated tasks instead of carrying stale context.
Run /status. Ask Claude to read a large file. Run /status again. See how token usage jumped.
Run /compact. Then /status again. Notice the token count dropped while context was preserved.
Ask Claude a complex question involving multiple files. Check /status — observe input tokens accumulating with each turn.
Run /clear to reset completely. Compare the token count. This is your most powerful cost-saving tool.
Take 5 minutes. Goal: internalize how context grows and how to manage it.
Use /plan for read-only exploration before making changes.
Made a wrong turn? Undo with increasing scope:
When Claude proposes changes, simply say “no” or press n. Nothing is written.
Undo the last Claude action. Also: press Esc twice. Rolls back conversation and file changes.
Nuclear option. Use git restore . or git stash to revert all uncommitted changes.
Claude> /rewind
✓ Reverted last action: edited src/auth.ts (38 lines changed)
Conversation rolled back to previous state
| Model | Speed | Best For | When to Use |
|---|---|---|---|
| Haiku 4.5 | Fastest | Boilerplate, renaming, formatting | Simple mechanical tasks |
| Sonnet 4.6 | Balanced | Features, bug fixes, tests | Daily driver for most work |
| Opus 4.6 | Thorough | Architecture, complex refactors | High-stakes design decisions |
CLAUDE_CODE_USE_BEDROCK=1
AWS_REGION=eu-central-1
ANTHROPIC_MODEL=eu.anthropic.claude-opus-4-6-v1
ANTHROPIC_SMALL_FAST_MODEL=eu.anthropic.claude-haiku-4-5-20251001-v1:0
Opus 4.6 is the default model. Haiku 4.5 is used for fast sub-tasks. All traffic routes through AWS Bedrock EU (Frankfurt).
Three levels of CLAUDE.md — more specific always wins.
~/.claude/CLAUDE.md
Your personal preferences across all projects. Coding style, preferred tools, language.
/project/CLAUDE.md
Shared team rules. Checked into git. Build commands, architecture decisions, conventions.
/project/.claude/CLAUDE.md
Your personal overrides for this project. Git-ignored. Experimental preferences.
Resolution order: Local > Project > Global. Team rules in Project, personal tweaks in Local.
Start small. You need exactly two things:
# My Project

## Commands
- Build: `npm run build`
- Test: `npm test`
- Lint: `npm run lint`
Project name and build/test commands. That is enough to begin.
Only add a new rule when Claude makes the same mistake twice. No premature rules.
Just ask: Generate a CLAUDE.md for this project — Claude will analyze your repo structure, detect build tools, frameworks, and conventions, then produce a solid starting file you can refine.
~5 rules: project name, commands, basic conventions.
~20 rules: architecture patterns, naming conventions, test requirements.
~50 rules: deep domain knowledge, edge cases, performance guidelines.
Your CLAUDE.md becomes a living, growing knowledge base. Claude gets better at your project every week.
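As rules accumulate, the file might look like this (a hypothetical mid-stage example; the rules are illustrative, not prescriptive):

```markdown
# My Project

## Commands
- Build: `npm run build`
- Test: `npm test`

## Architecture
- Feature modules live under `src/features/<name>/`
- Shared utilities go in `src/lib/`, never duplicated per feature

## Conventions
- Use named exports; no default exports
- All new code needs unit tests next to the source file
- Error messages must include the failing input, never just "invalid"
```

Each rule here exists because Claude once got it wrong; that is what makes the file compound in value.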
Ask Claude: Generate a CLAUDE.md for this project. Let it analyze your repo and create a starting file.
Open the generated CLAUDE.md. Does it have your build commands right? Your test runner? Edit if needed.
Ask Claude to do something it gets wrong (e.g. wrong import style). Correct it, then add the rule to CLAUDE.md.
Run /clear, start a new conversation, and repeat the task. Claude should now follow your rule.
Take 10 minutes. This is the single most impactful thing you can do for your productivity.
.claude/
  CLAUDE.md             # Local memory (git-ignored)
  settings.json         # Team settings (checked in)
  settings.local.json   # Personal settings (git-ignored)
  agents/               # Specialized agent definitions
    reviewer.md
    debugger.md
  commands/             # Custom slash commands
    deploy.md
    review.md
  hooks/                # Event-driven automation scripts
    pre-edit.sh
  rules/                # Additional rule files
  skills/               # Reusable knowledge modules
settings.json: Team-wide settings. Checked into git. Defines allowed tools, MCP servers, default model.
settings.local.json: Personal overrides. Git-ignored. Your preferred model, extra MCP servers, custom permissions.
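A minimal sketch of how the split might look (keys and values are illustrative, not a complete schema):

```json
// .claude/settings.json — checked in, shared by the team
{ "permissions": { "allow": ["Bash(npm test)", "Bash(git status)"] } }

// .claude/settings.local.json — git-ignored, personal overrides
{ "env": { "ANTHROPIC_MODEL": "eu.anthropic.claude-opus-4-6-v1" } }
```

Local values win over team values, so personal tweaks never need to touch the shared file.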
Default permissions. Approve everything. Learn the tool.
Auto-accept edits. Trust file changes but review commands.
Allow specific commands (npm, git). Carefully scoped auto-permissions.
Specialized sub-processes with isolated context. Not role-playing — actual separate Claude instances.
One Claude session does everything: reads code, runs tests, reviews, writes docs. Context fills up fast. Quality drops.
Main Claude delegates to focused experts. Reviewer agent checks code. Test agent writes tests. Each has clean context.
Key insight: agents have context isolation. They start fresh, do one job, return results. Your main context stays clean.
# .claude/agents/reviewer.md
---
name: Code Reviewer
description: Reviews code changes for quality and correctness
model: sonnet
tools:
- Read
- Grep
- Glob
---
You are a senior code reviewer. Analyze the provided changes for:
- Logic errors and edge cases
- Security vulnerabilities
- Performance issues
- Style consistency with project conventions
Be concise. Flag only real issues, not nitpicks.
Invoke with: /agent reviewer <task description>
---
name: Reviewer
model: sonnet
tools: [Read, Grep]
---
Review for bugs, security, performance.

---
name: Debugger
model: opus
tools: [Read, Bash, Grep]
---
Investigate failures. Read logs, form hypotheses, verify with tests.

---
name: Architect
model: opus
tools: [Read, Glob, Grep]
---
Analyze codebase structure. Propose design improvements. Read-only analysis.
Start with 2-3 agents. Add more as you discover repeated patterns in your workflow.
Reusable knowledge modules that Claude can invoke on demand.
Skills are markdown files in .claude/skills/ that contain domain knowledge, patterns, and instructions. Claude loads them when relevant.
Shared patterns across projects, complex domain logic, code generation templates, API integration guides.
Skills have ~56% reliability for being invoked when needed. Never put critical rules only in skills. Critical rules belong in CLAUDE.md where they are always loaded.
Generates distinctive, production-grade frontend interfaces — no generic AI aesthetics.
claude plugin add frontend-design
One command. Adds the skill to your Claude Code instance.
Bold typography, distinctive color palettes, high-impact animations, responsive layouts. Claude picks an aesthetic direction automatically.
Landing pages, dashboards, UI components, settings panels — any frontend work where visual quality matters.
Claude> Create a dashboard for a music streaming app
Claude> Build a landing page for an AI security startup
Claude> Design a settings panel with dark mode
A meta-prompting system that turns vague ideas into shipping code through structured research, planning, and verification.
Context rot — quality degrades as Claude's context fills up during long development sessions. GSD keeps each task in a fresh 200K context.
npx get-shit-done-cc@latest
Interactive setup: choose Claude Code runtime and global vs local install.
Building features from scratch, large multi-phase projects, architectural decisions requiring research, anything too big for a single Claude session.
Spawns specialized sub-agents for research, planning, and execution. Each gets fresh context. Atomic Git commits per task. Parallel wave execution.
| Command | Purpose |
|---|---|
| /gsd:new-project | Initialize — questions, research, requirements, roadmap |
| /gsd:discuss-phase N | Shape implementation approach before planning |
| /gsd:plan-phase N | Research domain, create atomic task plans, verify against goals |
| /gsd:execute-phase N | Build in parallel waves — fresh context per task |
| /gsd:verify-work N | Manual UAT with automated debugging |
| /gsd:quick | Ad-hoc tasks (bug fixes, small features) — skip full ceremony |
| /gsd:progress | Check current status and next action |
For existing codebases: run /gsd:map-codebase first so GSD learns your patterns before planning.
claude plugin add frontend-design
Then ask: Create a simple landing page for a coffee shop
npx get-shit-done-cc@latest
Then try: /gsd:help to see all available commands.
Run /gsd:quick and give it a small task like adding a utility function or fixing a comment.
Ask Claude a complex question and watch the status bar — notice when it spawns sub-agents to research in parallel.
Take 10 minutes. These tools multiply your productivity significantly.
Create repeatable workflows in .claude/commands/
# .claude/commands/review.md
Review the current git diff for:
1. Logic errors and edge cases
2. Missing error handling
3. Security vulnerabilities
4. Test coverage gaps
Format as a checklist with severity levels.
/review — runs the command with full context.
Use $ARGUMENTS placeholder for dynamic input: /review src/auth.ts
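A parameterized variant might look like this (hypothetical file; `$ARGUMENTS` is replaced with whatever follows the command name):

```markdown
# .claude/commands/review-file.md
Review $ARGUMENTS for:
1. Logic errors and edge cases
2. Missing error handling
Format findings as a checklist with severity levels.
```

Invoking `/review-file src/auth.ts` substitutes `src/auth.ts` for `$ARGUMENTS` before the prompt runs.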
Shell scripts triggered by Claude Code events. Zero token cost.
Runs before Claude uses a tool. Can block dangerous operations.
Runs after a tool completes. Auto-format, auto-lint, inject context.
Runs when you send a message. Inject git status, branch info, metadata.
# settings.json
"hooks": {
  "PostToolUse": [{
    "matcher": "Edit",
    "command": "npx prettier --write \"$FILEPATH\""
  }]
}
"PostToolUse": [{
  "matcher": "Edit",
  "command": "prettier --write $F"
}]

"PreToolUse": [{
  "matcher": "Bash",
  "command": "check-no-secrets.sh"
}]
Block commands that might leak secrets.

"UserPromptSubmit": [{
  "command": "git-context.sh"
}]
Auto-inject branch name and recent commits into every prompt.
Hooks run as shell scripts outside the LLM — they consume zero tokens and execute instantly.
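As an illustration, the `git-context.sh` script referenced above could be as simple as this sketch (an assumption, not the official example; whatever it prints to stdout gets injected into the prompt, and it assumes the hook runs inside a git repository):

```shell
#!/bin/sh
# git-context.sh (sketch): print branch + recent commits for prompt injection.
echo "branch: $(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo unknown)"
git log --oneline -5 2>/dev/null || true
```

Because the script runs outside the LLM, this context arrives on every prompt for free.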
Model Context Protocol — external tool integration for Claude Code.
Code-aware navigation. Understands symbols, references, types.
Up-to-date library documentation. No hallucinated APIs.
Browser automation. Test UIs, scrape data, interact with web apps.
Direct database access. Query, inspect schema, debug data issues.
# settings.json
"mcpServers": {
"context7": { "command": "npx", "args": ["-y", "@context7/mcp"] }
}
Each MCP server adds tool definitions to context. Only enable servers you actively use.
Where should this rule or behavior live?
Needed every session?
Project conventions, build commands, style rules.
Should auto-trigger on events?
Format on save, inject context, security gates.
Needs external access?
Databases, APIs, browsers, documentation.
Repeatable workflow?
Review, deploy, test suites, migrations.
Needs isolated context?
Code review, debugging, architecture analysis.
Shared knowledge?
Cross-project patterns, domain expertise.
For complex requests, use XML tags to organize your prompt:
<instruction>
Refactor the payment processing module to support Stripe webhooks.
</instruction>

<context>
- Current payment flow is in src/payments/
- We use Stripe API v2023-10
- Webhook endpoint should be POST /api/webhooks/stripe
</context>

<constraints>
- Must be backward compatible with existing orders
- Add idempotency checks for duplicate events
- Include comprehensive error handling
</constraints>

<output>
Create the webhook handler, add tests, update API routes.
</output>
“Fix the auth bug”
“Fix the JWT validation in src/auth/middleware.ts — tokens with expired refresh claims are not returning 401. Add test coverage.”
What needs to happen? Be specific about the outcome.
Which files, modules, functions? Point Claude to the right place.
Constraints, patterns, conventions to follow.
How to confirm success? Tests, commands, expected behavior.
AI-assisted development is only as fast as your build → test → verify cycle. If Claude can't see the result, it can't improve it.
If build + test takes 5 seconds, Claude can iterate 10+ times in a session. It reads errors, fixes, and retries autonomously.
If build takes 2+ minutes, Claude can only try 2–3 times before context fills up. It starts guessing instead of verifying.
If Claude can't run your tests at all, it's writing code without feedback. Quality drops dramatically.
The #1 investment for AI-assisted dev: make your feedback loop as fast as possible.
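The arithmetic behind that claim, assuming a hypothetical 10-minute budget of active iteration per session:

```shell
# Iterations available = iteration budget / loop time (illustrative numbers).
awk 'BEGIN {
  budget = 600   # seconds of active build-test iteration in a session
  print "5s loop:  ", int(budget / 5),   "iterations"
  print "120s loop:", int(budget / 120), "iterations"
}'
# prints: 5s loop:   120 iterations
#         120s loop: 5 iterations
```

Shaving a slow build from two minutes to five seconds buys Claude 24× more chances to verify instead of guess.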
Claude's verification loop is its superpower. It doesn't just generate — it runs, reads output, and fixes.
mvn compile, gradle build — reads compiler errors, fixes them, rebuilds. Iterates until green.
Executes mvn test, parses failures, fixes the code or the test, reruns. Full TDD loop.
Paste a stack trace or point to a log file. Claude traces the root cause through your codebase.
Rename, extract, move — across files. Then runs tests to verify nothing broke. Safer than IDE refactoring for complex changes.
Generates unit tests, integration tests, edge cases. If a test fails, it determines whether the test or the code is wrong.
Code reviews, architecture explanations, onboarding docs. Ask Explain the request lifecycle in this Spring Boot app.
mvn test -pl module -Dtest=MyTest

## Commands
- Build: `mvn compile -q`
- Test: `mvn test -q`
- Single test: `mvn test -Dtest=$CLASS`
- Run: `mvn spring-boot:run`

## Conventions
- Java 17+, Spring Boot 3.x
- Use constructor injection
- Tests with JUnit 5 + Mockito
- Always run tests after changes
Jumping straight to coding without understanding the problem.
Letting context fill up until Claude forgets what you are building.
“Fix it” instead of specifying what, where, and how to verify.
Approving every change without reading the diffs.
Making changes without committing checkpoints.
Starting with full auto mode before learning the tool.
Combining unrelated work in one session, polluting context.
Having a conversation instead of giving task-oriented instructions.
Read every change before accepting. Claude is capable but not infallible.
Do not wait for context warnings. Compact after completing each sub-task.
Include what, where, how, and verification criteria in every request.
Explore and understand before making changes. /plan is your best friend.
Invest in your memory file. It compounds over time and makes Claude smarter for your project.
Fix the tests
Fix the failing unit test in src/utils/date.test.ts — the formatDate function returns UTC but the test expects local time. Keep the existing test structure.
Pick a real task in your project. Write a prompt that includes: (1) what to change, (2) which files, (3) expected behavior, (4) constraints. Run it and compare the result to a vague version of the same request.
Take 5 minutes. The difference in output quality will surprise you.
Not all code changes deserve the same level of scrutiny.
| Change Type | Review Level | Approach |
|---|---|---|
| Boilerplate / scaffolding | Skim | Quick check that structure is right |
| Standard features | Review | Read the diff, verify logic |
| Business logic | Deep Review | Line-by-line, check edge cases |
| Security / auth / payments | Maximum | Deep review + security tools + tests |
Start with /plan. Review what needs to be done. Break into discrete tasks.
One task per session. Be specific. Use /compact between sub-tasks.
Review diffs carefully. Run tests. Commit after each completed feature.
Update CLAUDE.md with discoveries. Note open issues. Clean context for tomorrow.
The shift from chatbot to context system. Four layers working together:
Persistent memory. Project rules, conventions, commands. The foundation.
Domain knowledge modules. Loaded when relevant to the current task.
Event-driven scripts. Zero tokens. Auto-format, inject context, enforce rules.
Every correction becomes a rule. The system gets smarter over weeks and months.
You are the main thread. Orchestrate 2-5 Claude instances on independent tasks.
claude — Backend API refactor
Working on src/api/...

claude — Frontend components
Working on src/components/...

claude — Test suite expansion
Working on tests/...

Works best when tasks touch different files. Merge carefully when they overlap.
Context is everything. Monitor it, manage it, compact it. This is the #1 skill.
Plan before you code. Start with /plan. Understand before you change.
Build CLAUDE.md over time. Never correct Claude twice for the same mistake.
Always review diffs. Trust but verify. AI is a capable junior developer.
Be specific in prompts. WHAT + WHERE + HOW + VERIFY = great results.
docs.anthropic.com/claude-code
Questions, discussion, and next steps
$ claude Claude> Ready to help. What are we building?
Built with care for enterprise teams adopting AI-powered development.