Enterprise Training Program

Mastering
Claude Code

The comprehensive guide to AI-powered development with Anthropic's agentic coding CLI

$ claude   
Overview

Agenda

🚀 Getting Started

Installation, first launch, interaction loop

🧠 Context System

Token window, zones, recovery, cost awareness

🛠️ Core Workflows

Plan mode, rewind, model selection, prompting

📝 CLAUDE.md & Memory

Memory hierarchy, compounding knowledge

🤖 Agents & Skills

Specialized sub-processes, reusable modules

⚙️ Hooks, MCP & Config

Event automation, external tools, settings

💡 Best Practices

Prompting, mistakes to avoid, trust calibration

🎯 Daily Workflow

Practical integration, migration checklist

Foundations

What Is Claude Code?

An agentic CLI tool that lives in your terminal and operates directly on your codebase.

Not Autocomplete

It doesn't guess your next line — it understands and executes full tasks.

Conversation-Based

Describe what you want in natural language. Claude reads, plans, and implements.

Full Task Execution

Reads files, runs commands, writes code, creates commits — end to end.

$ claude
Claude> Refactor the auth module to use JWT tokens and add tests
 Reading src/auth/... (4 files)
 Implementing JWT auth provider
 Writing tests (12 test cases)
 All tests passing
Comparison

Why Claude Code?

| Feature | Copilot | Cursor | Claude Code |
|---|---|---|---|
| Approach | Inline suggestions | Chat + autocomplete | Full task execution |
| Context | Current file | Open files + index | Entire repo + tools |
| Customization | Limited | Rules files | CLAUDE.md + agents + hooks + MCP |
| Task Scope | Line/function | File-level edits | Multi-file orchestration |
| Runs Commands | No | Limited | Yes — builds, tests, git |

Key differentiator: the structured context system (CLAUDE.md, agents, skills, hooks) makes Claude learn your project over time.

Setup

Installation

Step 1: Install Node.js

PS> choco install nodejs-lts
PS> $env:NODE_TLS_REJECT_UNAUTHORIZED=0

Required prerequisite. Use Chocolatey (already available to Business users). Note: the TLS variable disables Node's certificate verification; set it only if the corporate proxy re-signs TLS traffic.

Step 2: Install Claude Code

PS> irm https://claude.ai/install.ps1 | iex
PS> claude --version
claude-code v2.x.x

Official install script. Works in PowerShell and WSL.

Config location: %USERPROFILE%\.claude\ (Windows) or ~/.claude/ (WSL).

Setup

First Launch

Configure your Bedrock connection, then launch. Two options:

Option A: PowerShell (Windows env vars)

[Environment]::SetEnvironmentVariable(
  "CLAUDE_CODE_USE_BEDROCK", "1", "User")
[Environment]::SetEnvironmentVariable(
  "AWS_REGION", "eu-central-1", "User")
[Environment]::SetEnvironmentVariable(
  "ANTHROPIC_MODEL",
  "eu.anthropic.claude-opus-4-6-v1", "User")
[Environment]::SetEnvironmentVariable(
  "ANTHROPIC_SMALL_FAST_MODEL",
  "eu.anthropic.claude-haiku-4-5-20251001-v1:0",
  "User")
[Environment]::SetEnvironmentVariable(
  "AWS_BEARER_TOKEN_BEDROCK",
  "YOUR_API_KEY", "User")

# Proxy — pick one:
[Environment]::SetEnvironmentVariable(
  "HTTPS_PROXY",
  "http://localhost:8888", "User")     # ProxyProxy
[Environment]::SetEnvironmentVariable(
  "HTTPS_PROXY",
  "http://185.46.212.88:9400", "User") # Zscaler direct

Stored in Windows registry. Available to all apps. Restart terminal after.
Use localhost:8888 if you run ProxyProxy, otherwise the Zscaler proxy directly.

Option B: settings.json (Claude Code only)

// ~/.claude/settings.json
{
  "env": {
    "CLAUDE_CODE_USE_BEDROCK": "1",
    "AWS_REGION": "eu-central-1",
    "ANTHROPIC_MODEL":
      "eu.anthropic.claude-opus-4-6-v1",
    "ANTHROPIC_SMALL_FAST_MODEL":
      "eu.anthropic.claude-haiku-4-5-20251001-v1:0",
    "CLAUDE_CODE_MAX_OUTPUT_TOKENS": "8192",
    "MAX_THINKING_TOKENS": "8192",
    "AWS_BEARER_TOKEN_BEDROCK": "YOUR_API_KEY",
    "HTTPS_PROXY": "http://localhost:8888"
  }
}

Scoped to Claude Code. No restart needed. Easy to share across team.
Proxy: use localhost:8888 (ProxyProxy) or 185.46.212.88:9400 (Zscaler direct).

Then: cd your-project → claude → start describing tasks.

Hands-On

Exercise 1: Get Up and Running

1. Verify Node.js

PS> node --version
v22.x.x

2. Verify Claude Code

PS> claude --version
claude-code v2.x.x

3. Launch Your First Session

Navigate to a project folder and run claude. Ask it: What does this project do?

4. Your First Task

Ask Claude: List all TODO comments in this project and see how it searches your codebase.

Take 5 minutes. Raise your hand if you hit issues.

Core Concept

The Interaction Loop

Describe
Analyze
Propose
Review
Decide
Verify
Commit

You Describe

Tell Claude what you need in natural language.

Claude Analyzes

Reads files, understands structure, forms a plan.

Claude Proposes

Shows you the changes it wants to make as diffs.

You Decide

Accept, reject, or ask for modifications. Then verify and commit.

Reference

Essential Commands

| Command | Action |
|---|---|
| /help | Show all available commands |
| /clear | Reset conversation — fresh context window |
| /compact | Summarize conversation to reclaim tokens |
| /status | Show context usage, model, session info |
| /exit | End the session |
| /plan | Enter read-only Plan Mode |
| /rewind | Undo the last Claude action |

All slash commands are available at any point in the conversation.

Reference

Quick Actions & Shortcuts

! Prefix

Run shell commands directly.
! npm test

@ Prefix

Reference files explicitly.
@src/auth.ts fix the bug

Ctrl+C

Cancel current operation without exiting the session.

Esc × 2

Quick rewind — undo last action.

Shift+Tab

Toggle between plan and code mode inline.

Up Arrow

Recall previous prompts for editing.

Hands-On

Exercise 2: Try the Controls

1. Check Status

Type /status to see your model, context usage, and session info.

2. Use @ Reference

Pick any file and ask: @README.md summarize this

3. Run a Shell Command

Type ! git log --oneline -5 to run a command without leaving Claude.

4. Try Plan Mode

Type /plan, then ask Claude to plan a small change. Notice it can only read, not write.

5. Clear Context

Type /clear to reset your conversation. Notice the counter resets.

6. Recall Prompt

Press Up Arrow to recall your previous prompt and re-edit it.

Take 5 minutes to explore. Get comfortable with the controls.

Safety

Permission Modes

Default

Asks before every file write and command. Recommended for beginners.

Auto-Accept Edits

Auto-approves file edits, still asks for commands. --auto-accept-edits

Plan Mode

Read-only. Claude can only read files and search — no writes at all.

Full Auto / Bypass

--dangerously-skip-permissions — No guardrails. CI/CD only. Never use interactively.

⚠ Progressive trust: start with Default, move to Auto-Accept as you build confidence.

Critical Concept

What Are Tokens?

Tokens are the fundamental unit of how LLMs read and generate text — not characters, not words.

Text → Tokens

Before processing, all text is split into tokens — small chunks like words, sub-words, or punctuation. Example: refactoring → ref + actor + ing

Rule of Thumb

1 token ≈ ¾ of a word in English. 100 tokens ≈ 75 words. Code is typically more token-dense than prose.

Why It Matters

Every interaction has a token budget (context window). Your prompts, Claude's responses, and all file reads consume tokens from this budget.

Cost & Speed

You pay per token (input + output). Larger context = slower responses and higher cost. Keeping context lean saves both.
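The rule of thumb above can be turned into a quick back-of-envelope check before pointing Claude at a large file. A sketch, assuming the common heuristic of roughly 4 characters per token (the real tokenizer is the only exact answer):

```shell
# Rough token estimate for a file: ~4 characters per token for English
# prose; dense code often runs somewhat heavier. Heuristic only.
estimate_tokens() {
  local chars
  chars=$(wc -c < "$1")
  echo $(( chars / 4 ))
}

# A 16,000-character file comes out at ~4,000 tokens:
printf 'x%.0s' $(seq 1 16000) > /tmp/sample.txt
estimate_tokens /tmp/sample.txt   # prints 4000
```

Handy before a @file reference: if the estimate runs to tens of thousands of tokens, point Claude at a specific section instead.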

Critical Concept

Token Economics

Every API call sends the entire conversation so far as input, plus gets new output back. Both cost money.

Input Tokens

Everything sent to the model: system prompt, CLAUDE.md, your messages, file contents, tool results, prior responses. Grows with every turn.

Output Tokens

What the model generates: responses, code, tool calls. 5× more expensive than input — e.g. Sonnet 4.6: $3/MTok in vs $15/MTok out.

Prompt Caching

Repeated context is cached — cache hits cost only $0.30/MTok (Sonnet) vs $3 base. That's a 90% saving.

Why context size drives cost

Turn 1: 5K in
Turn 5: 40K in
Turn 10: 100K in
Turn 20: 200K in

Input cost compounds every turn — /compact and /clear are your cost-saving tools.
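Because every turn resends the whole conversation, per-turn input cost scales with context size. A rough sketch at the Sonnet 4.6 input price quoted above ($3 per million tokens), deliberately ignoring prompt caching:

```shell
# Input cost of a single turn at $3 per million input tokens.
# Sketch only: real bills depend heavily on prompt caching
# (cache hits ~90% cheaper), which this ignores.
turn_cost() {
  awk -v tokens="$1" 'BEGIN { printf "%.2f\n", tokens / 1000000 * 3 }'
}

turn_cost 10000    # early turn:        prints 0.03
turn_cost 200000   # full 200K context: prints 0.60, on every turn
```

The same conversation at 200K tokens costs twenty times as much per turn as it did at 10K, which is exactly why /compact and /clear pay for themselves.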

Critical Concept

Context Management

The #1 most important concept in Claude Code. Master this or nothing else matters.

Context = Working Memory

Everything Claude knows: prompts, file contents, tool outputs, all in a ~200K token window.

It Is Finite

When context fills up, Claude loses earlier information. Performance degrades before the hard limit.

Your prompts + Responses + File reads + Command outputs = CONTEXT
         ~200,000 tokens  |  ~150K words  |  ~500 pages
Critical Concept

Context Zones

| Zone | Context Used | Action |
|---|---|---|
| Green | 0–50% | Full performance |
| Yellow | 50–75% | Consider /compact |
| Red | 75–90% | Run /compact now |
| Critical | 90%+ | /clear or start new session |

Use /status to check your current context usage at any time.

Critical Concept

Context Recovery Strategies

/compact

Summarizes conversation history. Preserves key decisions. Frees ~40-60% of tokens. Use often.

/clear

Complete reset. Fresh context window. Use when switching tasks or context is heavily polluted.

Targeted Approach

Start new sessions for distinct tasks. Reference specific files with @. Keep prompts focused.

Claude> /compact
 Compressed 147K tokens → 23K tokens
Key context preserved: auth refactor in progress, 8/12 tests passing
Critical Concept

What Consumes Context?

| Action | Token Cost | Tips |
|---|---|---|
| Small file read (<100 lines) | Low | Fine to read many small files |
| Large file read (500+ lines) | High | Use @file:line for specific sections |
| Shell command output | Medium | Pipe to head or grep to limit output |
| Search results (grep/glob) | High | Be specific with search patterns |
| Your prompts | Low | Longer is fine if it is precise |
| Claude responses | Medium | Ask for concise output when possible |

Rule of thumb: every file Claude reads costs roughly 1 token per 3–4 characters. A 200-line file is roughly 4-6K tokens.
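The piping tip from the table above, as a runnable sketch: cap a chatty command before its output lands in the context window.

```shell
# Cap tool output before it enters context. `seq` stands in for any
# chatty command (a verbose build, a large grep, a long git log).
chatty_command() { seq 1 200; }

chatty_command | head -5    # only the first 5 lines enter the conversation
chatty_command | wc -l      # or reduce the whole stream to one number
```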

Cost

Cost Awareness

| Model | Input / 1M tokens | Output / 1M tokens | Best For |
|---|---|---|---|
| Haiku 4.5 | $1.00 | $5.00 | Boilerplate, simple edits |
| Sonnet 4.6 | $3.00 | $15.00 | Features, daily coding |
| Opus 4.6 | $5.00 | $25.00 | Architecture, complex logic |

Typical Session

30-60 min focused work: $0.10 – $0.50

Heavy Session

Multi-hour deep refactor: $1.00 – $5.00

Cost

Cost Optimization Strategies

1. Be Specific

Precise prompts = fewer iterations = less context used = lower cost.

2. Use /compact

Regular compaction avoids hitting limits and reduces re-processing tokens.

3. Right Model

Use Haiku for boilerplate. Sonnet for features. Opus only when needed.

4. Limit MCP Servers

Each MCP server adds tool definitions to every request. Only enable what you need.

5. Batch Operations

Group related changes in one prompt instead of multiple small requests.

6. Fresh Sessions

Start new sessions for unrelated tasks instead of carrying stale context.

Hands-On

Exercise 3: Context in Action

1. Watch Context Grow

Run /status. Ask Claude to read a large file. Run /status again. See how token usage jumped.

2. Compact It

Run /compact. Then /status again. Notice the token count dropped while context was preserved.

3. Cost Experiment

Ask Claude a complex question involving multiple files. Check /status — observe input tokens accumulating with each turn.

4. Fresh Start

Run /clear to reset completely. Compare the token count. This is your most powerful cost-saving tool.

Take 5 minutes. Goal: internalize how context grows and how to manage it.

Workflow

Plan Mode

Use /plan for read-only exploration before making changes.

Allowed in Plan Mode

  • Read files and directories
  • Search codebase (grep, glob)
  • Analyze architecture
  • Generate implementation plans

Blocked in Plan Mode

  • Write or edit files
  • Run shell commands
  • Create commits
  • Any destructive operations
“80% of tasks should start in Plan Mode. Understand before you act.” — Boris Cherny, Anthropic
Workflow

Rewind — Your Safety Net

Made a wrong turn? Undo with increasing scope:

Level 1: Reject

When Claude proposes changes, simply say “no” or press n. Nothing is written.

Level 2: /rewind

Undo the last Claude action. Also: press Esc twice. Rolls back conversation and file changes.

Level 3: git restore

Nuclear option. Use git restore . or git stash to revert all uncommitted changes.

Claude> /rewind
 Reverted last action: edited src/auth.ts (38 lines changed)
Conversation rolled back to previous state
Workflow

Model Selection

| Model | Speed | Best For | When to Use |
|---|---|---|---|
| Haiku 4.5 | Fastest | Boilerplate, renaming, formatting | Simple mechanical tasks |
| Sonnet 4.6 | Balanced | Features, bug fixes, tests | Daily driver for most work |
| Opus 4.6 | Thorough | Architecture, complex refactors | High-stakes design decisions |

Our Bedrock Configuration

CLAUDE_CODE_USE_BEDROCK=1
AWS_REGION=eu-central-1
ANTHROPIC_MODEL=eu.anthropic.claude-opus-4-6-v1
ANTHROPIC_SMALL_FAST_MODEL=eu.anthropic.claude-haiku-4-5-20251001-v1:0

Opus 4.6 is the default model. Haiku 4.5 is used for fast sub-tasks. All traffic routes through AWS Bedrock EU (Frankfurt).

Memory

Memory Hierarchy

Three levels of CLAUDE.md — more specific always wins.

Global

~/.claude/CLAUDE.md

Your personal preferences across all projects. Coding style, preferred tools, language.

Project

/project/CLAUDE.md

Shared team rules. Checked into git. Build commands, architecture decisions, conventions.

Local

/project/.claude/CLAUDE.md

Your personal overrides for this project. Git-ignored. Experimental preferences.

Resolution order: Local > Project > Global. Team rules in Project, personal tweaks in Local.

Memory

Minimum Viable CLAUDE.md

Start small. You need exactly two things:

# My Project

## Commands
- Build: `npm run build`
- Test: `npm test`
- Lint: `npm run lint`

Start Here

Project name and build/test commands. That is enough to begin.

Add Rules When Needed

Only add a new rule when Claude makes the same mistake twice. No premature rules.

Let Claude Write It For You

Just ask: Generate a CLAUDE.md for this project (or run the built-in /init command) — Claude will analyze your repo structure, detect build tools, frameworks, and conventions, then produce a solid starting file you can refine.

Memory

CLAUDE.md as Compounding Memory

“Never correct Claude twice for the same mistake. The second time, add it to CLAUDE.md.” — Boris Cherny, Anthropic

Week 1

~5 rules: project name, commands, basic conventions.

Month 1

~20 rules: architecture patterns, naming conventions, test requirements.

Month 3

~50 rules: deep domain knowledge, edge cases, performance guidelines.

Your CLAUDE.md becomes a living, growing knowledge base. Claude gets better at your project every week.

Hands-On

Exercise 4: Your First CLAUDE.md

1. Generate It

Ask Claude: Generate a CLAUDE.md for this project. Let it analyze your repo and create a starting file.

2. Review & Customize

Open the generated CLAUDE.md. Does it have your build commands right? Your test runner? Edit if needed.

3. Add a Rule

Ask Claude to do something it gets wrong (e.g. wrong import style). Correct it, then add the rule to CLAUDE.md.

4. Verify It Sticks

Run /clear, start a new conversation, and repeat the task. Claude should now follow your rule.

Take 10 minutes. This is the single most impactful thing you can do for your productivity.

Configuration

The .claude/ Folder

.claude/
  CLAUDE.md            # Local memory (git-ignored)
  settings.json        # Team settings (checked in)
  settings.local.json  # Personal settings (git-ignored)
  agents/              # Specialized agent definitions
    reviewer.md
    debugger.md
  commands/            # Custom slash commands
    deploy.md
    review.md
  hooks/               # Event-driven automation scripts
    pre-edit.sh
  rules/               # Additional rule files
  skills/              # Reusable knowledge modules
Configuration

Settings & Permissions

settings.json

Team-wide settings. Checked into git. Defines allowed tools, MCP servers, default model.

settings.local.json

Personal overrides. Git-ignored. Your preferred model, extra MCP servers, custom permissions.

Progressive Permission Levels

Beginner

Default permissions. Approve everything. Learn the tool.

Intermediate

Auto-accept edits. Trust file changes but review commands.

Advanced

Allow specific commands (npm, git). Carefully scoped auto-permissions.

Agents

What Are Agents?

Specialized sub-processes with isolated context. Not role-playing — actual separate Claude instances.

Without Agents

One Claude session does everything: reads code, runs tests, reviews, writes docs. Context fills up fast. Quality drops.

With Agents

Main Claude delegates to focused experts. Reviewer agent checks code. Test agent writes tests. Each has clean context.

Key insight: agents have context isolation. They start fresh, do one job, return results. Your main context stays clean.

Agents

Creating Agents

# .claude/agents/reviewer.md
---
name: Code Reviewer
description: Reviews code changes for quality and correctness
model: sonnet
tools:
  - Read
  - Grep
  - Glob
---

You are a senior code reviewer. Analyze the provided changes for:
- Logic errors and edge cases
- Security vulnerabilities
- Performance issues
- Style consistency with project conventions

Be concise. Flag only real issues, not nitpicks.

Invoke with: /agent reviewer <task description>

Agents

Agent Examples

Code Reviewer

---
name: Reviewer
model: sonnet
tools: [Read, Grep]
---
Review for bugs,
security, performance.

Debugger

---
name: Debugger
model: opus
tools: [Read, Bash, Grep]
---
Investigate failures.
Read logs, form hypotheses,
verify with tests.

Architect

---
name: Architect
model: opus
tools: [Read, Glob, Grep]
---
Analyze codebase structure.
Propose design improvements.
Read-only analysis.

Start with 2-3 agents. Add more as you discover repeated patterns in your workflow.

Skills

What Are Skills?

Reusable knowledge modules that Claude can invoke on demand.

How They Work

Skills are markdown files in .claude/skills/ that contain domain knowledge, patterns, and instructions. Claude loads them when relevant.

When to Use

Shared patterns across projects, complex domain logic, code generation templates, API integration guides.

⚠ Reliability Warning

Skills have ~56% reliability for being invoked when needed. Never put critical rules only in skills. Critical rules belong in CLAUDE.md where they are always loaded.

Plugin Spotlight

Frontend Design Plugin

Generates distinctive, production-grade frontend interfaces — no generic AI aesthetics.

Install

claude plugin add \
  frontend-design

One command. Adds the skill to your Claude Code instance.

What It Does

Bold typography, distinctive color palettes, high-impact animations, responsive layouts. Claude picks an aesthetic direction automatically.

When to Use

Landing pages, dashboards, UI components, settings panels — any frontend work where visual quality matters.

Just describe what you need

Claude> Create a dashboard for a music streaming app
Claude> Build a landing page for an AI security startup
Claude> Design a settings panel with dark mode
Plugin Spotlight

GSD — Get Shit Done

A meta-prompting system that turns vague ideas into shipping code through structured research, planning, and verification.

The Problem It Solves

Context rot — quality degrades as Claude's context fills up during long development sessions. GSD keeps each task in a fresh 200K context.

Install

npx get-shit-done-cc@latest

Interactive setup: choose Claude Code runtime and global vs local install.

When to Use

Building features from scratch, large multi-phase projects, architectural decisions requiring research, anything too big for a single Claude session.

How It Works

Spawns specialized sub-agents for research, planning, and execution. Each gets fresh context. Atomic Git commits per task. Parallel wave execution.

Plugin Spotlight

GSD Workflow

Init
Discuss
Plan
Execute
Verify
Complete
| Command | Purpose |
|---|---|
| /gsd:new-project | Initialize — questions, research, requirements, roadmap |
| /gsd:discuss-phase N | Shape implementation approach before planning |
| /gsd:plan-phase N | Research domain, create atomic task plans, verify against goals |
| /gsd:execute-phase N | Build in parallel waves — fresh context per task |
| /gsd:verify-work N | Manual UAT with automated debugging |
| /gsd:quick | Ad-hoc tasks (bug fixes, small features) — skip full ceremony |
| /gsd:progress | Check current status and next action |
For existing codebases: run /gsd:map-codebase first so GSD learns your patterns before planning.

Hands-On

Exercise 5: Plugins & Agents

1. Install Frontend Design

claude plugin add frontend-design

Then ask: Create a simple landing page for a coffee shop

2. Install GSD

npx get-shit-done-cc@latest

Then try: /gsd:help to see all available commands.

3. Try GSD Quick Mode

Run /gsd:quick and give it a small task like adding a utility function or fixing a comment.

4. Explore Sub-Agents

Ask Claude a complex question and watch the status bar — notice when it spawns sub-agents to research in parallel.

Take 10 minutes. These tools multiply your productivity significantly.

Automation

Custom Slash Commands

Create repeatable workflows in .claude/commands/

# .claude/commands/review.md

Review the current git diff for:
1. Logic errors and edge cases
2. Missing error handling
3. Security vulnerabilities
4. Test coverage gaps

Format as a checklist with severity levels.

Invoke

/review — runs the command with full context.

Arguments

Use $ARGUMENTS placeholder for dynamic input: /review src/auth.ts
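As a hypothetical example (the file name and steps are illustrative, not from this deck), a parameterized command using $ARGUMENTS might look like:

```markdown
# .claude/commands/test-file.md

Run the tests in $ARGUMENTS:
1. Execute only that test file
2. Summarize any failures with file:line references
3. Propose a fix for the first failure
```

Invoking /test-file src/auth.test.ts substitutes the path wherever $ARGUMENTS appears.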

Advanced

Hooks — Event-Driven Automation

Shell scripts triggered by Claude Code events. Zero token cost.

PreToolUse

Runs before Claude uses a tool. Can block dangerous operations.

PostToolUse

Runs after a tool completes. Auto-format, auto-lint, inject context.

UserPromptSubmit

Runs when you send a message. Inject git status, branch info, metadata.

# settings.json (each matcher nests a "hooks" array)
"hooks": {
  "PostToolUse": [{
    "matcher": "Edit",
    "hooks": [{
      "type": "command",
      "command": "npx prettier --write \"$FILEPATH\""
    }]
  }]
}
Advanced

Hook Examples

Auto-Format on Save

"PostToolUse": [{
  "matcher": "Edit",
  "command": "prettier --write $FILEPATH"
}]

Security Check

"PreToolUse": [{
  "matcher": "Bash",
  "command": "check-no-secrets.sh"
}]

Block commands that might leak secrets.

Git Context Injection

"UserPromptSubmit": [{
  "command": "git-context.sh"
}]

Auto-inject branch name and recent commits into every prompt.

Hooks run as shell scripts outside the LLM — they consume zero tokens and execute instantly.
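A sketch of what the check-no-secrets.sh guard referenced above might contain. Assumptions, simplified for illustration: the hook hands the pending command text to the script, and a non-zero exit blocks the tool call.

```shell
# Hypothetical PreToolUse guard: refuse commands that look like they
# embed credentials. Pattern and exit convention are illustrative.
block_if_secret() {
  if printf '%s' "$1" | grep -Eiq '(api[_-]?key|secret|password)[[:space:]]*[:=]'; then
    echo "Blocked: input appears to reference a secret" >&2
    return 2
  fi
  return 0
}

block_if_secret 'export API_KEY=abc123' || echo blocked   # prints "blocked"
block_if_secret 'git status' && echo allowed              # prints "allowed"
```

The real hook interface passes event details as JSON; a production script would parse that input rather than take an argument.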

Advanced

MCP Servers

Model Context Protocol — external tool integration for Claude Code.

Serena

Code-aware navigation. Understands symbols, references, types.

Context7

Up-to-date library documentation. No hallucinated APIs.

Playwright

Browser automation. Test UIs, scrape data, interact with web apps.

PostgreSQL

Direct database access. Query, inspect schema, debug data issues.

# settings.json
"mcpServers": {
  "context7": { "command": "npx", "args": ["-y", "@upstash/context7-mcp"] }
}

Each MCP server adds tool definitions to context. Only enable servers you actively use.

Configuration

Configuration Decision Tree

Where should this rule or behavior live?

CLAUDE.md

Needed every session?
Project conventions, build commands, style rules.

Hook

Should auto-trigger on events?
Format on save, inject context, security gates.

MCP Server

Needs external access?
Databases, APIs, browsers, documentation.

Command

Repeatable workflow?
Review, deploy, test suites, migrations.

Agent

Needs isolated context?
Code review, debugging, architecture analysis.

Skill

Shared knowledge?
Cross-project patterns, domain expertise.

Prompting

Structured Prompting with XML Tags

For complex requests, use XML tags to organize your prompt:

<instruction>
Refactor the payment processing module to support Stripe webhooks.
</instruction>

<context>
- Current payment flow is in src/payments/
- We use Stripe API v2023-10
- Webhook endpoint should be POST /api/webhooks/stripe
</context>

<constraints>
- Must be backward compatible with existing orders
- Add idempotency checks for duplicate events
- Include comprehensive error handling
</constraints>

<output>
Create the webhook handler, add tests, update API routes.
</output>
Prompting

Effective Prompting

Weak Prompt

“Fix the auth bug”

Strong Prompt

“Fix the JWT validation in src/auth/middleware.ts — tokens with expired refresh claims are not returning 401. Add test coverage.”

The WHAT / WHERE / HOW / VERIFY Pattern

WHAT

What needs to happen? Be specific about the outcome.

WHERE

Which files, modules, functions? Point Claude to the right place.

HOW

Constraints, patterns, conventions to follow.

VERIFY

How to confirm success? Tests, commands, expected behavior.

Key Insight

The AI Feedback Loop

AI-assisted development is only as fast as your build → test → verify cycle. If Claude can't see the result, it can't improve it.

Write
Build
Test
Verify
Fix
Repeat

Fast Cycle = High Quality

If build + test takes 5 seconds, Claude can iterate 10+ times in a session. It reads errors, fixes, and retries autonomously.

Slow Cycle = Guessing

If build takes 2+ minutes, Claude can only try 2–3 times before context fills up. It starts guessing instead of verifying.

No Cycle = Blind

If Claude can't run your tests at all, it's writing code without feedback. Quality drops dramatically.

The #1 investment for AI-assisted dev: make your feedback loop as fast as possible.

Capabilities

Claude Does More Than Write Code

Claude's verification loop is its superpower. It doesn't just generate — it runs, reads output, and fixes.

Runs Your Build

mvn compile, gradle build — reads compiler errors, fixes them, rebuilds. Iterates until green.

Runs Your Tests

Executes mvn test, parses failures, fixes the code or the test, reruns. Full TDD loop.

Reads Logs & Stack Traces

Paste a stack trace or point to a log file. Claude traces the root cause through your codebase.

Refactors Safely

Rename, extract, move — across files. Then runs tests to verify nothing broke. Safer than IDE refactoring for complex changes.

Writes & Fixes Tests

Generates unit tests, integration tests, edge cases. If a test fails, it determines whether the test or the code is wrong.

Reviews & Explains

Code reviews, architecture explanations, onboarding docs. Ask Explain the request lifecycle in this Spring Boot app.

For Java Devs

Optimizing for AI — Spring Boot Tips

Speed Up Your Feedback Loop

  • Use Spring Boot DevTools for hot reload
  • Run single test classes, not the full suite:
    mvn test -pl module -Dtest=MyTest
  • Use Gradle with build cache for faster incremental builds
  • Keep modules small — smaller scope = faster compile + test

Put This in Your CLAUDE.md

## Commands
- Build: `mvn compile -q`
- Test: `mvn test -q`
- Single test: `mvn test -Dtest=$CLASS`
- Run: `mvn spring-boot:run`

## Conventions
- Java 17+, Spring Boot 3.x
- Use constructor injection
- Tests with JUnit 5 + Mockito
- Always run tests after changes

Think Small for AI

  • Microservices > monolith for AI dev — faster builds, clearer scope
  • Standalone scripts (Python, Node) for prototyping — instant feedback
  • Consider separate modules for logic Claude works on most

Consider Fast Languages for New Tools

  • Python — instant feedback, no compile, huge AI ecosystem
  • TypeScript/Node — fast build, great for APIs & CLIs
  • Go — compiles in seconds, single binary, strong stdlib
  • Keep Java/Gosu for your core platform — use fast languages for tooling, scripts, prototypes
Avoid

8 Beginner Mistakes

1. Skipping Plan

Jumping straight to coding without understanding the problem.

2. Ignoring Context

Letting context fill up until Claude forgets what you are building.

3. Vague Prompts

“Fix it” instead of specifying what, where, and how to verify.

4. Accepting Blindly

Approving every change without reading the diffs.

5. No Version Control

Making changes without committing checkpoints.

6. Broad Permissions

Starting with full auto mode before learning the tool.

7. Mixing Tasks

Combining unrelated work in one session, polluting context.

8. Chatbot Mentality

Having a conversation instead of giving task-oriented instructions.

Essential

The 5 Golden Rules

1

Always Review Diffs

Read every change before accepting. Claude is capable but not infallible.

2

Use /compact Proactively

Do not wait for context warnings. Compact after completing each sub-task.

3

Be Specific in Prompts

Include what, where, how, and verification criteria in every request.

4

Start with Plan Mode

Explore and understand before making changes. /plan is your best friend.

5

Build Your CLAUDE.md

Invest in your memory file. It compounds over time and makes Claude smarter for your project.

Hands-On

Exercise 6: Write a Great Prompt

Bad Prompt

Fix the tests

Good Prompt

Fix the failing unit test in
src/utils/date.test.ts — the
formatDate function returns UTC
but the test expects local time.
Keep the existing test structure.

Your Turn

Pick a real task in your project. Write a prompt that includes: (1) what to change, (2) which files, (3) expected behavior, (4) constraints. Run it and compare the result to a vague version of the same request.

Take 5 minutes. The difference in output quality will surprise you.

Mindset

Trust Calibration

Not all code changes deserve the same level of scrutiny.

| Change Type | Review Level | Approach |
|---|---|---|
| Boilerplate / scaffolding | Skim | Quick check that structure is right |
| Standard features | Review | Read the diff, verify logic |
| Business logic | Deep Review | Line-by-line, check edge cases |
| Security / auth / payments | Maximum | Deep review + security tools + tests |
“Treat AI like a capable junior developer: talented and fast, but needs oversight on critical paths.”
Practice

Daily Workflow

Morning Plan
Code with Claude
Review
Commit
Handoff

Morning Plan

Start with /plan. Review what needs to be done. Break into discrete tasks.

Code with Claude

One task per session. Be specific. Use /compact between sub-tasks.

Review & Commit

Review diffs carefully. Run tests. Commit after each completed feature.

End of Day Handoff

Update CLAUDE.md with discoveries. Note open issues. Clean context for tomorrow.

Practice

Migration Checklist

Week 1: Learn

  • Install and authenticate
  • Complete 3 small tasks
  • Practice /plan and /compact
  • Create minimal CLAUDE.md
  • Use default permissions only

Week 2: Establish

  • Define daily workflow
  • Add 5-10 CLAUDE.md rules
  • Create first custom command
  • Try auto-accept edits
  • Experiment with model selection

Week 3-4: Advance

  • Create specialized agents
  • Set up hooks for auto-formatting
  • Configure MCP servers
  • Mentor a colleague
  • Measure productivity gains
Advanced

Context as a System

The shift from chatbot to context system. Four layers working together:

Layer 1: CLAUDE.md

Always Loaded

Persistent memory. Project rules, conventions, commands. The foundation.

Layer 2: Skills

On Demand

Domain knowledge modules. Loaded when relevant to the current task.

Layer 3: Hooks

Automatic

Event-driven scripts. Zero tokens. Auto-format, inject context, enforce rules.

Layer 4: Memory

Compounding

Every correction becomes a rule. The system gets smarter over weeks and months.

Advanced

Multi-Instance Workflows

You are the main thread. Orchestrate 2-5 Claude instances on independent tasks.

Terminal 1

claude — Backend API refactor
Working on src/api/...

Terminal 2

claude — Frontend components
Working on src/components/...

Terminal 3

claude — Test suite expansion
Working on tests/...

You (Orchestrator)

  • Review outputs from each instance
  • Resolve merge conflicts
  • Ensure consistency across changes
  • Make architectural decisions

Works best when tasks touch different files. Merge carefully when they overlap.

Summary

Key Takeaways

1

Context is everything. Monitor it, manage it, compact it. This is the #1 skill.

2

Plan before you code. Start with /plan. Understand before you change.

3

Build CLAUDE.md over time. Never correct Claude twice for the same mistake.

4

Always review diffs. Trust but verify. AI is a capable junior developer.

5

Be specific in prompts. WHAT + WHERE + HOW + VERIFY = great results.

Resources

Resources & Further Reading

Official Documentation

  • docs.anthropic.com/claude-code
  • Claude Code GitHub repository
  • Anthropic Cookbook examples

Community

  • Anthropic Discord — #claude-code channel
  • GitHub Discussions
  • Community CLAUDE.md examples

Key Blog Posts

  • “Claude Code Best Practices” — Anthropic
  • “How I Use Claude Code” — Boris Cherny
  • “Context Management Guide”

Enterprise

  • Enterprise deployment guide
  • AWS Bedrock integration
  • Usage monitoring & cost controls
Enterprise Training Program

Thank You

Questions, discussion, and next steps

$ claude
Claude> Ready to help. What are we building?  

Built with care for enterprise teams adopting AI-powered development.